Feature #10241

Feature #10231: ERA5 meteo

fast track ERA5 pre-processing

Added by Philippe Le Sager about 1 year ago. Updated about 2 months ago.

Status: In Progress
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: 07/02/2018
Due date:
% Done: 80%

Description

Testing / processing ERA5 meteo at 1x1 / 137 layers / 3-hourly, back to 1990.

comp_with_observations_overall.pdf (347 KB) Philippe Le Sager, 06/14/2019 10:20 AM

WINTER_2006.pdf (45.3 MB) Philippe Le Sager, 06/14/2019 10:27 AM

O3_sondes_comp.pdf (138 KB) Philippe Le Sager, 06/24/2019 03:56 PM

SUMMER_2006.pdf (45.4 MB) Philippe Le Sager, 06/25/2019 08:57 AM

eraI-vs-era5-comp_with_observations_overall.pdf (355 KB) Philippe Le Sager, 06/26/2019 04:31 PM

eraI-vs-era5-comp_with_observations_satellite.pdf (6.25 MB) Philippe Le Sager, 06/26/2019 04:31 PM

eraI-vs-era5-SUMMER_2006.pdf (45.4 MB) Philippe Le Sager, 06/26/2019 04:31 PM

eraI-vs-era5-WINTER_2006.pdf (45.3 MB) Philippe Le Sager, 06/26/2019 04:31 PM

eraI-vs-era5-budget_comparison_2006.txt (22.3 KB) Philippe Le Sager, 06/27/2019 07:23 AM

History

#1 Updated by Arjo Segers 3 months ago

  • Status changed from New to In Progress

Processed L137 on 1x1 for 1989-2018.
Data available in:
/nlh/TM/meteo-nc/ec/ea/
To be tested in TM simulation with full chemistry.

#2 Updated by Philippe Le Sager 3 months ago

  • % Done changed from 0 to 50

I made a few changes to get the code working (see r1019 to r1023). I can now run the full chemistry (CB05+M7) with both ERA-Interim and ERA5 using the ERA-5 branch. With ERA-Interim I get the same results as in the trunk (good, nothing is broken). Walltime for the run step of a 10-day simulation is:
  • 46 minutes with ERA-Interim
  • 54 minutes with ERA-5

but these numbers are highly variable on my system (even more so with short runs, due to the extra weight given to I/O). Longer runs are needed for more useful numbers.
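As a rough indication despite that variability: 54/46 ≈ 1.17, i.e. ERA5 is about 17% slower in this short test.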

I will start a benchmark run.

#3 Updated by Philippe Le Sager 3 months ago

Would it be possible to also retrieve and process the wind speed at 10m instead of (or in addition to) u10m and v10m? We use the latter only to compute the former (see issue #11771).
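The computation in question is just the vector magnitude; a minimal standalone sketch (not the model code, variable names are for illustration only):

program ws10m_example
  implicit none
  real :: u10m, v10m, ws10m
  u10m = 3.0   ! example eastward component  [m/s]
  v10m = 4.0   ! example northward component [m/s]
  ws10m = sqrt( u10m**2 + v10m**2 )   ! 10 m wind speed
  print *, 'ws10m [m/s] =', ws10m     ! -> 5.0
end program ws10m_example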

#4 Updated by Philippe Le Sager 3 months ago

Arjo, I saw you added a "restart.ignore" key. What is it for? I cannot run two or more chunks in a row anymore, I wonder if this is the source of the problem.

#5 Updated by Philippe Le Sager 3 months ago

Philippe Le Sager wrote:

...you added a "restart.ignore" key. What is it for? I cannot run two or more chunks in a row anymore, I wonder if this is the source of the problem.

Ok, found it. That new key defaults to True! Just need to make sure it defaults to False, since it is already set to T for ERA5 preprocessing.

After a closer look, this new key looks like a quick way of avoiding writing a save file (in case the code is not compiled with HDF4) when

restart.write : F

Correct?

This is ok for now, but we will have to rethink all the logic around restarts. Note that option 32 for istart now provides all the functionality of a save file but with a restart file: remapping and missing tracers are handled.
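For reference, a minimal rcfile sketch of how I would expect these keys to be set (key names as discussed above; the comment syntax and the exact istart value are illustrative):

! sketch, not a complete rcfile
restart.write   :  F    ! do not write an HDF4 save file
restart.ignore  :  F    ! proposed default; set to T only for the ERA5 pre-processing
istart          :  32   ! start from a restart file (remapping and missing tracers handled)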

#6 Updated by Arjo Segers 3 months ago

I think it was only introduced because I could not get the HDF library working correctly.

#7 Updated by Philippe Le Sager 2 months ago

  • % Done changed from 50 to 70

Spinup + benchmark runs with full chemistry (CB05+M7) have finished. First look at the performance:

  • 1y-spinup was 15% slower with ERA5 (spinup has no output)
  • 1y-benchmark was 9% slower with ERA5 (benchmarks have a lot of output)

A closer look at the profiling output shows differences in reading the met fields (in seconds, for a one-month run):

                     ERA5       ERA-In.    Explanation
tmm readfield 2D     608.29     147.89     more data: every 1 h for ERA5, 3 or 6 h for ERA-I
tmm readfield 3D     1060.52    887.86     same amount of data, but internally compressed in the case of ERA5
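As a back-of-the-envelope check of these numbers: 608.29/147.89 ≈ 4.1 for the 2D fields, consistent with reading 3 to 6 times more records, while 1060.52/887.86 ≈ 1.2 for the 3D fields, which is plausibly the cost of decompressing the ERA5 data on the fly.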

#8 Updated by Philippe Le Sager 2 months ago

The benchmark comparison between the two models for 2006 is available. While most of the comparisons look OK, there are a couple of serious issues that need to be resolved. Have a look at the O3 sondes comparisons in comp_with_observations_overall.pdf (pages 9-14). And why is the solar zenith angle different between the two runs (see p. 26 of WINTER_2006.pdf)?

#9 Updated by Arjo Segers 2 months ago

How exactly was the SZA written out? The internal timestepping of the model might be different; if the SZA is written out as an average over all timesteps instead of as instantaneous fields at regular times, then the result might differ between ei and ea.

#10 Updated by Philippe Le Sager 2 months ago

Yes indeed, SZA is weighted by the actual timestep:

phot_dat(region)%sza_av  = phot_dat(region)%sza_av  + float(ndyn)/float(ndyn_max) * sza
phot_dat(region)%nalb_av = phot_dat(region)%nalb_av + float(ndyn)/float(ndyn_max)

and sza_av/nalb_av is written to file. The differences are quite small, so it makes sense that they are due to the different time step. I am quite surprised by the pattern, but it is probably logical.
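To make the weighting explicit, a minimal standalone sketch (not the model code; the values are placeholders): the written field is the weight-normalized sum, so two runs with different sequences of dynamic timesteps (ndyn) can give slightly different means for the same output time.

program sza_weight_example
  implicit none
  real    :: sza_av, w_av, sza
  integer :: ndyn, ndyn_max
  sza_av   = 0.0
  w_av     = 0.0
  ndyn_max = 3600     ! base timestep [s], placeholder
  ndyn     = 1800     ! actual (possibly reduced) timestep [s], placeholder
  sza      = 60.0     ! example solar zenith angle [deg]
  ! one accumulation step, mirroring the lines quoted above:
  sza_av = sza_av + float(ndyn)/float(ndyn_max) * sza
  w_av   = w_av   + float(ndyn)/float(ndyn_max)
  print *, 'weighted mean SZA =', sza_av / w_av
end program sza_weight_example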

#11 Updated by Philippe Le Sager 2 months ago

I still haven't found anything in the code that could explain the difference between EI and EA. In the entire CB05 chemistry project, nothing differs between the two cases when running with a subset of 34 levels, since I do not use the MSR or O3DU options (both require reading/remapping MSR data).

I run the model on 34 levels in both cases (as Henk did before with OD data, which are also provided on 137 levels). The 34 levels are slightly different in the two cases, but not by much, since I followed this correspondence.

So I decided to examine the met fields actually used by the model by writing them out, and I will compare them when time allows.

#12 Updated by Philippe Le Sager about 2 months ago

Arjo pointed out to me that the 34 levels of ERA-Interim and of ERA5 are not the same. Looking closely at the benchmark code that plots the ozone profiles at various stations, I came across this:

; DISCLAIMER
; currently works only for 34 out of 60 levels

The A and B pressure coefficients were hardcoded. Since the 34-level selections of ERA-Interim and ERA5 differ, this explains the problem in the O3 plots. I've modified the code to read the hybrid coefficients from auxiliary files, so that it can compare runs with different levels (including a different number of levels). You can look at the updated comparison in the O3_sondes_comp.pdf file. Most of the time ERA-Interim and ERA5 give the same results. There are a few occasions where ERA5 gives better results, most notably over Hong Kong in JJA.
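For illustration, a minimal sketch of the fixed approach (in Fortran here rather than the actual plotting code; the coefficient values are placeholders, not the ERA tables): half-level pressures follow from the A/B coefficients and the surface pressure, and full levels from averaging the bounding half levels, so runs on different level sets end up on a common pressure axis.

program hybrid_levels_example
  implicit none
  integer, parameter :: nlev = 3
  real    :: a(nlev+1), b(nlev+1)       ! hybrid coefficients at half levels [Pa], [-]
  real    :: ps, phalf(nlev+1), pfull(nlev)
  integer :: k
  a  = (/ 0.0, 2000.0, 10000.0, 0.0 /)  ! placeholder values, top to surface
  b  = (/ 0.0, 0.05,   0.30,    1.0 /)
  ps = 101325.0                         ! surface pressure [Pa]
  do k = 1, nlev+1
     phalf(k) = a(k) + b(k) * ps        ! half-level pressure
  end do
  do k = 1, nlev
     pfull(k) = 0.5 * ( phalf(k) + phalf(k+1) )   ! full-level pressure
  end do
  print *, pfull
end program hybrid_levels_example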

#13 Updated by Philippe Le Sager about 2 months ago

The annual cycle of aggregated NO2 columns is compared to OMI retrievals on the last page of comp_with_observations_overall.pdf. Large differences are found in some regions (the Tropics, Africa, and South America), while the Northern Hemisphere shows little change from ERA-I to ERA5. The seasonal means (pp. 159-160 of WINTER_2006.pdf and SUMMER_2006.pdf), particularly the summer difference at 500 hPa, also show the larger Southern Hemisphere NO2 with ERA5.

#14 Updated by Philippe Le Sager about 2 months ago

I've rerun the ERA5 benchmark with a different selection of 34 levels out of the original 137. This new selection was proposed by Arjo and gives levels that are closer to those selected for ERA-Interim. The results are quite similar to those posted so far. I have attached these final results: eraI-vs-era5-WINTER_2006.pdf, eraI-vs-era5-SUMMER_2006.pdf, eraI-vs-era5-comp_with_observations_satellite.pdf, and eraI-vs-era5-comp_with_observations_overall.pdf.

Let us know if you have any concerns or comments. If not, the code will be merged into the trunk.
