Feature #10231: ERA5 meteo
fast track ERA5 pre-processing
Testing / processing of ERA5 meteo at 1x1 / 137 layers / 3-hourly, back to 1990.
#2 Updated by Philippe Le Sager 6 months ago
- % Done changed from 0 to 50
- 46 minutes with ERA-Interim
- 54 minutes with ERA-5
but these are highly variable on my system (even more so with short runs, due to the extra weight given to I/O). Longer runs are needed for more useful numbers.
I will start a benchmark run.
#5 Updated by Philippe Le Sager 6 months ago
Philippe Le Sager wrote:
...you added a "restart.ignore" key. What is it for? I cannot run two or more chunks in a row anymore, I wonder if this is the source of the problem.
Ok, found it. That new key defaults to True! Just need to make sure it defaults to False, since it is already set to T for ERA5 preprocessing.
After a closer look, this new key looks like a quick way of avoiding writing the save file (in case the model is not compiled with HDF4) when restart.write : F. Correct?
This is ok for now, but we'll have to rethink the whole logic around restarts. Note that option 32 for istart now provides all the functionality of a save file but with a restart file: remapping and missing tracers are handled.
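If that reading is right, the intended behaviour can be sketched as follows. This is a minimal Python sketch, not the actual TM5 code: only the rcfile keys restart.write and restart.ignore come from this thread; the function name and the dict-based rcfile access are illustrative assumptions.

```python
# Hypothetical sketch of the save-file decision discussed above.
# Only the keys "restart.write" and "restart.ignore" come from the thread.
def should_write_save_file(rcf):
    write = rcf.get("restart.write", False)
    # "restart.ignore" must default to False: with a True default,
    # running two or more chunks in a row breaks.
    ignore = rcf.get("restart.ignore", False)
    return write and not ignore
```

With these defaults, a run that sets only restart.write : T still writes its save file, while the ERA5 preprocessing can explicitly set restart.ignore : T to skip it.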
#7 Updated by Philippe Le Sager 6 months ago
- % Done changed from 50 to 70
Spinup + benchmark runs with full chemistry (CB05+M7) have finished. First look at the performance:
- 1y-spinup was 15% slower with ERA5 (spinup has no output)
- 1y-benchmark was 9% slower with ERA5 (benchmarks have a lot of output)
A closer look at the profiling output shows differences in reading the met fields (timings in seconds, for a one-month run):

| |ERA5|ERA-Interim| |
|tmm readfield 2D|608.29|147.89|more data: every 1 h for ERA5, 3 or 6 h for ERA-I|
|tmm readfield 3D|1060.52|887.86|same amount of data, but internally compressed in the case of ERA5|
#8 Updated by Philippe Le Sager 6 months ago
- File comp_with_observations_overall.pdf added
- File WINTER_2006.pdf added
- % Done changed from 70 to 40
The benchmark comparison between the two models for 2006 is available. While most of the comparisons look OK, there are a couple of serious issues that need to be resolved. Have a look at the O3-sonde comparisons in comp_with_observations_overall.pdf (pages 9-14). And why is the solar zenith angle different between the two runs (see p. 26 of WINTER_2006.pdf)?
#10 Updated by Philippe Le Sager 6 months ago
Yes indeed, SZA is weighted by the actual timestep:
phot_dat(region)%sza_av  = phot_dat(region)%sza_av  + float(ndyn)/float(ndyn_max) * sza
phot_dat(region)%nalb_av = phot_dat(region)%nalb_av + float(ndyn)/float(ndyn_max)
sza_av/nalb_av is written to file. OK, the differences are quite small, so it makes sense that they are due to the different time step. Quite surprised by the pattern... probably logical though.
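The effect can be illustrated with a minimal Python sketch of the same accumulation. The variable names mirror the Fortran above; the normalisation by the accumulated weight is an assumption about what happens at write-out:

```python
def timestep_weighted_mean(values, ndyn_steps, ndyn_max):
    """Accumulate sum(w_i * v_i) with w_i = ndyn_i / ndyn_max,
    then normalise by the accumulated weight (cf. sza_av above)."""
    acc = 0.0
    wacc = 0.0
    for v, ndyn in zip(values, ndyn_steps):
        w = ndyn / ndyn_max
        acc += w * v
        wacc += w
    return acc / wacc
```

Two runs sampling the same diurnal cycle with different ndyn sequences weight the samples differently, so small differences in the averaged SZA between the EI and EA runs are expected.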
#11 Updated by Philippe Le Sager 6 months ago
I still haven't found anything in the code that could explain the difference between EI and EA. In the entire CB05 chemistry project, nothing differs between the two cases when running with a subset of 34 levels, since I do not use the MSR or O3DU options (both options require reading/remapping MSR data).
I run the model on 34 levels in both cases (as Henk did before with OD data, which are also provided on 137 levels). The 34 levels are slightly different in the two cases, but not by much, since I followed this correspondence.
So I decided to examine the met fields themselves, by writing them out, and will compare them when time allows.
#12 Updated by Philippe Le Sager 6 months ago
- File O3_sondes_comp.pdf added
Arjo pointed out to me that the 34 levels of ERA-Interim and of ERA5 are not the same. Looking closely at the benchmark code that plots the ozone profiles at various stations, I came across this:
; DISCLAIMER
; currently works only for 34 out of 60 levels
The A and B pressure coefficients were hardcoded. Since there are differences between the 34 levels of ERA-Interim and those of ERA5, this explains the problem in the O3 plots. I've modified the code to read the hybrid coefficients from auxiliary files, so that it can compare runs with different levels (including a different number of levels). You can look at the updated comparison in the O3_sondes_comp.pdf file. Most of the time ERA-Interim and ERA5 give the same results. There are very few occasions where ERA5 gives better results, particularly over Hong Kong in JJA.
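For reference, the A's and B's encode the standard ECMWF hybrid-coordinate relation: the pressure at half level k is p_k = A_k + B_k * p_s, with p_s the surface pressure. A minimal Python sketch of what the comparison code now computes from the coefficients it reads (the coefficient values in the usage below are illustrative, not the actual ERA level definitions):

```python
def half_level_pressures(a, b, ps):
    """Pressure (Pa) at model half levels from hybrid coefficients:
    p_k = A_k + B_k * ps.  Reading A and B from an auxiliary file per
    run, instead of hardcoding them, is what lets the plotting code
    compare runs with different (numbers of) levels."""
    return [a_k + b_k * ps for a_k, b_k in zip(a, b)]
```

For example, a level with A = 0 and B = 1 sits at the surface pressure, while a level with B = 0 sits at the fixed pressure A regardless of p_s.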
#13 Updated by Philippe Le Sager 6 months ago
- File SUMMER_2006.pdf added
- % Done changed from 40 to 60
The annual cycle of aggregated NO2 columns is compared to OMI retrievals on the last page of comp_with_observations_overall.pdf. Large differences are found in some regions (the Tropics, Africa, and South America), while the Northern Hemisphere shows little change from ERA-I to ERA5. The seasonal means (p. 159-160 of WINTER_2006.pdf and SUMMER_2006.pdf), particularly the summer difference at 500 hPa, also show the larger Southern Hemisphere NO2 with ERA5.
#14 Updated by Philippe Le Sager 6 months ago
- File eraI-vs-era5-comp_with_observations_overall.pdf added
- File eraI-vs-era5-comp_with_observations_satellite.pdf added
- File eraI-vs-era5-SUMMER_2006.pdf added
- File eraI-vs-era5-WINTER_2006.pdf added
- % Done changed from 60 to 80
I've rerun the ERA5 benchmark with a different selection of 34 levels out of the 137 original ones. This new selection was proposed by Arjo and gives levels that are closer to those selected for ERA-Interim. The results are quite similar to those posted so far. I have attached these final results: eraI-vs-era5-WINTER_2006.pdf, eraI-vs-era5-SUMMER_2006.pdf, eraI-vs-era5-comp_with_observations_satellite.pdf, and eraI-vs-era5-comp_with_observations_overall.pdf.
Let us know if you have any concerns or comments. If not, the code will be merged into the trunk.