The OASIS Coupler Forum


OASIS principles


Posted by Anonymous at June 15 2018

Dear OASIS users,

I am a total newbie to this coupler, so before I start implementing it for my case (coupling a regional climate model to an air quality model) I would like to ask some questions about the principles on which OASIS works.

Let's say I have two models represented by two executables running on "m" and "n" CPU units, where m and n are different. Let's assume, for simplicity (though actually this will most often be my case), that the grids are the same for both models, so no interpolation is needed when exchanging the data.

My questions:

1) If n and m are different, the computational domain is split into pieces differently in the two models (executables). How does OASIS ensure in this case that the right portion of the data is passed to the receiving model? Does it first collect all the pieces from all the MPI processes of one model and pass the whole field (no interpolation needed) to the other model, with MPI then distributing the data to the individual processes of the other model? Or is it OASIS that takes care of distributing the data among the processes?

2) As I said, in my case no interpolation will be needed, as the domains match exactly. Are there steps that can be skipped in this case while setting up OASIS and modifying the individual models' code? In fact, if no interpolation is done, then no information about the lat/lon coordinates of the grid points is needed at all. How does OASIS deal with this special case?

Thanks in advance for any clarification

Peter

Posted by Anonymous at June 16 2018

Hi Peter, 

1) In a pure MPI coupled model, OASIS (which is a library linked to the models) manages both the decomposition of the models over the MPI processes and the parallel remapping between different grids. If you have the same grid with different decompositions for your two coupled models, no remapping will be performed, and this will be specified in the namcouple, the configuration file of OASIS; but OASIS will still manage the parallel exchanges of data between the models.

You can read the paper about OASIS3-MCT (https://oasis.cerfacs.fr/wp-content/uploads/sites/114/2021/08/GLOBC_ARTICLE_OASIS3-MCT_gmd-10-3297-2017.pdf) to get an overview of the coupler.

The decomposition of the grid is defined locally on each MPI process of each model by calling the routine oasis_def_partition:

CALL oasis_def_partition (il_part_id, ig_paral, ierror, isize, name)

The partition is usually apple (1D decomposition) or box (2D decomposition); see the User Guide https://oasis.cerfacs.fr/wp-content/uploads/sites/114/2021/02/GLOBC_TR_oasis3mct_UserGuide_4.0.pdf, section 2.2.3, and the sketch below.
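To make this concrete, here is a minimal sketch of how each process could declare an apple partition over an nx x ny global grid split into contiguous bands of rows, following the ig_paral conventions of section 2.2.3 of the User Guide (nx, ny, my_rank and n_procs are hypothetical variables of the host model, not OASIS names):

integer :: il_part_id, ierror
integer :: ig_paral(3)
integer :: local_ny

local_ny    = ny / n_procs              ! rows owned by this process
ig_paral(1) = 1                         ! 1 = apple (1D) partition
ig_paral(2) = my_rank * local_ny * nx   ! global offset of the first local point
ig_paral(3) = local_ny * nx             ! number of local points

CALL oasis_def_partition (il_part_id, ig_paral, ierror)

For a box (2D) partition, ig_paral would instead have 5 entries: (/ 2, global offset of the upper-left corner, local extent in x, local extent in y, global extent in x /).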

Then the decomposition patterns are made known to all the MPI processes during the end-of-definition phase (oasis_enddef), and when the coupling fields are exchanged, this is done in parallel following these patterns.

2) You do not need any grid or remapping-file information, except the dimensions of the grids to define the partitions, and no remapping transformations will be defined in the namcouple, unless you want to use the CONSERV option to redistribute energy fluxes after the coupling exchanges (see the User Guide https://oasis.cerfacs.fr/wp-content/uploads/sites/114/2021/02/GLOBC_TR_oasis3mct_UserGuide_4.0.pdf, section 4.4).
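As an illustration of how little is needed in this case, here is a hedged sketch of the whole definition phase of one model. The component and field names ('regcm', 'FLD_SEND') and the decomposition variables are made up for the example; the point is that there are no grid-writing calls (oasis_write_grid and friends), only the partition and variable definitions:

USE mod_oasis
integer :: compid, il_part_id, var_id, ierror
integer :: my_offset, my_size            ! set from the model's own decomposition
integer :: ig_paral(3), var_nodims(2), var_actual_shape(4)

CALL oasis_init_comp (compid, 'regcm', ierror)

ig_paral = (/ 1, my_offset, my_size /)   ! apple partition, as above
CALL oasis_def_partition (il_part_id, ig_paral, ierror)

var_nodims(1) = 2        ! rank of the coupling field array
var_nodims(2) = 1        ! number of bundled fields
var_actual_shape = 1     ! not used by OASIS3-MCT, kept for the interface
CALL oasis_def_var (var_id, 'FLD_SEND', il_part_id, var_nodims, &
                    OASIS_Out, var_actual_shape, OASIS_Real, ierror)

CALL oasis_enddef (ierror)   ! the decompositions are exchanged here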

OASIS3-MCT_4.0 should be released very soon, so I think it would be better for you to download this new version, as you are new to this coupler.

Just a few questions: which models do you want to couple? On which platform do you run? Do your models use OpenMP or just MPI?

 

Best regards, Laure

Posted by Anonymous at June 19 2018

Dear Laure,

Thank you for the detailed answer. I am using the regional climate model RegCM4 (https://gforge.ictp.it/gf/project/regcm/) and the chemical transport model CAMxv6 (www.camx.com). Both support Open MPI and Intel MPI parallelization. I am running on x86_64 with 24 CPUs, for tests.

Best regards, Peter

Posted by Anonymous at June 20 2018

Hi Peter,

A few more answers below to complete what Laure already told you.

OASIS3-MCT_4.0 is multi-threaded with OpenMP, but only in the initialisation phase, for the calculation of the interpolation weights and addresses; so this OpenMP part of OASIS3-MCT_4.0 does not concern you.

For the coupling exchanges, OASIS3-MCT_4.0 is fully MPI parallel: each process declares its part of the coupling field (with oasis_def_partition, as Laure told you) and sends and receives its part of the coupling field. OASIS3-MCT_4.0 takes care of the rearrangement needed when the decompositions of the two models are different (and of the regridding/interpolation when the grids are not the same, which is not your case). So all processes can call oasis_put (for example); there is no need to restrict this to the master MPI process.
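For illustration, a sketch of what the time loop could then look like on every MPI process (the names and loop bounds are hypothetical; the actual exchanges only take place at the coupling dates defined in the namcouple):

integer :: date, info
real(kind=8) :: fld_loc(local_nx, local_ny)   ! this process's part only

do date = 0, total_time - dt, dt
   CALL oasis_get (var_id_in,  date, fld_loc, info)   ! receive local part
   ! ... model time step on the local subdomain ...
   CALL oasis_put (var_id_out, date, fld_loc, info)   ! send local part
end do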

If your code is also OpenMP multi-threaded, it is much simpler to call oasis_put outside a threaded region (that's what most users do), but you can also call it inside a threaded region, though only from the master OpenMP thread. This is what is explained in this forum discussion (https://www.cerfacs.fr/site-oasis/forum/oa_main.php?c=119):

"Then in the OpenMP part of the code, only the master thread of each MPI task calls the OASIS3-MCT routines (oasis_init_comp, oasis_def_partition, oasis_def_var, oasis_enddef, oasis_put, oasis_get, …) for the part of the coupling field treated by the whole MPI task (not only for the part of the coupling field treated by the thread itself). An OMP scatter is done after an Oasis_recv and an OMP gather is done before an Oasis_send. And this works fine."

So there is no need to restrict the coupling exchanges to the master MPI process, but you have to restrict them to the master OpenMP thread.
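A hedged sketch of that pattern (fld_task, var_id_out, var_id_in and date are illustrative; fld_task holds the whole MPI task's portion of the coupling field, not just one thread's slice):

!$OMP PARALLEL
   ! ... each thread fills its own slice of fld_task ("OMP gather") ...
!$OMP BARRIER
!$OMP MASTER
   CALL oasis_put (var_id_out, date, fld_task, info)   ! master thread only
   CALL oasis_get (var_id_in,  date, fld_task, info)
!$OMP END MASTER
!$OMP BARRIER   ! "OMP scatter": the other threads wait, then read their slice
   ! ... each thread continues with its own slice of fld_task ...
!$OMP END PARALLEL

The explicit barrier after the MASTER section matters: MASTER has no implied synchronisation, so without it the other threads could read fld_task before the oasis_get has filled it.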

Best regards, Sophie