From NEC-CCRL, Rene Redler, Hubert Ritzdorf, Guntram Berti; from NEC
Deutschland, Thomas Schoenemeyer; from SGI Deutschland, Reiner
Vogelsang; from CERFACS, Damien Declat and Sophie Valcke.
- Status of Oasis 3.0
Sophie presented the main features of Oasis 3.0, for which a beta
version is now available. These features are:
- New PRISM System model interface (PSMILe V.0)
- Using MPI1 or MPI2, and conforming as closely as possible to the final PRISM coupler interface.
- Allowing direct communication between models with the same grid and
partitioning
- Including increased modularity: prism_put and prism_get may be called
by the model at each time step; the exchange is performed or not by the
PSMILe, depending on the user's specifications in namcouple.
- Automatic time integration by PSMILe depending on user's specification
- I/O and combined I/O & coupling functionality
- New interpolations / interfacing with SCRIP library:
- 1st and 2nd order conservative remapping for all grids
- Bilinear and bicubic interpolation for "logically-rectangular" grids
- Bilinear and bicubic interpolation for reduced
atmospheric grids
- F90 rewriting (dynamic memory allocation)
- NetCDF format for grid and restart auxiliary files
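The per-time-step put behaviour described above can be sketched as follows. This is a minimal Python sketch (the real PSMILe API is F90); prism_put_stub, the field values and the coupling period are purely illustrative stand-ins for what the namcouple would specify.

```python
# Hypothetical stand-in for the PSMILe prism_put: the model calls it at
# every time step, but the exchange is only performed when the model
# time matches the coupling period given in the namcouple file.
def prism_put_stub(step, coupling_period, field):
    """Return True if an exchange was performed at this step."""
    if step % coupling_period != 0:
        return False          # the PSMILe does nothing this step
    # here the real PSMILe would send, write, or time-average 'field'
    return True

# Model time loop: prism_put is called unconditionally at each step;
# with a coupling period of 4, exchanges happen at steps 4, 8 and 12.
exchanges = sum(prism_put_stub(s, 4, [271.3, 272.0]) for s in range(1, 13))
print(exchanges)  # → 3
```

The point of the design is visible here: the model code stays identical whatever the coupling frequency; only the namcouple changes.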
The development of Oasis will continue in parallel with the development
of the "final" PRISM coupler, in order to meet the needs of the
demonstration runs for which it will be used.
The following developments are already identified:
- Parallelisation with OpenMP (Thomas)
- Grid memory management: removal of grid duplications
- In PSMILe V.0, an additional primitive to explicitly write the
coupling restart file (prism_put_restart)
- Inclusion of a date in coupling restart files
- Proper vector interpolation for source grid covering a pole
- WP3a/WP4a EGS poster
Rene's first poster proposal for EGS was discussed. Rene will send
this proposal to Sophie, who will modify it according to what was
discussed. It was decided to:
- Reserve an upper band for the title and authors. Reserve a lower
band for calendar of developments, institutions, e-mail contact
(valcke@cerfacs.fr), PRISM web site address, acknowledgement to the
EC. Separate the rest of the page into 3x4 A4 landscape sheets.
- Include a short introduction describing the WP3a/WP4a objectives.
- Simplify the overall system description, by removing the SMIOC and
SCC containers, and make it more explicit, by writing real component
names (ocean, atmos, etc.) instead of Mi, Mj, and by writing
"Transformer" instead of T, etc.
- Add a separate section on the Configuration procedure (PMIOD, SMIOC, SCC)
- Add a separate section on the Driver functionality.
- Add a separate section on the Transformer functionality.
- Keep and extend the programming example (2 x A4 sheets)
- Keep and modify the Communication graph (direct communication between
ocean and ocean-biogeochemistry and I/O from/to a file should be
added, etc.)
- Discussion on Alexandre's document on PMIOD and Philippe's tool
for SMIOC/SCC access
- All people present agreed to propose that the PMIOD/SMIOC/SCC
responsibility should be transferred from IPSL to a Working Group,
composed of WP3a, WP4b and WP4a people. This will be proposed at the
next SSW meeting. It was clear that Philippe Bourcier, who is in charge
of developing the tools to access the SMIOC and SCC content, should be
part of this working group.
- Philippe's SMIOC and SCC access tools are based on C routines using
libxml. These routines parse the XML files and build F90 hashtables.
Philippe will also develop the F90 API so that F90 routines can
access the hashtable content.
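The flatten-XML-into-a-keyed-table idea behind these tools can be sketched as follows. The SMIOC-like fragment and the key scheme below are made up for illustration (the real SMIOC schema is not reproduced here), and Python's stdlib parser stands in for the actual C/libxml routines and F90 hashtables.

```python
import xml.etree.ElementTree as ET

# Made-up SMIOC-like fragment; the real schema differs.
smioc_xml = """
<smioc>
  <field name="sst">
    <coupling_period>3600</coupling_period>
    <target>atmos</target>
  </field>
</smioc>
"""

def build_table(xml_text):
    """Parse the XML and flatten it into a hashtable keyed by element
    path -- the structure an F90 API could then query by name."""
    table = {}
    def walk(node, path):
        for child in node:
            key = f"{path}/{child.tag}"
            if child.attrib.get("name"):
                key += f"[{child.attrib['name']}]"
            if child.text and child.text.strip():
                table[key] = child.text.strip()
            walk(child, key)
    root = ET.fromstring(xml_text)
    walk(root, root.tag)
    return table

table = build_table(smioc_xml)
print(table["smioc/field[sst]/coupling_period"])  # → 3600
```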
- In the non-standalone case, two options are possible to transfer the
SMIOC and SCC information to the component PSMILe. In the first option,
the Driver initially uses the F90 API to access the part of the SMIOCs
and SCC information that is needed initially, and initialises its F90
structure with this information. The Driver then transfers the
appropriate information to the different model PSMILes. If some
additional information is needed by a model PSMILe during the run
(e.g. for I/O purposes), this information could be requested from the
Driver, which would then read the information from the model SMIOC
using the F90 API and give it to the model. The second option is that
the Driver initially transfers the whole XML tree information to each
model PSMILe.
- In stand-alone mode, each model PSMILe will, instead of receiving
the appropriate information from the Driver, directly use the F90 API
to get the information from its SMIOC and from the SCC (either all the
information initially, or only the appropriate information when
needed).
- We should make sure that the WP4b GUI uses appropriate SCC information
to select automatically the machine-dependent launching command (mpirun
or equivalent).
- Discussions on PRISM2
Sophie presented an overview of the projects that will probably be
proposed to the EC FP6. On the scientific side, ENSEMBLES
gathers what was previously called IMPRESS (global change
studies, Dave Griggs from the MetOffice), EURIPIDES (seasonal
forecasting, Tim Palmer) and GENIES (regionalisation studies). On the
infrastructure side, there is DEISA (a grid of supercomputer
centres, led by Victor Alessandrini from IDRIS), and the follow-up of
PRISM, called PRISM2 or CAPRI, led by Eric Guilyardi, which will
include further development of the PRISM System and improved access to
this system, including data aspects.
One Joint Research Project should be devoted to the development of
Software Tools for Earth System Models. Regarding the PRISM coupler,
the following aspects, that will probably not be finalised within PRISM,
could be proposed:
- Optimisation of the treatment of time changing grids
- 3D interpolation
- Parallel I/O issue
- Tests in new configurations, and user support in general
- General improvement of performance
In a more aggressive approach, we could also propose to address
dynamical coupling and development of a unified tool for Data
Assimilation and Climate Coupling.
Finally, integration of coding tools to help the modellers to develop
their code, such as the ones developed in ESMF could also be proposed.
- Discussion on Damien's document on Transformations
The following points were clarified:
- 2.3 and 3.3, the halo issue: The two following possibilities should be
offered: a) an exact but more expensive search of the neighbours,
involving a local search in each source partition and then a global
comparison of the local results (to be done by the PSMILe), or b) an
approximate search by considering for each target point only the
partition into which it falls and its local halo. The choice should
be made by the user and indicated in the SCC.
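The two search strategies just described can be sketched in one dimension. The partition layout, halo width and distance metric below are made-up illustrations, not the PSMILe's actual data structures.

```python
# Two source partitions of a 1-D grid, plus a one-point halo that each
# partition borrows from its neighbour (all values are illustrative).
partitions = [
    [0.0, 1.0, 2.0],     # source partition 0
    [3.0, 4.0, 5.0],     # source partition 1
]
halo = {0: [3.0], 1: [2.0]}

def owning_partition(x):
    return 0 if x < 2.5 else 1

def nearest(points, x):
    return min(points, key=lambda p: abs(p - x))

def exact_search(x):
    # a) local search in each source partition, then global comparison
    candidates = [nearest(p, x) for p in partitions]
    return nearest(candidates, x)

def approx_search(x):
    # b) only the partition the target point falls into, plus its halo
    k = owning_partition(x)
    return nearest(partitions[k] + halo[k], x)

# For a target point near the partition boundary, both strategies agree
# here because the true neighbour lies inside the halo:
print(exact_search(2.6), approx_search(2.6))  # → 3.0 3.0
```

The approximate search can miss the true neighbour when it lies beyond the halo, which is why the choice is left to the user.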
- 4.2, the POLE or NOPOLE issue: The indication of whether or not the
local partition covers the pole should not be transferred through
the PSMILe interface, but can automatically be calculated by the
Transformer itself when it receives the corners of the source
meshes used in the interpolation. If one of the source meshes covers
the pole, then a projection on a Cartesian plane must be performed to
ensure that the zonal component is correctly taken into account.
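The projection argument above can be illustrated with the standard spherical-to-Cartesian basis change; this is a generic identity, not the Transformer's actual code.

```python
import math

# Re-express a (zonal u, meridional v) vector in Cartesian components,
# so that the vector stays well defined when a source mesh covers the
# pole, where the zonal direction itself is ambiguous.
def to_cartesian(u, v, lon_deg, lat_deg):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    # local unit vectors pointing east and north at (lon, lat)
    east = (-math.sin(lon), math.cos(lon), 0.0)
    north = (-math.sin(lat) * math.cos(lon),
             -math.sin(lat) * math.sin(lon),
             math.cos(lat))
    return tuple(u * e + v * n for e, n in zip(east, north))

# A purely zonal unit wind at (0E, 0N) becomes the Cartesian y axis:
vx, vy, vz = to_cartesian(1.0, 0.0, 0.0, 0.0)
print(abs(vx), vy, vz)  # → 0.0 1.0 0.0
```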
- 6.4, the combination issues: Combination of coupling fields may take
place in the source model PSMILe, in the Transformer, or in the
target model PSMILe, depending on the origin of the fields (same or
different source models) and on whether or not interpolation is
needed. The configuration will be analysed by the PSMILe and
its decision on where to perform the combination will be transferred
initially (or during the run if the configuration changes during the
run) to the Transformer.
- 7.0 and 7.1, scattering and gathering issue: A field that needs
scattering or gathering will have a structure different from the
structure of the associated grid; this needs to be treated
explicitly in the PSMILe interface. It was therefore decided to add
an additional argument to the prism_def_var primitive defining the
scattering/gathering operation and the convention used. The
scattering/gathering will then be automatically performed below the
PSMILe using the associated mask and the convention. In a first
step, only "NO_SCATTER" will be supported.
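The mask-based gathering described above can be sketched as follows; the helper names and the fill convention are illustrative, not the real PSMILe internals.

```python
# Pack a field down to its unmasked points (what the PSMILe would do
# below prism_put when the prism_def_var argument requests gathering),
# and the inverse scatter back onto the full grid.
def gather(field, mask):
    """Keep only the unmasked (mask True) points, in grid order."""
    return [f for f, m in zip(field, mask) if m]

def scatter(packed, mask, fill=0.0):
    """Unpack onto the full grid, filling the masked points."""
    it = iter(packed)
    return [next(it) if m else fill for m in mask]

mask = [True, False, True, True]
field = [1.0, 9.9, 2.0, 3.0]
packed = gather(field, mask)
print(packed)                 # → [1.0, 2.0, 3.0]
print(scatter(packed, mask))  # → [1.0, 0.0, 2.0, 3.0]
```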
- 9. Collapse: a better name might be "Reduction". Regarding 9.4, it was
decided that if the distribution is organized over one dimension
and if the reduction is on the other dimension, then the PSMILe
performs the local reduction, then gathers the local results and
performs the global reduction.
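The decided two-step reduction can be sketched serially; the block layout is made up and a plain list stands in for the gather over processes.

```python
# The field is distributed over the row dimension; the reduction runs
# over that same distributed dimension, column by column.
local_blocks = [            # each block = the rows owned by one process
    [[1.0, 2.0],
     [3.0, 4.0]],
    [[5.0, 6.0]],
]

# Step 1: each process performs its local reduction (sum over its rows).
partials = [[sum(col) for col in zip(*block)] for block in local_blocks]

# Step 2: gather the local results and perform the global reduction.
global_sum = [sum(col) for col in zip(*partials)]
print(global_sum)  # → [9.0, 12.0]
```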
- 10.2 Transfer of information between the PSMILe and the Transformer:
the 2nd strategy is the preferred one to start with.
- CVS
The PRISM CVS server is ready, and Damien should put the sources of
the final PRISM coupler on it next week.
Reiner recommended including the line
character (len=80), save :: my_string = ' '
in all routines. Rene and Damien agreed to do so.
- Presentation of Driver's routines
A gradual merge of the Driver routines written by Damien and the
temporary routines written by NEC-CCRL in order to start their PSMILe
development is currently underway.
A first implementation of all the initial process management part is done.
A first implementation of initial exchange of information with the
Transformer is done.
- Presentation of Transformer routines
A first implementation of initial exchange of information with the
Driver is done. A first implementation of initial exchange of
information with the PSMILe is done. Initial exchange of information
on the transformations to perform on the coupling fields is still
missing. The exchange of information on the interpolation weights
should take place in the initial phase (and not with each prism_put).
- Presentation of PSMILe routines
The following aspects have been developed: structures definitions,
structures initialisation through the PSMILe interface calls,
calculation of PSMILe ids, set up of main communicators.
- PSMILe interface review
The following points were discussed. Rene should revise the document
accordingly.
- Start-up phase:
The order of launching of the different applications is
needed in the SCC.
- PRISM include file:
Integer named parameters will be used (not character strings).
- prism_get_local_comm:
Retrieval of the application communicator using a pre-defined appl_id
was approved.
- prism_enddef:
Should be moved to a new section (before the Exchange of Transient variables).
- Grid evolving with time:
If a grid localisation evolves with time, the developer will be
allowed to re-call prism_set_corners (and/or prism_set_scalefactors,
prism_set_offset, prism_set_points, prism_set_subgrid,
prism_set_masks, prism_set_angle) after the prism_enddef. A comparison
of the data arrays will automatically be performed by the PSMILe that
will detect which part of the data information has changed and that
will therefore be able to do appropriate actions concerning the
modified part.
- prism_def_var: It was decided to add an additional argument to the
prism_def_var primitive defining the scattering/gathering operation
and the convention used. The scattering/gathering will then be
automatically performed below the PSMILe using the associated mask
and the convention. In a first step, only the "NO_SCATTER" will be
supported.
- Grid vertical dimension: the way to transfer the vertical grid
definition to the PSMILe needs further interaction with the model
people.
- A logical indicating whether or not the call is done for the first
time is needed for prism_set_points, prism_set_subgrid, prism_set_mask. (It
was confirmed that more than one mask may be associated to one set of
points).
- prism_set_corners: an additional argument describing the way the
corners are connected is needed, as the corners are expressed in 3D.
- prism_set_offset: the API for this routine needs to be re-discussed
(Reiner and Sophie)
- The first 3 arguments of prism_set_points, prism_set_subgrids,
prism_set_vectors should be coherent (e.g. point_id, point_name,
grid_id).
- prism_set_subgrid: subgrid_name should be added; subgrid_type not needed.
- mask_array should be a logical.
- prism_put_restart: should be added to allow the developer to
explicitly write a restart coupling file.
- prism_terminate: no good reason could be found to split
prism_terminate into a prism_terminate and a prism_finalize containing
only the MPI_Finalize. Sophie should discuss this again with Stephanie.
- An additional routine prism_set_vectormask, associating different
masks (one for each component), will be added.
- F90 and C API for PSMILe will be developed.
- prism_def_grid:
The developer will not be allowed to call a prism_def_grid without
defining corners and points in the definition phase. If two models
share the same grid, they will have to read the same grid in a file
initially. A PRISM standard for the grid definition in the code could
be developed; the FMS standard could be used. This point will be
addressed in the context of the physical interfaces definition, as it
is also required in order to have a coherent land-sea mask in the
ocean model and in the land model.
- The need to transfer non-gridded data was also clearly expressed
during the workshop. In that case, each process would have to
define its partition in the global index space (prism_set_offset) so
that the re-partitioning can take place. This point needs further
discussion.
- During the workshop, users expressed the need for an additional
primitive
prism_put_inquire(var_id, date, date_bounds, ierror)
that would be used by the model to inquire whether a corresponding prism_put
with same var_id, date, and date_bounds would be effectively activated
(exchange, I/O, or local transformation). This would be useful when
the calculation of the data array to be transferred with the
prism_put is CPU expensive and should be avoided if not needed.
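A sketch of the intended usage pattern follows. Only the primitive's argument list comes from the discussion above; the stub's decision logic and the coupling schedule are invented for illustration.

```python
# Hypothetical stand-in for prism_put_inquire: returns True if a
# prism_put with the same var_id, date and date_bounds would actually
# act (exchange, I/O, or local transformation).
def prism_put_inquire_stub(var_id, date, date_bounds):
    coupling_dates = {3600, 7200}   # made-up namcouple schedule
    return date in coupling_dates

computed = []
for date in (1800, 3600, 5400, 7200):
    if not prism_put_inquire_stub(var_id=1, date=date,
                                  date_bounds=(date, date)):
        continue                    # skip the CPU-expensive calculation
    field = [date * 0.1]            # stand-in for the expensive field
    computed.append(date)           # the real prism_put would follow here

print(computed)  # → [3600, 7200]
```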
- The error code of the prism_put and prism_get should
indicate the type of action that was performed by the PSMILe below the
call.
- It was argued during the workshop that there should be no
redundancy of information in the SMIOC/SCC and in the PSMILe
interface. The arguments of prism_def_var should be re-examined.
The argument ``var_type'' is certainly redundant; ``var_nodims'' is probably
also redundant; finally, ``method_id'' could be suppressed as the related
point_name (or vector_name or subgrid_name) should be identified in
the SMIOC/SCC. For now, it is proposed to keep all these arguments
and possibly let the list evolve in the future, once the PSMILe
effectively works with SMIOC/SCC files. This proposal needs to
be re-discussed.
- Balaji noted that PSMILe is not dynamic in terms of number
of bundles or subgrids (ESMF will be).
- RCM-GCM coupling (cf discussion with Ralf Döscher):
To perform efficient GCM-RCM coupling, extraction of 3D subspaces
(in the RCM boundary zone) and 3D interpolation (at least multiple 2D
+ vertical) are required. These functionalities are not included in
Oasis 3.0, and it is presently not certain what will be implemented in
the prototype version of the ``final'' PRISM coupler due 12/2003. This
needs to be clarified. As a back-up, the following runs could be
defined with PSMILe V.0 for the demonstration runs:
- Use prism_put_proto in the GCM to write the 3D fields into a file
(OUTPUT status in the namcouple) and prism_get_proto in the RCM to
read the 3D fields (INPUT status in the namcouple); the RCM performs
the multiple 2D interpolations and the vertical interpolation.
- Same as above with direct communication (IGNORED status in the
namcouple) instead of writing to / reading from a file.
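The RCM-side vertical step mentioned in the back-up plan can be sketched as a per-column linear interpolation; the level values below are made up (real models would use pressure or hybrid coordinates).

```python
# Linear vertical interpolation of one column from the GCM levels to
# the RCM levels, applied after the multiple per-level 2-D
# interpolations (illustrative sketch, not PSMILe code).
def vertical_interp(src_levels, src_values, tgt_levels):
    out = []
    for z in tgt_levels:
        # find the bracketing source levels (assumes ascending order)
        for (z0, v0), (z1, v1) in zip(zip(src_levels, src_values),
                                      zip(src_levels[1:], src_values[1:])):
            if z0 <= z <= z1:
                w = (z - z0) / (z1 - z0)
                out.append(v0 + w * (v1 - v0))
                break
    return out

print(vertical_interp([0.0, 10.0, 20.0], [1.0, 2.0, 4.0], [5.0, 15.0]))
# → [1.5, 3.0]
```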
- Atmosphere-atmospheric chemistry coupling (cf discussion with Yiwen
Xu):
To perform efficient AGCM-CTM (Chemistry Transport Model)
coupling, multiple 2D interpolations of 3D fields are required (the
vertical levels of the CTM can be adjusted to the AGCM ones). It is
presently not certain what will be implemented in the prototype
version of the ``final'' PRISM coupler due 12/2003. This needs to be
clarified. As a back-up, the following runs could be defined as
demonstration runs with Oasis 3.0 and PSMILe V.0:
- Multiple prism_put_proto (one for each level), multiple 2D interpolation
in Oasis, multiple prism_get_proto (one for each level); this will be
possible once the duplication of grids is removed in Oasis (this
is planned and estimated at 3 days of work).
- Direct communication of the whole 3D fields AGCM to CTM
and vice-versa (mono-process
source and target models, or mono-process source model to
parallel target model, or parallel source model to mono-process target
model, or parallel source and target models
having the same grid and the same partitioning), 3D interpolation in
source or target model.
Note: AGCM and CTM are naturally sequential; this means that one
model will be waiting while the other one is running. On
platforms for which an efficient swapping functionality does not
exist, this could lead to a waste of resources.