Call tree of the clim library (in oasis3/lib/scrip/src):

CLIM_Init_Oasis.F (called by oasis3 in oasis3/src/inicmc):
=> call MPI_INIT(mpi_err) (starts the global MPI environment shared by OASIS3 and the models)
=> call MPI_COMM_SIZE(MPI_COMM_WORLD,inumproc,mpi_err)
=> call MPI_COMM_RANK(MPI_COMM_WORLD,imyrank,mpi_err)
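
A minimal, standalone sketch of this start-up sequence (not the actual OASIS3 source; the variable names simply follow the calls above):

      PROGRAM startup_sketch
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER :: mpi_err, inumproc, imyrank
      ! Start the global MPI environment shared by OASIS3 and the models
      CALL MPI_INIT(mpi_err)
      ! Size of MPI_COMM_WORLD and rank of this process inside it
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, inumproc, mpi_err)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, imyrank, mpi_err)
      PRINT *, 'process ', imyrank, ' of ', inumproc
      CALL MPI_FINALIZE(mpi_err)
      END PROGRAM startup_sketch
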
If MPI1, duplicate the global communicator and create one communicator per model (the corresponding routines on the model side are in prism_init_comp_proto); see the sketch after the Endif below:
=> call MPI_COMM_DUP(MPI_COMM_WORLD,mpi_comm,mpi_err)
=> call MPI_Allgather(cmynam,CLIM_Clength,MPI_CHARACTER, cunames,CLIM_Clength,MPI_CHARACTER, mpi_comm,mpi_err)
=> call MPI_COMM_SPLIT(MPI_COMM_WORLD, icolor, ikey, kcomm_local, mpi_err) (create communicator of OASIS3)
=> call MPI_Recv(ibuff,1,MPI_INTEGER,jn,itagcol,mpi_comm, mpi_status, mpi_err) (receives the color of each model)
=> call MPI_Send(mynummod,1,MPI_INTEGER,jn,itagcol,mpi_comm, mpi_err) (sends back to each process, whether or not it is involved in the coupling, its model number mynummod)
=> call MPI_COMM_SIZE(mpi_comm,il_mpisize,mpi_err)
Else if MPI2
Endif
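
A minimal, self-contained sketch of this MPI1 colour/split exchange (not the OASIS3 source: rank 0 plays OASIS3, every other rank plays a model process, and the colour choice is a toy one; the real code derives the colours from the model names gathered into cunames by the MPI_Allgather above):

      PROGRAM split_sketch
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER, PARAMETER :: itagcol = 9876
      INTEGER :: mpi_err, mpi_comm, kcomm_local
      INTEGER :: imyrank, inumproc, icolor, ikey, jn
      INTEGER :: ibuff(1), mynummod
      INTEGER :: mpi_status(MPI_STATUS_SIZE)

      CALL MPI_INIT(mpi_err)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, imyrank, mpi_err)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, inumproc, mpi_err)

      ! Duplicate the global communicator for the coupler<->model exchanges
      CALL MPI_COMM_DUP(MPI_COMM_WORLD, mpi_comm, mpi_err)

      ! Every process picks a colour (toy choice here: its own rank) and the
      ! split builds one local communicator per colour; kcomm_local is the
      ! communicator of the group this process belongs to
      icolor = imyrank
      ikey   = imyrank
      CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, icolor, ikey, kcomm_local, mpi_err)

      IF (imyrank == 0) THEN
         ! "OASIS3" side: receive the colour of each other process and send
         ! back the model number assigned to it (toy rule: number = colour)
         DO jn = 1, inumproc - 1
            CALL MPI_RECV(ibuff, 1, MPI_INTEGER, jn, itagcol, mpi_comm, mpi_status, mpi_err)
            mynummod = ibuff(1)
            CALL MPI_SEND(mynummod, 1, MPI_INTEGER, jn, itagcol, mpi_comm, mpi_err)
         END DO
      ELSE
         ! "model" side: send my colour, receive my model number
         ibuff(1) = icolor
         CALL MPI_SEND(ibuff, 1, MPI_INTEGER, 0, itagcol, mpi_comm, mpi_err)
         CALL MPI_RECV(mynummod, 1, MPI_INTEGER, 0, itagcol, mpi_comm, mpi_status, mpi_err)
      END IF

      CALL MPI_FINALIZE(mpi_err)
      END PROGRAM split_sketch
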
=> call MPI_Send(knmods, 1, MPI_INTEGER, jn, itagcol, mpi_comm, mpi_err)
=> call MPI_Send(ig_clim_nfield, 1, MPI_INTEGER, jn, itagcol+1, mpi_comm, mpi_err)
=> call MPI_Send(il_clim_maxport, 1, MPI_INTEGER, jn, itagcol+1, mpi_comm, mpi_err)
=> call MPI_Send(rl_work, iposbuf, MPI_PACKED, jn, itagcol+2, mpi_comm, mpi_err)
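
The last call above ships a buffer assembled with MPI_PACK. A minimal sketch of that pack/send/unpack pattern, with a hypothetical payload rather than the real rl_work contents (run on at least two processes):

      PROGRAM pack_sketch
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER, PARAMETER :: itagcol = 9876, ibufsize = 1024
      INTEGER :: mpi_err, imyrank, iposbuf, knmods
      INTEGER :: mpi_status(MPI_STATUS_SIZE)
      CHARACTER(LEN=8) :: clfield
      CHARACTER(LEN=1) :: rl_work(ibufsize)   ! raw pack/unpack buffer

      CALL MPI_INIT(mpi_err)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, imyrank, mpi_err)

      IF (imyrank == 0) THEN
         ! Pack heterogeneous data (an integer and a string) into rl_work,
         ! then send only the used part of the buffer as MPI_PACKED
         knmods  = 2
         clfield = 'SSTOCEAN'
         iposbuf = 0
         CALL MPI_PACK(knmods, 1, MPI_INTEGER, rl_work, ibufsize, iposbuf, MPI_COMM_WORLD, mpi_err)
         CALL MPI_PACK(clfield, 8, MPI_CHARACTER, rl_work, ibufsize, iposbuf, MPI_COMM_WORLD, mpi_err)
         CALL MPI_SEND(rl_work, iposbuf, MPI_PACKED, 1, itagcol+2, MPI_COMM_WORLD, mpi_err)
      ELSE IF (imyrank == 1) THEN
         ! Receive the packed buffer and unpack it in the same order
         CALL MPI_RECV(rl_work, ibufsize, MPI_PACKED, 0, itagcol+2, MPI_COMM_WORLD, mpi_status, mpi_err)
         iposbuf = 0
         CALL MPI_UNPACK(rl_work, ibufsize, iposbuf, knmods, 1, MPI_INTEGER, MPI_COMM_WORLD, mpi_err)
         CALL MPI_UNPACK(rl_work, ibufsize, iposbuf, clfield, 8, MPI_CHARACTER, MPI_COMM_WORLD, mpi_err)
      END IF

      CALL MPI_FINALIZE(mpi_err)
      END PROGRAM pack_sketch
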
Evaluate grids_start from the existence or not of grids.nc (INQUIRE(FILE = cgrdnam, EXIST = existent)): grids_start=0 means the grid files are not written, grids_start=1 means they are.
=> call MPI_Send(grids_start, ilen, itype, ip, itag, mpi_comm, MPI_ERR) (sends the starting flag grids_start, write or not, to all model processes)
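
A minimal sketch of this step (the subroutine name, its arguments and the loop over ranks 1..inumproc-1 are placeholders, not the OASIS3 interface):

      ! Sketch: decide whether the grid files must be written, then send the
      ! flag to every model process with point-to-point sends
      SUBROUTINE send_grids_start(cgrdnam, mpi_comm, itag, inumproc)
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      CHARACTER(LEN=*), INTENT(IN) :: cgrdnam
      INTEGER, INTENT(IN) :: mpi_comm, itag, inumproc
      LOGICAL :: existent
      INTEGER :: grids_start, ip, mpi_err
      ! grids_start = 0 if grids.nc already exists (nothing to write),
      ! grids_start = 1 if the models must write the grid files
      INQUIRE(FILE = cgrdnam, EXIST = existent)
      IF (existent) THEN
         grids_start = 0
      ELSE
         grids_start = 1
      END IF
      DO ip = 1, inumproc - 1
         CALL MPI_SEND(grids_start, 1, MPI_INTEGER, ip, itag, mpi_comm, mpi_err)
      END DO
      END SUBROUTINE send_grids_start
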
If grids_start=1, Oasis3 sends the names of the different files to create (the corresponding routines on the model side are in prism_start_grids_writing); see the sketch after the last Recv below:
=> call MPI_Send(cgrdnam, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(cmsknam, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(csurnam, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(cglonsuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(cglatsuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(crnlonsuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(crnlatsuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(cmsksuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(csursuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Send(cangsuf, ilen, itype, idproc, itag, mpi_comm, MPI_ERR)
=> call MPI_Recv(grids_done, ilen, itype, idproc, itag, mpi_comm, mpi_status, MPI_ERR)
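
A minimal sketch of the OASIS3 side of this handshake, for a subset of the names above (the subroutine name, the file/suffix values and the single destination process idproc are placeholders, not the OASIS3 interface):

      ! Sketch: send the grid file names and the field-name suffixes to the
      ! writing model process, then block until it reports completion
      SUBROUTINE start_grids_writing_sketch(idproc, itag, mpi_comm)
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER, INTENT(IN) :: idproc, itag, mpi_comm
      INTEGER, PARAMETER :: ilen = 8
      CHARACTER(LEN=ilen) :: cgrdnam, cmsknam, csurnam
      CHARACTER(LEN=ilen) :: cglonsuf, cglatsuf, cmsksuf, csursuf
      INTEGER :: grids_done, mpi_err
      INTEGER :: mpi_status(MPI_STATUS_SIZE)
      ! Placeholder values for the file names and suffixes
      cgrdnam  = 'grids'
      cmsknam  = 'masks'
      csurnam  = 'areas'
      cglonsuf = '.lon'
      cglatsuf = '.lat'
      cmsksuf  = '.msk'
      csursuf  = '.srf'
      ! File names to create, then the suffixes used to build the per-grid
      ! variable names inside them
      CALL MPI_SEND(cgrdnam, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      CALL MPI_SEND(cmsknam, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      CALL MPI_SEND(csurnam, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      CALL MPI_SEND(cglonsuf, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      CALL MPI_SEND(cglatsuf, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      CALL MPI_SEND(cmsksuf, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      CALL MPI_SEND(csursuf, ilen, MPI_CHARACTER, idproc, itag, mpi_comm, mpi_err)
      ! Wait until the model side has written the files and sends grids_done
      CALL MPI_RECV(grids_done, 1, MPI_INTEGER, idproc, itag, mpi_comm, mpi_status, mpi_err)
      END SUBROUTINE start_grids_writing_sketch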