
Routine psmile_get_intersect (ninter, nmyint, nnull, num_intersect_per_grid, num_dummy_intersect_per_grid, lastag, ierror)
Subroutine "PSMILe_get_intersect" receives the messages sent from routine "PSMILe_find_intersect", performs the actions required from other processes and performs the search of donor cells.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      paction%n      = 0
      paction%ninter = ninter
      paction%lastag = lastag
      paction%nmyint = 0
      paction%grid2receive = .false.
      paction%n_answer2recv_per_grid(:) = num_intersect_per_grid(:) - num_dummy_intersect_per_grid(:)
      new_intersection = .true.
!  n_answer = Number of answers containing requests for grid data received. If no grid data is required, the receiving process does not send an answer; these are the "nnull" messages.
      paction%n_answer = 0
      paction%n_answer2recv = ninter - nnull
      paction%nloc_recv = 0
      paction%n_selected = 0
      paction%n_fin = 0
      paction%n_fin2recv = - n_act_comp
      do icomp = 1, n_act_comp
         paction%n_fin2recv = paction%n_fin2recv + comp_infos(icomp)%size
      end do
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
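As a worked example of the last statement (hypothetical layout): with n_act_comp = 2 active components running on 4 and 1 processes respectively, paction%n_fin2recv = (4 + 1) - 2 = 3. A purely serial layout (1 process per component) gives 0, so n_fin2recv > 0 as soon as at least one component is parallel (this is used below when posting the extra-search receive).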

Examples: they are the same as the ones proposed in psmile_find_intersect.

In psmile_get_intersect, some requests are posted via MPI_Irecv calls, which are non-blocking communications:

Nonblocking calls allocate a communication request object and associate it with the request handle (the argument request).
The request can be used later to query the status of the communication or wait for its completion.

A nonblocking receive call indicates that the system may start writing data into the receive buffer.
The receiver should not access any part of the receive buffer after a nonblocking receive operation is called, until the receive completes.

A receive request can be determined to be complete by calling MPI_Wait, MPI_Waitany, MPI_Test, or MPI_Testany with the request returned by this function.
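As an illustration, here is a minimal, self-contained sketch of this request lifecycle, independent of PSMILe (ranks, tag and buffer contents are hypothetical): a receive is posted with MPI_Irecv, the buffer is left untouched while other work could proceed, and MPI_Wait blocks until completion.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      program irecv_sketch
      use mpi
      implicit none
      integer, parameter :: n = 4
      integer :: buf(n), rank, request, ierror
      integer :: status(MPI_STATUS_SIZE)

      call MPI_Init (ierror)
      call MPI_Comm_rank (MPI_COMM_WORLD, rank, ierror)
      if (rank == 0) then
         buf = (/ 1, 2, 3, 4 /)
         call MPI_Send (buf, n, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, ierror)
      else if (rank == 1) then
!        Post the receive; MPI allocates a request object and returns its handle
         call MPI_Irecv (buf, n, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &
                         request, ierror)
!        ... other work can be done here, but buf must not be accessed yet ...
!        Block until the receive completes; the handle is then set to
!        MPI_REQUEST_NULL
         call MPI_Wait (request, status, ierror)
         print *, 'received:', buf
      end if
      call MPI_Finalize (ierror)
      end program irecv_sketch
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Run with at least two processes (e.g. mpirun -np 2); rank 1 may only read buf after MPI_Wait returns.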
MPI_WAITANY(count, array_of_requests, index, status, ierror)
Parameters
count
[in] list length (integer)
array_of_requests
[in/out] array of requests (array of handles)
index
[out] index of handle for operation that completed (integer). In C, the range is 0 to count-1; in Fortran, it is 1 to count.
status
[out] status object (Status). May be MPI_STATUS_IGNORE.
ierror
[out] error code (integer; Fortran binding only)

Remarks: Blocks until one of the operations associated with the active requests in the array has completed. If more than one operation is enabled and can terminate, one is arbitrarily chosen. Returns in index the index of that request in the array and returns in status the status of the completing communication. (The array is indexed from zero in C, and from one in Fortran.) If the request was allocated by a nonblocking communication operation, then it is deallocated and the request handle is set to MPI_REQUEST_NULL.
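A minimal sketch of these semantics, independent of PSMILe (ranks and tags are hypothetical): two receives are pending, MPI_Waitany returns the Fortran index (1 or 2) of whichever completes first and resets that handle to MPI_REQUEST_NULL, so the next call only waits on the remaining request.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      program waitany_sketch
      use mpi
      implicit none
      integer :: rank, ierror, index, i
      integer :: buf1(1), buf2(1), lrequest(2)
      integer :: lstatus(MPI_STATUS_SIZE)

      call MPI_Init (ierror)
      call MPI_Comm_rank (MPI_COMM_WORLD, rank, ierror)
      if (rank == 0) then
         call MPI_Irecv (buf1, 1, MPI_INTEGER, MPI_ANY_SOURCE, 1, &
                         MPI_COMM_WORLD, lrequest(1), ierror)
         call MPI_Irecv (buf2, 1, MPI_INTEGER, MPI_ANY_SOURCE, 2, &
                         MPI_COMM_WORLD, lrequest(2), ierror)
         do i = 1, 2
!           index is 1 or 2 (Fortran indexing); the completed handle is
!           deallocated and set to MPI_REQUEST_NULL
            call MPI_Waitany (2, lrequest, index, lstatus, ierror)
            print *, 'request', index, 'completed'
         end do
      else if (rank == 1) then
         call MPI_Send (rank, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierror)
         call MPI_Send (rank, 1, MPI_INTEGER, 0, 2, MPI_COMM_WORLD, ierror)
      end if
      call MPI_Finalize (ierror)
      end program waitany_sketch
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++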

Meaning of the different tags (the index returned by MPI_Waitany selects among them; see the sketch after the MPI_Waitany call below):
!  lrequest(1)                   for reqtag  : Request to send grid data                                                      => index=1
!  lrequest(2)                   for lastag  : Receive data on grid intersection (see psmile_bsend in psmile_find_intersect)  => index=2
!  lrequest(3)                   for grdtag  : Receive grid data                                                              => index=3
!  lrequest(4)                   for exttag  : Receive request for extra search                                               => index=4
!  lrequest(5)                   for seltag  : Request to receive info on selected points of nearest-neighbour search         => index=5
!  lrequest(num_req_types:nreq)  for loctag+ : Receive data on locations found

Three MPI_Irecv calls are posted in psmile_get_intersect; reqtag and exttag are posted before lastag:

!===> Set up request for a grid transfer (answer to tag "lastag")
if (paction%n_answer < paction%n_answer2recv) then  ! while the number of answers received is still lower than the total number to receive
call MPI_Irecv (paction%msgreq, nd_msgint, MPI_INTEGER, MPI_ANY_SOURCE, reqtag, comm_psmile, paction%lrequest(1), ierror)

For the global search, paction%n_fin2recv > 0 as soon as at least one component is parallel (see the worked example above):
!===> ... Set up request for receive of an extra search request
!     (paction%n_fin2recv was accumulated in the loop shown in the first code block)
if (paction%n_fin2recv > 0) then
call MPI_Irecv (paction%msg_extra, nd_msgextra, MPI_INTEGER, MPI_ANY_SOURCE, exttag, comm_psmile, paction%lrequest(4), ierror)

do while ( (paction%n < paction%ninter) .or. (paction%n_answer < paction%n_answer2recv) .or. &
           (paction%nloc_recv < paction%n_answer2recv) .or. paction%n_selected > 0 .or. &
           paction%grid2receive )
   if ((paction%ninter > 0) .and. (paction%n < paction%ninter) .and. new_intersection) then
!===> ... Set up request for receive of an intersection
      call MPI_Irecv (paction%msgint, nd_msgint, MPI_INTEGER, maxval(paction%intersect_ranks), &
                      paction%lastag, comm_psmile, paction%lrequest(2), ierror)

call MPI_Waitany (paction%nreq, paction%lrequest, index, lstatus, ierror)   ! see the MPI_Waitany remarks above
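To tie the pieces together, here is a schematic but runnable sketch of the event-loop pattern used here: one pending receive per message type, each in its own lrequest slot, serviced in completion order by dispatching on the index returned by MPI_Waitany. The tag values, buffers and handler actions are hypothetical placeholders; only the slot-to-tag dispatch structure mirrors psmile_get_intersect.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      program dispatch_sketch
      use mpi
      implicit none
      integer, parameter :: reqtag = 101, exttag = 104   ! hypothetical tag values
      integer :: rank, ierror, index, nserved
      integer :: msgreq(1), msg_extra(1), lrequest(2)
      integer :: lstatus(MPI_STATUS_SIZE)

      call MPI_Init (ierror)
      call MPI_Comm_rank (MPI_COMM_WORLD, rank, ierror)
      if (rank == 0) then
!        One pending receive per message type, each in its own request slot
         call MPI_Irecv (msgreq, 1, MPI_INTEGER, MPI_ANY_SOURCE, reqtag, &
                         MPI_COMM_WORLD, lrequest(1), ierror)
         call MPI_Irecv (msg_extra, 1, MPI_INTEGER, MPI_ANY_SOURCE, exttag, &
                         MPI_COMM_WORLD, lrequest(2), ierror)
         nserved = 0
         do while (nserved < 2)
            call MPI_Waitany (2, lrequest, index, lstatus, ierror)
!           The slot that completed tells us which kind of message arrived
            select case (index)
            case (1)
               print *, 'servicing a grid-data request (reqtag)'
            case (2)
               print *, 'servicing an extra-search request (exttag)'
            end select
            nserved = nserved + 1
         end do
      else if (rank == 1) then
         call MPI_Send (rank, 1, MPI_INTEGER, 0, reqtag, MPI_COMM_WORLD, ierror)
         call MPI_Send (rank, 1, MPI_INTEGER, 0, exttag, MPI_COMM_WORLD, ierror)
      end if
      call MPI_Finalize (ierror)
      end program dispatch_sketch
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

In the real routine, a serviced receive is re-posted when more messages of that type are expected, and the loop runs until the compound termination condition shown above is satisfied.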