[Mpi3-ft] Persistent Communication & Sendrecv

Bronis R. de Supinski bronis at llnl.gov
Tue Aug 31 12:20:17 CDT 2010


Huh? No, the MPI_Recv_init creates a persistent request with 
MPI_ANY_SOURCE and MPI_ANY_TAG for those fields. Combined with
the MPI_Startall, it is roughly the same as this:

  MPI_Irecv (MPI_ANY_SOURCE, MPI_ANY_TAG, count=0, &req[1]);

(I'll admit the "count=0" arg is a bit odd). A wait on the
started request can match any message (again, the count is
a bit strange and Josh omitted the type arg but it is
pseudocode so no big deal).

On Tue, 31 Aug 2010, Joshua Hursey wrote:

>
> On Aug 31, 2010, at 1:09 PM, Fab Tillier wrote:
>
>> Joshua Hursey wrote on Tue, 31 Aug 2010 at 08:43:24
>>
>>> I was thinking more about MPI_Startall() this morning and found a
>>> situation where this technique would not work.
>>>
>>> If the application does:
>>> --------------------
>>> MPI_Send_init(rank=1, tag=123, req[0]);
>>> MPI_Recv_init(MPI_ANY_SOURCE, MPI_ANY_TAG, count=0, req[1]);
>>> MPI_Send_init(rank=2, tag=123, req[2]);
>>>
>>> MPI_Startall(3, req) // Fails with MPI_ERR_IN_STATUS
>>> if( failed ) {
>>>  for(i=0; i<3; ++i) {
>>>    MPI_Request_get_status(req[i], flag, status);
>>>    if( flag && status.error != success ) // Failed
>>>    if( flag && status == empty ) // Not started
>>>    if( flag && status != empty ) // Complete
>>>  }
>>> }
>>> --------------------
>>>
>>> The problem is with the definition of an 'empty' status, which
>>> (section 3.7.3) is:
>>> ---------------------
>>> source = MPI_ANY_SOURCE
>>> tag = MPI_ANY_TAG
>>> error = MPI_SUCCESS
>>> MPI_Get_count = 0
>>> MPI_Test_cancelled = false
>>> ---------------------
>>> So the successful completion of the MPI_Recv_init() call would be
>>> indistinguishable from the 'not started' or inactive state of the call.
>>
>> Wouldn't a successful completion of the MPI_Recv_init() return a specific source and tag for the message actually received?  The source and tag fields of the receive are for filtering incoming sends, but when the receive completes, it was matched to exactly one send, with a specific tag and source.
>>
>> What am I missing?
>
> Ah yes. You are correct. So this is not a problem then, and the MPI_Request_get_status() technique would be a good way to check the state of all of the requests.
>
> Thanks,
> Josh
>
>>
>> -Fab
>>
>> _______________________________________________
>> mpi3-ft mailing list
>> mpi3-ft at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft
>>
>
> ------------------------------------
> Joshua Hursey
> Postdoctoral Research Associate
> Oak Ridge National Laboratory
> http://www.cs.indiana.edu/~jjhursey
>
