[Mpi-forum] Question about the semantics of MPI_Comm_disconnect

Rajeev Thakur thakur at mcs.anl.gov
Tue Nov 12 17:38:01 CST 2013


For one-sided communication, you would need to call the corresponding synchronization function, or MPI_Win_free for passive-target operations.

In other words, before calling Comm_disconnect you have to do whatever you would need to do before calling Finalize. It is like calling Finalize on that communicator.
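A minimal sketch of that for the passive-target case (hypothetical fragment; 'win', 'intercomm', and 'target_rank' are illustrative names, assuming the window was created over the communicator being disconnected):

    MPI_Win_unlock(target_rank, win);  /* end the passive-target epoch   */
    MPI_Win_free(&win);                /* no RMA may remain pending      */
    MPI_Comm_disconnect(&intercomm);   /* nothing outstanding -> legal   */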


On Nov 12, 2013, at 5:30 PM, "Jeff Squyres (jsquyres)" <jsquyres at cisco.com> wrote:

> Ok -- but how do you apply the phrase "and matched" to pending one-sided communication?  Or a pending comm_idup?  Or ...?
> 
> I guess I'm saying that "and matched" isn't correct here.
> 
> 
> 
> On Nov 12, 2013, at 6:25 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> 
>> "Matched" is in the sense of the matching mentioned in the paragraph I quoted from MPI_Finalize. (If the process is the target of a send, it must have called the matching receive; if it is part of a group doing a collective, it must have called its part of the collective; etc.)
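>> 
>> As a minimal point-to-point sketch (hypothetical code; 'intercomm' is
>> assumed to come from MPI_Comm_connect/MPI_Comm_accept), every send must
>> have had its matching receive called before either side disconnects:
>> 
>>    /* client */
>>    MPI_Send(&val, 1, MPI_INT, 0, 0, intercomm);
>>    MPI_Comm_disconnect(&intercomm);
>> 
>>    /* server: must have called the matching receive first */
>>    MPI_Recv(&val, 1, MPI_INT, 0, 0, intercomm, MPI_STATUS_IGNORE);
>>    MPI_Comm_disconnect(&intercomm);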
>> 
>> Rajeev
>> 
>> 
>> On Nov 12, 2013, at 5:10 PM, Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote:
>> 
>>> Rajeev --
>>> 
>>> Any insight on why it says "...complete *and matched*" (emphasis is mine)?
>>> 
>>>> MPI_COMM_DISCONNECT may be called only if all communication is complete and
>>>> matched
>>> 
>>> The standard defines what matching means for point-to-point communications, but:
>>> 
>>> 1. Does it define how an application is able to tell if a communication *has been matched* by the peer process?
>>> 
>>> 2. What about non-point-to-point communication?  E.g., is there a definition for "match" for collective file IO?
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Nov 12, 2013, at 5:40 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>>> 
>>>> Let's take this sequence (argument names are illustrative):
>>>> 
>>>> MPI_Isend(buf, count, datatype, dest, tag, intercomm, &req);
>>>> MPI_Wait(&req, MPI_STATUS_IGNORE);
>>>> MPI_Comm_disconnect(&intercomm);
>>>> 
>>>> After MPI_Wait returns, the data has not necessarily gone over to the other side; it could still be buffered locally. Comm_disconnect will ensure that it gets communicated to the other side. If the Wait weren't called at all in the above sequence, it would be similar to calling MPI_Finalize without a Wait (i.e., incorrect).
>>>> 
>>>> Think of Comm_disconnect as "whatever connection was there between client and server is gone".
>>>> 
>>>> Rajeev
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Nov 12, 2013, at 4:27 PM, Nathan Hjelm <hjelmn at lanl.gov> wrote:
>>>> 
>>>>> On Tue, Nov 12, 2013 at 04:20:05PM -0600, Rajeev Thakur wrote:
>>>>>> On Nov 12, 2013, at 4:08 PM, Nathan Hjelm <hjelmn at lanl.gov> wrote:
>>>>>> 
>>>>>>> That doesn't match with the wording on p 400 32-34:
>>>>>>> 
>>>>>>> "MPI_COMM_DISCONNECT has the same action as MPI_COMM_FREE, except that it
>>>>>>> waits for pending communication to finish internally and enables the guarantee about the
>>>>>>> behavior of disconnected processes."
>>>>>> 
>>>>>> The above sentence says that MPI_Comm_free does not wait for pending communication to complete, whereas MPI_Comm_disconnect does.  
>>>>> 
>>>>> That makes absolutely no sense if MPI_Wait/MPI_Test cannot be called after MPI_Comm_disconnect. If
>>>>> neither of those functions can be called after MPI_Comm_disconnect, then the better wording would be
>>>>> that all communication MUST be complete before the call to MPI_Comm_disconnect, without any
>>>>> qualification that MPI_Comm_disconnect will wait until all communication is complete. There should
>>>>> be no pending communication; otherwise we would have to allow MPI_Wait/MPI_Test after the call to
>>>>> MPI_Comm_disconnect. Do you see why this is confusing/bad wording in the standard? As an implementor
>>>>> I cannot tell what was intended here.
>>>>> 
>>>>>>> Which suggests that some communication may not be finished when MPI_Comm_disconnect is called. Note
>>>>>>> that it is safe to call MPI_Wait after MPI_Comm_disconnect but not after MPI_Finalize.
>>>>>> 
>>>>>> You cannot call MPI_Wait after MPI_Comm_disconnect. You can call it after MPI_Comm_free.
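>>>>>> 
>>>>>> For example (hypothetical fragment; 'buf', 'req', and 'newcomm' are
>>>>>> illustrative names):
>>>>>> 
>>>>>>    MPI_Isend(buf, 1, MPI_INT, 0, 0, newcomm, &req);
>>>>>>    MPI_Comm_free(&newcomm);            /* free is deferred...     */
>>>>>>    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* ...so this is legal     */
>>>>>> 
>>>>>> whereas with disconnect the request must be completed first:
>>>>>> 
>>>>>>    MPI_Isend(buf, 1, MPI_INT, 0, 0, newcomm, &req);
>>>>>>    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete it first       */
>>>>>>    MPI_Comm_disconnect(&newcomm);      /* then disconnect         */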
>>>>> 
>>>>> I don't see that anywhere in the description of MPI_Comm_disconnect. As far as I can tell the
>>>>> code snippet I provided is 100% correct MPI code.
>>>>> 
>>>>> -Nathan
>>>> 
>>> 
>>> 
>>> 
>> 
> 
> 
> -- 
> Jeff Squyres
> jsquyres at cisco.com
> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
> 



