[Mpi-forum] MPI_Win_lock_all() ordering question

Jim Dinan james.dinan at gmail.com
Wed Jul 17 11:26:05 CDT 2013


Yes and no.  That ticket is calling out a potential starvation issue,
whereas the issue you raised is deadlock.  There is no way to solve the
deadlock issue through prioritization, since there could be an unbounded
delay between your two calls to MPI_Win_lock and you can't prioritize a
request you haven't received.

The deadlock issue could be resolved by requiring that all locks in
MPI_Win_lock_all be obtained atomically, but this would be an unreasonable
restriction on implementations.  Torsten Hoefler published a paper that
takes this approach by using a single global lock, but that has a
negative impact on conventional lock/unlock.  The implementation I wrote
for MPICH requests the shared lock at each process lazily, as needed, and
you can certainly deadlock it as you described.
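
For concreteness, here is a minimal sketch of the sequence in question
(two ranks; the window setup is only illustrative, and whether the run
actually hangs depends on the order in which the implementation acquires
the per-process locks inside MPI_Win_lock_all):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int     rank, size, buf = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) MPI_Abort(MPI_COMM_WORLD, 1);   /* run with 2 ranks */

        MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        if (rank == 0) {
            /* Take rank 0 exclusively, then try to take rank 1 exclusively. */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
            /* ... RMA operations ... */
            MPI_Win_unlock(1, win);
            MPI_Win_unlock(0, win);
        } else {
            /* Ask for a shared lock on every process.  If the implementation
             * acquires the per-process locks lazily and takes rank 1's own
             * lock before requesting rank 0's, then rank 0's exclusive request
             * on rank 1 waits behind rank 1's shared lock, while rank 1's
             * shared request on rank 0 waits behind rank 0's exclusive lock,
             * and neither epoch can complete. */
            MPI_Win_lock_all(0, win);
            /* ... RMA operations ... */
            MPI_Win_unlock_all(win);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }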

 ~Jim.


On Wed, Jul 17, 2013 at 12:04 PM, Michael Raymond <mraymond at sgi.com> wrote:

>   Thanks. Searching the tickets just now, I found #363. Are my issue and
> that ticket related?
>
> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/363
>
>
> On 07/17/2013 10:54 AM, Jim Dinan wrote:
>
>> Hi Michael,
>>
>> I believe that this can deadlock.  I agree that it's unfortunate that
>> we didn't call it out in the spec, since doing so would help to clarify
>> the intended semantics.
>>
>>   ~Jim.
>>
>>
>> On Wed, Jul 17, 2013 at 11:41 AM, Michael Raymond <mraymond at sgi.com> wrote:
>>
>>        I've got a question about the use of MPI_Win_lock_all() in the
>>     face of competing MPI_Win_lock(EXCLUSIVE) calls. Consider the
>>     following call sequence:
>>
>>
>>     0                                1
>>     MPI_Win_lock(EXCLUSIVE, 0)
>>
>>     MPI_Win_lock(EXCLUSIVE, 1)       MPI_Win_lock_all()
>>
>>     ....                             ....
>>     MPI_Win_unlock(1)                MPI_Win_unlock_all()
>>     MPI_Win_unlock(0)
>>
>>        In this situation 0 has an exclusive lock on itself.
>>     Simultaneously, 0 tries to get 1 exclusively and 1 tries to get a
>>     shared lock on everyone. If 0 gets lucky, it will get to go first
>>     and everything will go fine. If OTOH 1 locks itself shared and then
>>     tries to lock 0 shared, deadlock ensues. You could argue that there
>>     should be some global queue / governor that lets 0 go first, but
>>     then I could see running into scalability problems.
>>
>>        I can't find any place in the standard that says whether the user
>>     shouldn't do this, whether deadlock is allowed, or whether the MPI
>>     implementation should figure things out on its own. Thoughts?
>>
>>     --
>>     Michael A. Raymond
>>     SGI MPT Team Leader
>>     (651) 683-3434
>>
> --
> Michael A. Raymond
> SGI MPT Team Leader
> (651) 683-3434
>
>