[Mpi-forum] another request for iaccept
Jeff Hammond
jeff.science at gmail.com
Fri Feb 26 00:41:11 CST 2016
On Fri, Feb 26, 2016 at 12:25 AM, George Bosilca <bosilca at icl.utk.edu>
wrote:
> JeffH,
>
> You might have been misled by the date on the ticket. This is an old
> (stale) ticket that was imported from Trac when we moved to GitHub. Thanks
> for pointing it out.
>
>
So the current release of Open MPI supports MPI_THREAD_MULTIPLE for all
BTLs and MTLs without exception? That is fantastic, because it means that
the application community can finally move forward with interesting MPI+X
designs.
Please close that GitHub ticket now, because I frequently cite it as
evidence that the MPI community does not take threads seriously and that
users unfortunately must rely upon MPICH-based implementations, rather than
any implementation of the MPI standard, in order to use
MPI_THREAD_MULTIPLE.
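To be concrete about the application side of this: portable codes have to
ask for full thread support at initialization and check what the library
actually grants. A minimal sketch (standard MPI, error handling omitted):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request the highest thread level; the library may grant less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        /* Any thread-based workaround for blocking calls is now off
         * the table on this implementation. */
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided\n");
    }

    MPI_Finalize();
    return 0;
}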
Jeff
> That being said, your point remains valid: we should not force users to
> adopt complicated solutions based on threads making blocking MPI calls
> just to cope with our inability to converge in a timely manner on a
> reasonable non-blocking solution.
>
> George.
>
>
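To make the cost of that workaround concrete, it amounts to parking the
blocking call on a helper thread, roughly like this sketch (pthreads; the
struct and names are mine, purely illustrative):

#include <mpi.h>
#include <pthread.h>

/* Illustrative container for the accept's inputs and output. */
struct accept_args {
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm newcomm;
};

static void *accept_thread(void *p)
{
    struct accept_args *a = p;
    /* The blocking accept lives on its own thread; if any other thread
     * makes MPI calls in the meantime, MPI_THREAD_MULTIPLE is required. */
    MPI_Comm_accept(a->port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &a->newcomm);
    return NULL;
}

/* Usage: pthread_create(&tid, NULL, accept_thread, &args), keep computing,
 * then pthread_join(tid, NULL) once the connection is actually needed. */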
> On Fri, Feb 26, 2016 at 12:55 AM, Jeff Hammond <jeff.science at gmail.com>
> wrote:
>
>>
>>
>> On Thu, Feb 25, 2016 at 3:40 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>>
>>>
>>> > On Feb 25, 2016, at 2:48 PM, Jeff Squyres (jsquyres) <jsquyres at
>>> > cisco.com> wrote:
>>> >> 1) Low overhead (justifies Isend/Irecv etc.)
>>> >> 2) Scarcity of threads (e.g., the BlueGene/L rationale)
>>> >
>>> > Agreed -- neither of these is likely important for an
>>> > iconnect/iaccept scenario.
>>>
>>> I would disagree. This is *always* a problem, since adding threads hurts
>>> other operations in the application. For example, if I need a
>>> nonblocking iconnect/iaccept in one small part of the application, every
>>> fine-grained PUT/GET/ISEND operation in the rest of the application
>>> becomes more expensive.
>>>
>>>
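Since it keeps coming up in this thread: no nonblocking accept exists in
the standard today. Purely for illustration, I imagine the proposal would
have roughly the following shape (the routine and its signature below are
hypothetical):

/* HYPOTHETICAL -- MPI_Comm_iaccept is not in the MPI standard; this
 * signature is only a guess at what a proposal might look like:
 *
 *   int MPI_Comm_iaccept(const char *port_name, MPI_Info info, int root,
 *                        MPI_Comm comm, MPI_Comm *newcomm,
 *                        MPI_Request *request);
 *
 * With such a routine, the accept folds into an existing event loop with
 * no helper thread and no MPI_THREAD_MULTIPLE requirement:
 *
 *   MPI_Comm_iaccept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF,
 *                    &client, &reqs[nreqs++]);
 *   ...
 *   MPI_Testany(nreqs, reqs, &index, &flag, MPI_STATUS_IGNORE);
 */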
>> Does making a blocking operation nonblocking via threads not require
>> MPI_THREAD_MULTIPLE, which Open MPI does not support
>> (https://github.com/open-mpi/ompi/issues/157) more than 12 years after
>> its standardization? It seems that a significant fraction of the MPI
>> Forum does not believe that thread-safe MPI calls are important, so how
>> can anyone argue that threads are a solution to this problem?
>>
>> Jeff
>>
>>
>>> > But I do think the progression overlap with application threads can
>>> > be quite useful.
>>>
>>> Right. Having a nonblocking operation is not about performance
>>> improvements; it is that I can now stick a request into an existing
>>> Waitall or Testany in my application. FWIW, at least one of our
>>> applications uses NBC I/O in exactly this way. Before MPI-3.1, they had
>>> to use an event-based model (with Testany) for everything else and a
>>> blocking call for I/O, which was inconvenient and hurt performance.
>>>
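That pattern is easy to sketch with MPI-3.1's nonblocking collective I/O
(buffer names and counts below are illustrative):

#include <mpi.h>

/* One completion point for I/O and point-to-point traffic: the MPI-3.1
 * nonblocking collective write returns an ordinary request that
 * completes alongside everything else in a single Waitall. */
void flush_step(MPI_File fh, const double *out, double *halo,
                int n, int neighbor, MPI_Comm comm)
{
    MPI_Request reqs[2];

    MPI_File_iwrite_all(fh, out, n, MPI_DOUBLE, &reqs[0]);
    MPI_Irecv(halo, n, MPI_DOUBLE, neighbor, 0, comm, &reqs[1]);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}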
>>> >> There are some interactions with multiple-completion routines and
>>> >> limitations in generalized requests, but fixing generalized requests
>>> >> would be a more general solution.
>>> >
>>> > Agreed -- fixing generalized requests has been a white whale for
>>> > quite a while now.
>>>
>>> There are technical reasons why this has not been easy to fix, unlike
>>> iconnect/iaccept, where people simply lack the bandwidth to put
>>> together a proposal.
>>>
>>> -- Pavan
>>>
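For background on why generalized requests do not solve this: MPI never
calls back into user code to make progress, so some agent must complete
the request by hand. A sketch against the real MPI-2 interface (callbacks
kept minimal, extra_state unused):

#include <mpi.h>
#include <stddef.h>

static int query_fn(void *extra_state, MPI_Status *status)
{
    /* Fill in a trivial status for the completed operation. */
    MPI_Status_set_elements(status, MPI_BYTE, 0);
    MPI_Status_set_cancelled(status, 0);
    status->MPI_SOURCE = MPI_UNDEFINED;
    status->MPI_TAG = MPI_UNDEFINED;
    return MPI_SUCCESS;
}

static int free_fn(void *extra_state) { return MPI_SUCCESS; }

static int cancel_fn(void *extra_state, int complete) { return MPI_SUCCESS; }

void start_user_level_accept(MPI_Request *req)
{
    MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, req);
    /* The crux: MPI will never progress this request for us.  Some
     * agent -- in practice, a thread running the blocking accept --
     * must eventually call MPI_Grequest_complete(*req), so threads
     * come back in through the side door. */
}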
>>
>>
>>
>
>
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/