<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 26, 2016 at 12:25 AM, George Bosilca <span dir="ltr"><<a href="mailto:bosilca@icl.utk.edu" target="_blank">bosilca@icl.utk.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">JeffH,<div><br></div><div>You might have been misled by the date on the ticket. This is an old (stale ticket) that was imported from TRAC when we moved to github. Thanks for pointing it out.</div><div><br></div></div></blockquote><div><br></div><div>So the current release of Open-MPI supports MPI_THREAD_MULTIPLE for all BTL/MTL without exception? That is fantastic, because it means that the application community can finally move forward with interesting MPI+X designs.</div><div><br></div><div>Please close that Github ticket now, because I frequently cite it as evidence that the MPI community does not take threads seriously and users unfortunately must rely upon MPICH-based implementations rather than implementations of the MPI standard in order to use MPI_THREAD_MULTIPLE.</div><div><br></div><div>Jeff</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div></div><div>That being said, your point remains valid, we should not force the users to use complicated solution based on threads doing blocking MPI calls just to cope with our inability to timely converge toward a reasonable non-blocking solution. </div><span class="HOEnZb"><font color="#888888"><div><br></div><div> George.</div><div><br></div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 26, 2016 at 12:55 AM, Jeff Hammond <span dir="ltr"><<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Thu, Feb 25, 2016 at 3:40 PM, Balaji, Pavan <span dir="ltr"><<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span><br>
>>>> On Feb 25, 2016, at 2:48 PM, Jeff Squyres (jsquyres) <jsquyres@cisco.com> wrote:
>>>>> 1) Low overhead (justifies Isend/Irecv etc.)
>>>>> 2) Scarcity of threads (e.g., the BlueGene/L rationale)
>>>>
>>>> Agreed -- neither of these are likely important for an iconnect/iaccept scenario.
>>>
>>> I would disagree. This is *always* a problem, since adding threads hurts other operations in the application. For example, if I need to use a nonblocking iconnect/iaccept in one small part of the application, it now means that every fine-grained PUT/GET/ISEND operation in the rest of the application would be more expensive.
>>>
>>
>> Does making a blocking operation nonblocking via threads not require MPI_THREAD_MULTIPLE, which Open MPI does not support (https://github.com/open-mpi/ompi/issues/157) more than 12 years after its standardization? It seems that a significant fraction of the MPI Forum does not believe that thread-safe MPI calls are important, so how can anyone argue that threads are a solution to this problem?
>>
>> Jeff
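
For concreteness, the thread-based workaround under discussion looks roughly like the untested sketch below. It assumes the implementation actually grants MPI_THREAD_MULTIPLE (which is exactly what is in question); error handling and the exchange of the port name with the client are omitted.

    /* Sketch: emulating a nonblocking accept with a helper thread.
     * The helper blocks in MPI_Comm_accept while the main thread keeps
     * making MPI calls, which is only legal with MPI_THREAD_MULTIPLE. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    static char port[MPI_MAX_PORT_NAME];
    static MPI_Comm client = MPI_COMM_NULL;
    static volatile int accept_done = 0;  /* simplified; real code would use atomics */

    static void *accept_thread(void *arg)
    {
        (void)arg;
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        accept_done = 1;
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "no MPI_THREAD_MULTIPLE; the workaround is not possible\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Open_port(MPI_INFO_NULL, port);
        pthread_t tid;
        pthread_create(&tid, NULL, accept_thread, NULL);

        while (!accept_done) {
            /* ... application work and fine-grained MPI calls continue here,
             * all of which now pay the MPI_THREAD_MULTIPLE overhead ... */
        }

        pthread_join(tid, NULL);
        MPI_Close_port(port);
        MPI_Comm_disconnect(&client);
        MPI_Finalize();
        return 0;
    }
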
>>>> But I do think the progression overlap with application threads can be quite useful.
>>>
>>> Right. Having a nonblocking operation is not about performance improvements; it's that I can now stick a request into an existing Waitall or Testany in my application. FWIW, at least one of our applications uses NBC (nonblocking collective) I/O in exactly this way. Before MPI-3.1, they had to use an event-driven model (with Testany) for everything else and a blocking call for I/O, which was inconvenient and hurt performance.
>>>
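
The pattern described here, a request from MPI-3.1 nonblocking collective I/O completed by the same Waitall (or Testany) as the application's other requests, looks roughly like the untested sketch below; the buffers, peer rank, and file handle are placeholders.

    /* Sketch: an MPI-3.1 nonblocking collective write returns an ordinary
     * request, so it can sit in the same completion call as point-to-point
     * requests.  Before MPI-3.1 the write had to be a separate blocking call. */
    #include <mpi.h>

    void exchange_and_write(MPI_File fh, double *halo_in, double *halo_out,
                            const double *snapshot, int n, int peer, MPI_Comm comm)
    {
        MPI_Request reqs[3];

        MPI_Irecv(halo_in,  n, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
        MPI_Isend(halo_out, n, MPI_DOUBLE, peer, 0, comm, &reqs[1]);
        MPI_File_iwrite_all(fh, snapshot, n, MPI_DOUBLE, &reqs[2]);  /* new in MPI-3.1 */

        /* ... overlap with computation here ... */

        MPI_Waitall(3, reqs, MPI_STATUSES_IGNORE);  /* or MPI_Testany in an event loop */
    }
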
>>>>> There are some interactions with multiple-completion routines and limitations in the generalized requests, but fixing generalized requests would be a more general solution.
>>>>
>>>> Agreed -- fixing generalized requests has been a white whale for quite a while now.
>>>
>>> There are technical reasons why this was not easily fixed, unlike iconnect/iaccept, where people simply haven't had the bandwidth to put together a proposal.
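
For reference, the generalized-request interface being referred to is the MPI-2 one sketched below (untested; the "iaccept" wrapper and its names are hypothetical). It also illustrates the limitation: MPI makes no progress on a generalized request, so a user thread still has to do the blocking work and call MPI_Grequest_complete, and that thread again needs MPI_THREAD_MULTIPLE.

    /* Sketch: wrapping a blocking accept in an MPI-2 generalized request.
     * The resulting request can go into MPI_Waitall/MPI_Testany, but MPI
     * does not progress it; the helper thread must complete it explicitly. */
    #include <mpi.h>
    #include <pthread.h>

    static int query_fn(void *extra_state, MPI_Status *status)
    {
        (void)extra_state;
        MPI_Status_set_elements(status, MPI_BYTE, 0);
        MPI_Status_set_cancelled(status, 0);
        status->MPI_SOURCE = MPI_UNDEFINED;
        status->MPI_TAG = MPI_UNDEFINED;
        return MPI_SUCCESS;
    }
    static int free_fn(void *extra_state) { (void)extra_state; return MPI_SUCCESS; }
    static int cancel_fn(void *extra_state, int complete)
    { (void)extra_state; (void)complete; return MPI_SUCCESS; }

    struct accept_args { char *port; MPI_Comm *newcomm; MPI_Request req; };

    static void *do_accept(void *p)
    {
        struct accept_args *a = p;
        MPI_Comm_accept(a->port, MPI_INFO_NULL, 0, MPI_COMM_SELF, a->newcomm);
        MPI_Grequest_complete(a->req);  /* only now does MPI_Wait* see it as done */
        return NULL;
    }

    /* Hypothetical "iaccept": the caller gets back a request it can mix into
     * an existing Waitall/Testany, at the cost of a dedicated thread. */
    int my_iaccept(char *port, MPI_Comm *newcomm,
                   struct accept_args *a, pthread_t *tid, MPI_Request *req)
    {
        MPI_Grequest_start(query_fn, free_fn, cancel_fn, a, req);
        a->port = port; a->newcomm = newcomm; a->req = *req;
        return pthread_create(tid, NULL, do_accept, a);
    }
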
<span><font color="#888888"><br>
-- Pavan<br>
<a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum" rel="noreferrer" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>