<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 25, 2016 at 3:40 PM, Balaji, Pavan <span dir="ltr"><<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class=""><br>
> On Feb 25, 2016, at 2:48 PM, Jeff Squyres (jsquyres) <<a href="mailto:jsquyres@cisco.com">jsquyres@cisco.com</a>> wrote:<br>
>> 1) Low overhead (justifies Isend/Irecv etc.)<br>
>> 2) Scarcity of threads (e.g., the BlueGene/L rationale)<br>
><br>
> Agreed -- neither of these are likely important for an iconnect/iaccept scenario.<br>
>
> I would disagree. This is *always* a problem, since adding threads hurts other operations in the application. For example, if I need to use a nonblocking iconnect/iaccept in one small part of the application, every fine-grained PUT/GET/ISEND operation in the rest of the application now becomes more expensive.
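
For concreteness, here is a minimal sketch of the thread-based workaround under discussion; the names (conn_state, accept_thread) are mine, not from any proposal:

    /* Sketch: emulating a nonblocking accept with a helper thread. */
    #include <mpi.h>
    #include <pthread.h>

    struct conn_state {
        char     port[MPI_MAX_PORT_NAME];
        MPI_Comm newcomm;
    };

    static void *accept_thread(void *arg)
    {
        struct conn_state *st = arg;
        /* Blocks here while the main thread keeps making MPI calls,
         * which is legal only at MPI_THREAD_MULTIPLE -- and asking
         * for that level forces the implementation to synchronize
         * every MPI call in the whole application. */
        MPI_Comm_accept(st->port, MPI_INFO_NULL, 0, MPI_COMM_SELF,
                        &st->newcomm);
        return NULL;
    }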
<span class=""><br></span></blockquote><div><br></div><div>Does making a blocking operation nonblocking via threads not require MPI_THREAD_MULTIPLE, which Open-MPI does not support (<a href="https://github.com/open-mpi/ompi/issues/157">https://github.com/open-mpi/ompi/issues/157</a>) more than 12 years after its standardization? It seems that a significant fraction of the MPI Forum does not believe that thread-safe MPI calls are important, so how can anyone argue that threads are a solution to this problem?</div><div><br></div><div>Jeff</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">

>> But I do think the progression overlap with application threads can be quite useful.
>
> Right. Having a nonblocking operation is not about a performance improvement; it means I can now stick a request into an existing Waitall or Testany in my application. FWIW, at least one of our applications uses NBC I/O in exactly this way. Before MPI-3.1, they had to use an event-based model (with Testany) for everything else and a blocking call for I/O, which was inconvenient and hurt performance.
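
That pattern looks roughly like the following sketch, where fh, buf, n, peer, and comm are placeholders assumed to be set up elsewhere:

    /* Sketch: an MPI-3.1 nonblocking collective I/O request completed
     * by the same Waitall as an ordinary point-to-point request. */
    MPI_Request reqs[2];
    MPI_File_iwrite_all(fh, buf, n, MPI_DOUBLE, &reqs[0]); /* NBC I/O */
    MPI_Isend(buf, n, MPI_DOUBLE, peer, 0, comm, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);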
<span class=""><br>
>> There are some interactions with multiple-competion routines and limitations in the generalized requests, but fixing generalized requests would be a more general solution.<br>
><br>
> Agreed -- fixing generalized requests has been a white whale for quite a while now.<br>
<br>
</span>There are technical reasons for why this was not easily fixed, unlike iconnect/iaccept where people are bandwidth limited to put together a proposal.<br>
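
As I understand it, the core technical problem is that MPI-2 generalized requests give you a request that Waitall/Testany can complete, but no hook through which the MPI library can make progress on it; the proposed fixes add such a poll callback. A sketch of the limitation, using only the standard MPI_Grequest_start callbacks:

    /* Sketch: a generalized request is externally completable, but
     * MPI itself will never advance it. */
    static int query_fn(void *extra_state, MPI_Status *status)
    {
        MPI_Status_set_elements(status, MPI_BYTE, 0);
        MPI_Status_set_cancelled(status, 0);
        status->MPI_SOURCE = MPI_UNDEFINED;
        status->MPI_TAG    = MPI_UNDEFINED;
        return MPI_SUCCESS;
    }
    static int free_fn(void *extra_state)                 { return MPI_SUCCESS; }
    static int cancel_fn(void *extra_state, int complete) { return MPI_SUCCESS; }

    ...
    MPI_Request req;
    MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
    /* MPI will never progress req on its own; some thread must still
     * do the blocking accept and then call MPI_Grequest_complete(req)
     * -- which is exactly the thread we were trying to avoid. */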
<span class=""><font color="#888888"><br>
-- Pavan<br>
</font></span><div class=""><div class="h5"><br>
_______________________________________________<br>
mpi-forum mailing list<br>
<a href="mailto:mpi-forum@lists.mpi-forum.org">mpi-forum@lists.mpi-forum.org</a><br>
<a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum" rel="noreferrer" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>