<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 27, 2017 at 3:10 PM, Dan Holmes <span dir="ltr"><<a href="mailto:d.holmes@epcc.ed.ac.uk" target="_blank">d.holmes@epcc.ed.ac.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Hi Jeff,<div><br></div><div>I wrote some notes about our discussion.</div><div><a href="https://github.com/mpiwg-p2p/p2p-issues/wiki/notes-2017-03-27" target="_blank">https://github.com/mpiwg-p2p/<wbr>p2p-issues/wiki/notes-2017-03-<wbr>27</a></div><div><br></div></div></blockquote><div><br></div><div>Indeed, as we discussed in San Jose last year, I don't think we need to make all 36 A-F pairs.</div><div><br></div><div>Fsend doesn't block on any thing, so it is effectively Ifsend(req=REQUEST_NULL). Synchronous freeing send (Fssend or Sfsend?) is valid and arguably useful for all of the reasons that Issend is.</div><div><br></div><div>Ready freeing send (same naming quandry) is valid but since I've heard lots of criticism of ready send before, I won't try to defend it in this context.</div><div><br></div><div>Arecv and Iarecv both make sense. And they take "void**" (identical to void* in C, of course) just like MPI_Alloc_mem (and not like MPI_Recv).</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div></div><div>In short, the memory must be returned to the original owner for de-allocation.</div></div></blockquote><div><br></div><div>MPI owns the memory. No stack or user heap allocators allowed. Nothing else is the slightest bit practical.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>We mentioned an attach/detach method for using user memory.</div></div></blockquote><div><br></div><div>There is no value in this. MPI can't turn arbitrary user memory into shared in general so defeats the primary purpose.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>We also discussed using Fsend/Arecv to/from MPI_COMM_NULL to transfer ownership without needing another message.</div></div></blockquote><div><br></div><div>This makes no sense. Please elaborate.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>This needs careful thought - hence the suggestion for sequence diagrams.</div><div><br></div></div></blockquote><div><br></div><div>I drew most of them at some point. Easy to do again.</div><div><br></div><div>Jeff</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div></div><div>Cheers,</div><div>Dan.</div><div><div class="h5"><div><br><div><blockquote type="cite"><div>On 27 Mar 2017, at 22:42, Jeff Hammond <<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>> wrote:</div><br class="m_5600796597159218194Apple-interchange-newline"><div><div dir="auto"><div>Actually, it has to be required in the literal sense.</div><div><br></div><div>Fsend frees memory. Has to know which allocator used. MPI_Alloc_mem is the only one MPI knows about. 
> In short, the memory must be returned to the original owner for de-allocation.

MPI owns the memory. No stack or user-heap allocators are allowed. Nothing else is the slightest bit practical.

> We mentioned an attach/detach method for using user memory.

There is no value in this. MPI can't turn arbitrary user memory into shared memory in the general case, so that defeats the primary purpose.

> We also discussed using Fsend/Arecv to/from MPI_COMM_NULL to transfer ownership without needing another message.

This makes no sense. Please elaborate.

> This needs careful thought - hence the suggestion for sequence diagrams.

I drew most of them at some point. Easy to do again.

Jeff

> Cheers,
> Dan.
>
> On 27 Mar 2017, at 22:42, Jeff Hammond <jeff.science@gmail.com> wrote:
>
>> Actually, it has to be required in the literal sense.
>>
>> Fsend frees memory, so it has to know which allocator was used. MPI_Alloc_mem is the only one MPI knows about.
>>
>> Same in reverse for Arecv and MPI_Free_mem.
>>
>> How else can you do it? We could register callbacks for the memory allocator, but that would almost certainly prevent every useful optimization.
>>
>> Jeff
>>
>> Sent from my iPhone
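(To spell out the ownership round trip this implies, a minimal sketch: MPI_Alloc_mem and MPI_Free_mem below are the real, existing MPI calls; the MPIX_ functions are the strawman ones sketched above, and N, dest, src, and tag are placeholders.)

    #include <mpi.h>

    #define N 1024  /* placeholder message size */

    /* Sender: allocate with MPI_Alloc_mem, then hand the buffer to the
       strawman freeing send.  After MPIX_Fsend returns, MPI owns the
       buffer; the user must not read, write, or free it. */
    void sender(int dest, int tag, MPI_Comm comm)
    {
        double *sbuf;
        MPI_Alloc_mem(N * sizeof(double), MPI_INFO_NULL, &sbuf);
        /* ... fill sbuf ... */
        MPIX_Fsend(sbuf, N, MPI_DOUBLE, dest, tag, comm);
        /* No MPI_Free_mem here: MPI frees sbuf internally. */
    }

    /* Receiver: the strawman allocating receive hands back a buffer
       that MPI allocated; the user returns it with MPI_Free_mem. */
    void receiver(int src, int tag, MPI_Comm comm)
    {
        double *rbuf;
        MPI_Status status;
        MPIX_Arecv((void **)&rbuf, N, MPI_DOUBLE, src, tag, comm, &status);
        /* ... consume rbuf ... */
        MPI_Free_mem(rbuf);
    }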
>> On Mar 27, 2017, at 8:57 AM, Dan Holmes <d.holmes@epcc.ed.ac.uk> wrote:
>>
>>> “required” is possibly too strong. “advisable” would be closer to my expectation. “needed in order for MPI to enable all possible optimisations” is wordier but more precise.
>>>
>>> Cheers,
>>> Dan.
>>>
>>> On 27 Mar 2017, at 16:55, Jeff Hammond <jeff.science@gmail.com> wrote:
>>>
>>>> No. MPI_Alloc_mem and MPI_Free_mem were going to be required. At least, that was my plan.
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On Mar 27, 2017, at 8:51 AM, Jim Dinan <james.dinan@gmail.com> wrote:
>>>>
>>>>> Did we previously look at using MPI_Buffer_attach as a way to support allocate-and-recv?
>>>>>
>>>>> On Mon, Mar 27, 2017 at 11:38 AM, Jeff Hammond <jeff.science@gmail.com> wrote:
>>>>>
>>>>>> I'm on vacation, but I endorse other people doing stuff with Fsend-Arecv, since the slacker who has owned it for the past two years hasn't made any progress. I think I already copied all the relevant content to GitHub.
>>>>>>
>>>>>> Jeff
>>>>>>
>>>>>> Sent from my iPhone
>>>>>> On Mar 27, 2017, at 8:16 AM, Dan Holmes <d.holmes@epcc.ed.ac.uk> wrote:
>>>>>>
>>>>>>> Hi Jim, et al,
>>>>>>>
>>>>>>> I was hoping to move the WG on to talking about Freeing-Send and Allocating-Receive (Fsend & Arecv).
>>>>>>>
>>>>>>> I’d like to re-boot and refresh caches on that, with the goal of presenting something (probably informally) at the next face-to-face meeting in June.
>>>>>>>
>>>>>>> I’ll start the call and see how many turn up.
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Dan.
>>>>>>>
>>>>>>> On 27 Mar 2017, at 15:28, Jim Dinan <james.dinan@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> The info query proposal seems to be converging. I don't think there's anything to discuss on this topic this week.
>>>>>>>>
>>>>>>>> The current status is that Pavan will check with vendors to make sure they are OK with updating MPICH to report the user's info key value instead of the effective value used by the implementation. Assuming this is OK, it sounds like the proposal that was read at the last meeting is ready to move forward into voting.
>>>>>>>>
>>>>>>>> Are there any other topics for discussion? If not, I think we can cancel today's meeting.
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> ~Jim.
<a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-p2p" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/<wbr>mailman/listinfo/mpiwg-p2p</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>