<div dir="ltr">I think the argument against was that can be hard to get fine-grain, per-operation remote completion, and harder still to do it efficiently. So, we could end up with an interface that builds a false expectation where users expect overlap that many implementations can't provide.<div><br></div><div> ~Jim.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Nov 6, 2014 at 2:38 AM, Jeff Hammond <span dir="ltr"><<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Barrett didn't have a good argument against these functions except argument count, as I recall. There was some debate as to whether nonblocking flush was better, albeit in an apples-to-oranges way.<br>

I think most people are just frustrated with the grossness of the corner we are painted into with the RMA API. Pirate RMA is the natural consequence of past decisions. The other solution to the application's problem is overlapping windows, which would be only mildly awful if not for our inability to support just one memory model.
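For reference, overlapping windows over the same memory are already legal in MPI-3; the awful part is that each window reports its own memory model, and the separate model is what makes concurrent use through overlapping windows so fragile. A minimal sketch of the attribute query (all standard calls; the function name and buffer are mine):

    #include <mpi.h>
    #include <stdio.h>

    /* Two windows exposing the same local buffer, plus the memory-model
       query that determines whether using them concurrently is sane. */
    void overlapping_windows_sketch(double *buf, MPI_Aint bytes, MPI_Comm comm)
    {
        MPI_Win win_a, win_b;
        MPI_Win_create(buf, bytes, sizeof(double), MPI_INFO_NULL, comm, &win_a);
        MPI_Win_create(buf, bytes, sizeof(double), MPI_INFO_NULL, comm, &win_b);

        int *model, flag;
        MPI_Win_get_attr(win_a, MPI_WIN_MODEL, &model, &flag);
        if (flag && *model == MPI_WIN_SEPARATE)
            printf("separate model: public/private copies make overlap a minefield\n");

        MPI_Win_free(&win_a);
        MPI_Win_free(&win_b);
    }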
<span class="im HOEnZb"><br>
Jeff<br>
<br>
Sent from my iPhone<br>
<br>
</span><div class="HOEnZb"><div class="h5">> On Nov 6, 2014, at 6:47 AM, "Underwood, Keith D" <<a href="mailto:keith.d.underwood@intel.com">keith.d.underwood@intel.com</a>> wrote:<br>
>
> So, what, you're telling me I have to start attending again to give you
> an appropriate amount of difficulty for some of these proposals?
>
>> -----Original Message-----
>> From: mpiwg-rma [mailto:mpiwg-rma-bounces@lists.mpi-forum.org] On
>> Behalf Of Jeff Hammond
>> Sent: Tuesday, November 04, 2014 12:39 PM
>> To: MPI WG Remote Memory Access working group
>> Subject: Re: [mpiwg-rma] RMA WG discussion 12/2014
>>
>> Pirate RMA (remote request completion - I cannot remember the ticket
>> number) needs to be retired if the WG is still not in favor. But then
>> again, Barrett is gone :-)
>>
>> Nonblocking RMA epochs in your SC14 paper should be discussed. That looks
>> promising. Can you create a ticket for it?
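>>
>> For the eventual ticket, a request-based shape is what I have in mind,
>> something like this (MPIX_Win_ifence is invented here by analogy with
>> MPI_Ibarrier; the paper's actual interface may well differ):
>>
>>   /* Hypothetical: the epoch transition completes when the request
>>      does, so the close can overlap with independent local work.
>>      MPIX_Win_ifence is not a standard routine. */
>>   MPI_Request req;
>>   MPIX_Win_ifence(0, win, &req);   /* open epoch, nonblocking */
>>   MPI_Wait(&req, MPI_STATUS_IGNORE);
>>   MPI_Put(buf, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, win);
>>   MPIX_Win_ifence(0, win, &req);   /* close epoch, nonblocking */
>>   /* ... independent computation overlaps here ... */
>>   MPI_Wait(&req, MPI_STATUS_IGNORE);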
>>
>> Jeff
>>
>> Sent from my iPhone
>>
>>> On Nov 4, 2014, at 5:48 PM, William Gropp <wgropp@illinois.edu> wrote:
>>>
>>> The wiki page already has 397 and I added 460. Note also that there is a
>>> list of open tickets on that page; we should try to either adopt or
>>> retire the open ones.
>>>
>>> Bill
>>>
>>>> On Nov 4, 2014, at 10:32 AM, Jeff Hammond <jeff.science@gmail.com> wrote:
>>>>
>>>> I would like to discuss
>>>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/397 and the
>>>> closely related
>>>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/460 in San Jose.
>>>>
>>>> Can we collect the other tickets of interest to people and ask Martin
>>>> for the appropriate allocation of time?
>>>>
>>>> Thanks,
>>>>
>>>> Jeff
>>>>
>>>> --
>>>> Jeff Hammond
>>>> jeff.science@gmail.com
>>>> http://jeffhammond.github.io/