<div dir="ltr">I read the slides and accepted your arguments that there was something to optimize here in some cases, but I wasn't convinced there were that many cases and how much difference it would make to them.<div>
<br>Pavan has suggested some usages but it isn't clear which parts of your proposal actually affect them. I think this proposed feature could help MADNESS a lot, but I refuse to use MADNESS as a litmus test for MPI Forum activities since I did that for the C++ bindings and it amounted to a huge waste of time.</div>
<div><br></div><div>It seems to me that the better way to solve this problem is to support active messages and let the remote handler deal with memory allocation in the most appropriate manner.</div><div><br></div><div>Jeff</div>
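(A purely hypothetical sketch: the MPIX_AM_* names are invented, not an existing or proposed API. The point is only that the handler runs at the target and decides how, or whether, to allocate.)

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical handler invoked at the target when a message arrives.  The
       payload length is known here, so the handler can allocate from whatever
       heap, pool, or slab is most appropriate for the application. */
    static void my_handler(void *payload, size_t len, void *user_state)
    {
        void *buf = malloc(len);      /* or carve from an app-specific pool */
        memcpy(buf, payload, len);
        /* ... hand buf off to the application ... */
        (void)user_state;             /* unused in this sketch */
    }

    /* Invented registration/send calls, shown only to fix ideas:
       MPIX_AM_register(comm, 0, my_handler, NULL);
       MPIX_AM_send(comm, target_rank, 0, data, nbytes);            */

Jeff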
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Aug 26, 2013 at 1:23 PM, David Goodell (dgoodell) <span dir="ltr"><<a href="mailto:dgoodell@cisco.com" target="_blank">dgoodell@cisco.com</a>></span> wrote:<br>
JeffH, please read the slides for motivation and performance opportunities, esp. the big red bullets on #2, #6, and #7.

-Dave

On Aug 26, 2013, at 1:05 PM, Jeff Hammond <jeff.science@gmail.com> wrote:

> When does this actually improve performance? If memory is not a limiting factor, it merely eliminates a function call. If memory is limiting, I don't see how the extra ping-pong needed to keep from hitting OOM errors is a noticeable cost.
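>
> To be concrete, I'm assuming the pattern in question is the usual
> probe-then-allocate receive, something like the sketch below (standard MPI
> calls; as far as I can tell from the slides, the proposal essentially folds
> the malloc into the receive):
>
>     #include <mpi.h>
>     #include <stdlib.h>
>
>     /* Receive a message of unknown size: probe for its length, allocate a
>        buffer, then receive into it.  The malloc is the extra call (and the
>        probe the extra handshake) that an "allocate receive" would absorb. */
>     void *recv_any_size(int src, int tag, MPI_Comm comm, int *count_out)
>     {
>         MPI_Status status;
>         MPI_Probe(src, tag, comm, &status);
>         MPI_Get_count(&status, MPI_BYTE, count_out);
>         void *buf = malloc(*count_out);
>         MPI_Recv(buf, *count_out, MPI_BYTE, status.MPI_SOURCE, status.MPI_TAG,
>                  comm, MPI_STATUS_IGNORE);
>         return buf;
>     }
>
> (MPI_Mprobe/MPI_Mrecv handle the multithreaded version of this pattern, but
> the allocation question is the same.)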
>
> JeffS and Dave: What's the use case? So far, I fail to see how this is more than icing.
>
> Jeff
>
> On Mon, Aug 26, 2013 at 12:55 PM, Sur, Sayantan <sayantan.sur@intel.com> wrote:
> In general this is a good feature to have, and one that improves performance, although the devil is in the details.
>
> What is the app trying to do when it finds itself in the quandary of having to accept a message of any size (probably from any source as well)? It appears to me that the kind of app that is looking for this feature, or that could benefit from it, is really looking for a many-to-one (or one-to-one, or many-to-many) reliably connected stream. The app that benefits significantly from this feature is likely to call it repeatedly, fetching a different amount of data each time.
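>
> For concreteness, here is the sort of consumer loop I have in mind (my guess
> at the usage, not anything taken from the slides). It reuses one growing
> buffer rather than allocating per message:
>
>     #include <mpi.h>
>     #include <stdlib.h>
>
>     /* Many-to-one consumer: repeatedly accept variable-sized messages from
>        any source, reusing (and growing) a single buffer across iterations. */
>     void consume_stream(MPI_Comm comm, int tag, int nmsgs)
>     {
>         char *buf = NULL;
>         int capacity = 0;
>         for (int i = 0; i < nmsgs; i++) {
>             MPI_Status status;
>             int count;
>             MPI_Probe(MPI_ANY_SOURCE, tag, comm, &status);
>             MPI_Get_count(&status, MPI_BYTE, &count);
>             if (count > capacity) {            /* grow only when needed */
>                 buf = realloc(buf, count);
>                 capacity = count;
>             }
>             MPI_Recv(buf, count, MPI_BYTE, status.MPI_SOURCE, tag, comm,
>                      MPI_STATUS_IGNORE);
>             /* ... process count bytes from status.MPI_SOURCE ... */
>         }
>         free(buf);
>     }
>
> An allocate-receive presumably hands back a freshly allocated buffer every
> time, which is part of why a stream abstraction seems like a better fit to me.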
>
> A few meetings ago, Bill mentioned that it would be good to think of the standard as a whole and not get bogged down in narrow use cases. He proposed that a streaming model be added to MPI P2P communications. Can this use case be included in that model?
>
> Thanks,
> Sayantan
>
> > -----Original Message-----
> > From: mpi-forum [mailto:mpi-forum-bounces@lists.mpi-forum.org] On Behalf Of Jeff Squyres (jsquyres)
> > Sent: Monday, August 26, 2013 8:11 AM
> > To: MPI Forum list
> > Subject: [Mpi-forum] MPI "Allocate receive" proposal
> >
> > Dave Goodell and I have a proposal that we've socialized a bit with other
> > Forum members, and we would now like broader Forum feedback. I'll be
> > presenting the attached slides on the concept of an "allocate receive" in
> > Madrid (3:30-4pm on Thursday).
> >
> > There's no text or ticket yet; this is an idea that we want to get feedback
> > on before working up a full proposal.
> >
> > --
> > Jeff Squyres
> > jsquyres@cisco.com
> > For corporate legal information go to:
> > http://www.cisco.com/web/about/doing_business/legal/cri/
>
> --
> Jeff Hammond
> jeff.science@gmail.com

--
Jeff Hammond
jeff.science@gmail.com