[Mpi-forum] MPI "Allocate receive" proposal

Jeff Hammond jeff.science at gmail.com
Mon Aug 26 14:33:52 CDT 2013


I read the slides and accepted your argument that there is something to
optimize here in some cases, but I wasn't convinced that there are many
such cases, or that the optimization would make much difference to them.

Pavan has suggested some use cases, but it isn't clear which parts of your
proposal actually affect them.  I think this proposed feature could help
MADNESS a lot, but I refuse to use MADNESS as a litmus test for MPI Forum
activities, since I did that for the C++ bindings and it amounted to a huge
waste of time.

It seems to me that the better way to solve this problem is to support
active messages and let the remote handler deal with memory allocation in
the most appropriate manner.
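
To make that concrete, here is a minimal sketch of the kind of handler I
have in mind.  The am_register_handler/am_send interface is hypothetical
(no such API exists in MPI today); the point is only that the handler runs
at the target and can allocate however the application sees fit:

    /* Hypothetical active-message interface, sketched for illustration
     * only -- these calls are not part of MPI or any existing library. */
    #include <stdlib.h>
    #include <string.h>

    typedef void (*am_handler_t)(int src, const void *payload, size_t len);

    /* Runs at the target when a message arrives; allocates exactly what
     * the payload needs, from whatever allocator suits the application. */
    static void recv_handler(int src, const void *payload, size_t len)
    {
        void *buf = malloc(len);      /* or a pool/arena allocation */
        if (buf != NULL)
            memcpy(buf, payload, len);
        /* ... hand buf off to the application ... */
    }

    /* Hypothetical registration and send:
     *   am_register_handler(HANDLER_ID, recv_handler);
     *   am_send(target_rank, HANDLER_ID, data, nbytes);
     */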

Jeff


On Mon, Aug 26, 2013 at 1:23 PM, David Goodell (dgoodell) <dgoodell at cisco.com> wrote:

> JeffH, please read the slides for motivation and performance
> opportunities, esp. the big red bullets on #2, #6, and #7.
>
> -Dave
>
> On Aug 26, 2013, at 1:05 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>
> > When does this actually improve performance?  If memory is not a
> > limiting factor, this merely eliminates a function call.  If memory is
> > limiting, I don't see how the overhead of the extra ping-pong needed to
> > avoid OOM errors is noticeable.
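> >
> > For reference, the pattern I assume an allocate-receive would replace
> > is the usual probe-then-allocate sequence (a minimal sketch, assuming
> > byte-typed messages; error handling omitted, and MPI_Mprobe/MPI_Mrecv
> > would be needed instead in threaded code):
> >
> >     MPI_Status st;
> >     int nbytes;
> >     void *buf;
> >
> >     /* Learn the size first, then allocate, then receive. */
> >     MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
> >     MPI_Get_count(&st, MPI_BYTE, &nbytes);
> >     buf = malloc(nbytes);
> >     MPI_Recv(buf, nbytes, MPI_BYTE, st.MPI_SOURCE, st.MPI_TAG,
> >              MPI_COMM_WORLD, MPI_STATUS_IGNORE);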
> >
> > JeffS and Dave: What's the use case?  So far, I fail to see how this is
> > more than icing.
> >
> > Jeff
> >
> >
> > On Mon, Aug 26, 2013 at 12:55 PM, Sur, Sayantan <sayantan.sur at intel.com> wrote:
> > In general, this is a good feature to have, and one that improves
> > performance, although the devil is in the details.
> >
> > What is the app trying to do when it finds itself in the quandary of
> > having to accept a message of any size (probably from any source as
> > well)?  It appears to me that the kind of app that is looking for this
> > feature, or could benefit from it, is really looking for a many-to-one
> > (or one-to-one, or many-to-many) reliably connected stream.  It is
> > likely that an app that benefits significantly from this feature will
> > be using this call repeatedly to fetch differing amounts of data each
> > time.
> >
> > A few meetings ago, Bill mentioned that it would be good to think of
> > the standard as a whole and not get bogged down in narrow use cases.
> > He proposed that a streaming model be added to MPI P2P communications.
> > Can this use case be included in that model?
> >
> > Thanks,
> > Sayantan
> >
> >
> > > -----Original Message-----
> > > From: mpi-forum [mailto:mpi-forum-bounces at lists.mpi-forum.org] On
> > > Behalf Of Jeff Squyres (jsquyres)
> > > Sent: Monday, August 26, 2013 8:11 AM
> > > To: MPI Forum list
> > > Subject: [Mpi-forum] MPI "Allocate receive" proposal
> > >
> > > Dave Goodell and I have a proposal that we've socialized a bit with
> > > other Forum members, and we would now like broader Forum feedback.
> > > I'll be presenting the attached slides on the concept of an "allocate
> > > receive" in Madrid (3:30-4pm on Thursday).
> > >
> > > There's no text or ticket yet; this is an idea that we want to get
> > > feedback on before working up a full proposal.
> > >
> > > --
> > > Jeff Squyres
> > > jsquyres at cisco.com
> > > For corporate legal information go to:
> > > http://www.cisco.com/web/about/doing_business/legal/cri/
> >
> >
> >
> > --
> > Jeff Hammond
> > jeff.science at gmail.com
>
>



-- 
Jeff Hammond
jeff.science at gmail.com