[Mpi-forum] MPI "Allocate receive" proposal

Pavan Balaji balaji at mcs.anl.gov
Mon Aug 26 13:20:01 CDT 2013


Jeff,

While a lot of details still need to be worked out, the core idea is 
still useful for performance.  Specifically, if the application has to 
do a PROBE and then post a receive, the message is always unexpected and 
there may be an extra copy.  With an ARECV, eager communication can be 
zero-copy if the MPI implementation is willing to give away its 
temporary buffer to the user (whether it wants to do that is a different 
story; some networks that don't require memory registration can).
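
To make the contrast concrete, here is a minimal sketch in C.  The 
PROBE/RECV pattern below uses the standard MPI API; the MPI_Arecv call 
and its signature are purely illustrative, since no proposal text exists 
yet, and the buffer-ownership semantics are an open detail.

    /* Today's pattern: probe for the size, allocate, then receive.  By
     * the time the receive is posted, an eager message has typically
     * already landed in an MPI-internal buffer, so a copy is likely. */
    #include <mpi.h>
    #include <stdlib.h>

    void recv_unknown_size(int src, int tag, MPI_Comm comm)
    {
        MPI_Status st;
        int count;

        MPI_Probe(src, tag, comm, &st);
        MPI_Get_count(&st, MPI_BYTE, &count);

        char *buf = malloc(count);
        MPI_Recv(buf, count, MPI_BYTE, src, tag, comm, MPI_STATUS_IGNORE);
        /* ... consume buf ... */
        free(buf);
    }

    /* Hypothetical ARECV (illustrative signature, not proposal text):
     * the implementation allocates -- or donates -- the buffer, so an
     * eager message need not be copied.  How the buffer is later
     * returned to MPI is one of the details to be worked out. */
    void arecv_sketch(int src, int tag, MPI_Comm comm)
    {
        void *buf;
        int count;
        MPI_Status st;

        MPI_Arecv(&buf, &count, MPI_BYTE, src, tag, comm, &st);
        /* ... consume buf; buffer release semantics TBD ... */
    }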

As for use cases, at least ADLB and Charm++ would benefit from this: 
they don't know what message size will come in.  But I'm sure there are 
more.

My concern is not whether someone can use it, but rather how much the 
MPI implementation can do when the sender doesn't know whether the 
receiver is going to receive the message using RECV or ARECV.  But maybe 
with an IARECV, that won't be a problem.
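
A nonblocking variant might look like the sketch below.  MPI_Iarecv is 
speculative, not proposal text; the point is that posting the 
allocate-receive early makes the message expected on arrival, which may 
sidestep the sender-side question above.

    #include <mpi.h>

    /* Speculative nonblocking allocate-receive: post it early so the
     * message is expected when it arrives.  buf and count become valid
     * only after the request completes. */
    void iarecv_sketch(int tag, MPI_Comm comm)
    {
        MPI_Request req;
        void *buf;
        int count;

        MPI_Iarecv(&buf, &count, MPI_BYTE, MPI_ANY_SOURCE, tag, comm, &req);
        /* ... overlap other work here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* buf and count are now valid; buffer release is TBD */
    }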

  -- Pavan

On 08/26/2013 01:05 PM, Jeff Hammond wrote:
> When does this actually improve performance?  If memory is not a
> limiting factor, it merely eliminates a function call.  If memory is
> limiting, I don't see how doing an extra ping-pong to keep from hitting
> OOM errors is noticeable.
>
> JeffS and Dave: What's the use case?  So far, I fail to see how this is
> more than icing.
>
> Jeff
>
>
> On Mon, Aug 26, 2013 at 12:55 PM, Sur, Sayantan
> <sayantan.sur at intel.com> wrote:
>
>     This is a good feature to have in general, and it improves
>     performance, although the devil is in the details.
>
>     What is the app trying to do when it finds itself in the quandary of
>     having to accept a message of any size (probably from any source as
>     well)?  It appears to me that the kind of app that is looking for
>     this feature, or could benefit from it, is really looking for a
>     many-to-one (or one-to-one, or many-to-many) reliably connected
>     stream.  An app that benefits significantly from this feature will
>     likely be using the call repeatedly to fetch differing amounts of
>     data each time.
>
>     A few meetings ago, Bill mentioned that it would be good to think of
>     the standard as a whole and not get bogged down in narrow use-cases.
>     He proposed that a streaming model be added to MPI P2P
>     communications. Can this use-case be included in that model?
>
>     Thanks,
>     Sayantan
>
>
>      > -----Original Message-----
>      > From: mpi-forum [mailto:mpi-forum-bounces at lists.mpi-forum.org] On
>      > Behalf Of Jeff Squyres (jsquyres)
>      > Sent: Monday, August 26, 2013 8:11 AM
>      > To: MPI Forum list
>      > Subject: [Mpi-forum] MPI "Allocate receive" proposal
>      >
>      > Dave Goodell and I have a proposal that we've socialized a bit with
>      > other Forum members, and we would now like feedback from the larger
>      > Forum.  I'll be presenting the attached slides on the concept of an
>      > "allocate receive" in Madrid (3:30-4:00pm on Thursday).
>      >
>      > There's no text or ticket yet; this is an idea that we want to
>     get feedback on
>      > before working up a full proposal.
>      >
>      > --
>      > Jeff Squyres
>      > jsquyres at cisco.com
>      > For corporate legal information go to:
>      > http://www.cisco.com/web/about/doing_business/legal/cri/
>     _______________________________________________
>     mpi-forum mailing list
>     mpi-forum at lists.mpi-forum.org
>     http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum
>
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
>
>
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum
>

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji
