[Mpi-forum] [EXTERNAL] Re: MPI "Allocate receive" proposal
Barrett, Brian W
bwbarre at sandia.gov
Mon Aug 26 11:06:00 CDT 2013
Jeff -
I think it is different from a standards point of view, and it has been
different for some implementations. Take the Portals 3 implementation on
Red Storm: the implementation did not allocate memory internally after
MPI_INIT, so there was never a point at which it would receive a message
and have to allocate memory to land it. The user might face that
decision, but she can also free other user memory to make space for the
incoming message before posting the receive. So I think it is,
semantically, different.
That being said, I don't think the resource-exhaustion corner cases are a
deal-breaker for me; some implementation-dependent phrasing might be
acceptable. It might be useful, though, to define the message
transmission/matching semantics for these corner cases. For example, if
MPI_Arecv returns an error because of resource exhaustion, is the message
lost (my preference) or left in the receive queue? If the message is
lost, what happens to the sender if the send was a synchronous send?
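Here is a minimal user-side sketch of the fallback pattern under
discussion, assuming a hypothetical MPI_Arecv signature; the call does not
exist yet, so the prototype, error behavior, and buffer-ownership rules
below are all assumptions made purely for illustration:

#include <mpi.h>
#include <stdlib.h>

/* Hypothetical sketch: MPI_Arecv is only a proposal, so this prototype
 * and its semantics are assumptions for illustration, not a real MPI
 * call.  The idea is that the library allocates a buffer big enough to
 * land the matched message and hands ownership to the caller. */
int MPI_Arecv(void **buf, int source, int tag, MPI_Comm comm,
              MPI_Status *status);

void receive_with_fallback(int source, int tag, MPI_Comm comm)
{
    void *buf = NULL;
    MPI_Status status;
    int count;

    /* Assumes MPI_ERRORS_RETURN has been set on comm so that resource
     * exhaustion is reported instead of aborting. */
    if (MPI_Arecv(&buf, source, tag, comm, &status) != MPI_SUCCESS) {
        /* Resource exhaustion: free some application memory (not shown)
         * and fall back to a regular receive, much like recovering from
         * an MPI_Alloc_mem failure.  This only works if the matched
         * message is still queued; if it is lost on error, the probe
         * below would block on a different message entirely. */
        MPI_Probe(source, tag, comm, &status);
        MPI_Get_count(&status, MPI_BYTE, &count);
        buf = malloc(count);
        MPI_Recv(buf, count, MPI_BYTE, source, tag, comm, &status);
    }

    /* ... consume the message ...
     * In the fallback path the user frees buf with free(); how an
     * MPI_Arecv buffer would be released (free(), MPI_Free_mem, or a new
     * call) is something the full proposal would have to specify. */
}

Whether that fallback can ever see the same message the failed MPI_Arecv
matched is, of course, exactly the lost-versus-left-in-the-queue question
above.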
Brian
On 8/26/13 9:54 AM, "Jeff Squyres (jsquyres)" <jsquyres at cisco.com> wrote:
>FWIW, I'm not really sure I see how the failure of MPI_ARECV to get a
>buffer (via malloc or whatever mechanism it wants) is substantially
>different from failing to get a buffer for any other incoming message.
>
>Indeed, for ARECV, an implementation may well try to use a freelist to
>allocate a buffer, or allocate a buffer fully lazily (because it won't
>know the size until the message is received), meaning that the
>allocation will (potentially) occur on the same or a similar path as all
>received messages.
>
>Put that together with the fact that normal receives can also fail to get
>an eager or unexpected buffer when a new message arrives.
>
>Therefore the error handling could well be the same between the ARECV and
>"regular" receive cases.
>
>
>On Aug 26, 2013, at 11:27 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>
>>
>> Can you clarify a bit more?
>>
>> Is that a fatal failure at that point? Or is it more like
>>MPI_Alloc_mem failing, where I can retry later or just do a regular
>>malloc (in this case, a regular MPI_RECV)?
>>
>> -- Pavan
>>
>> On 08/26/2013 10:19 AM, David Goodell (dgoodell) wrote:
>>> I'd say that's implementation-defined behavior, no different from
>>>hitting any other system resource limitation right now.
>>>
>>> -Dave
>>>
>>> On Aug 26, 2013, at 10:14 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>>>
>>>> Jeff,
>>>>
>>>> This should probably be moved to the point-to-point WG, but I'll
>>>>comment on this thread for now.
>>>>
>>>> Such a proposal was brought up at a very early stage of the MPI-2.1
>>>>process (I don't remember by whom). The main concern at that point
>>>>was what happens when the memory allocation fails. Will the receive
>>>>fail? What happens to the sent message?
>>>>
>>>> Can you work these details into your slides?
>>>>
>>>> -- Pavan
>>>>
>>>> On 08/26/2013 10:10 AM, Jeff Squyres (jsquyres) wrote:
>>>>> Dave Goodell and I have a proposal that we've socialized a bit
>>>>>with other Forum members, and we would now like feedback from the
>>>>>broader Forum. I'll be presenting the attached slides on the concept
>>>>>of an "allocate receive" in Madrid (3:30-4pm on Thursday).
>>>>>
>>>>> There's no text or ticket yet; this is an idea that we want to get
>>>>>feedback on before working up a full proposal.
>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> Pavan Balaji
>>>> http://www.mcs.anl.gov/~balaji
>>>
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>
>
>--
>Jeff Squyres
>jsquyres at cisco.com
>
--
Brian W. Barrett
Scalable System Software Group
Sandia National Laboratories