[Mpi3-rma] GetAccumulate restriction needed?
jeff.science at gmail.com
Thu Mar 31 21:32:42 CDT 2011
That's sufficient for me. I have never cared enough about IB to know
its fundamental limitations.
Other than "accumulate with replace" for large buffers, I can't think
of an application use case where it would matter to support in-place
operation.
On Thu, Mar 31, 2011 at 9:27 PM, Underwood, Keith D
<keith.d.underwood at intel.com> wrote:
> I believe that InfiniBand would have to copy the entire buffer before starting the operation. It does not seem to be beneficial to bury that kind of performance anomaly. It would be sort of like having MPI_Sendrecv with an "in place" option. The best a network could ever do is to break it up into some "safe" size for the network and do it in pieces with round trips after each piece completed.
>> -----Original Message-----
>> From: Jeff Hammond [mailto:jeff.science at gmail.com]
>> Sent: Thursday, March 31, 2011 8:25 PM
>> To: MPI 3.0 Remote Memory Access working group
>> Cc: Underwood, Keith D
>> Subject: Re: [Mpi3-rma] GetAccumulate restriction needed?
>> What are the use cases that make implementing this difficult? I can
>> argue both sides of this, so I'm curious what you think the issue is.
>> On Thu, Mar 31, 2011 at 9:17 PM, Underwood, Keith D
>> <keith.d.underwood at intel.com> wrote:
>> > Hi All,
>> > Brian and I were having a discussion about something else today and
>> > realized that we weren't sure of something very important: is there
>> > anywhere in the one-sided chapter that prohibits the source buffer
>> > of the GetAccumulate and the reply buffer of the GetAccumulate from
>> > being the same buffer? It seems like this should be prohibited.
>> > Keith
>> > _______________________________________________
>> > mpi3-rma mailing list
>> > mpi3-rma at lists.mpi-forum.org
>> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
>> Jeff Hammond
>> Argonne Leadership Computing Facility
>> jhammond at alcf.anl.gov / (630) 252-5381