[Mpi3-rma] FW: MPI_Accumulate

Pavan Balaji balaji at mcs.anl.gov
Tue Oct 2 13:29:18 CDT 2012


Let's not mix what an implementation might (and will likely) do with what the standard specifies.

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

On Oct 2, 2012, at 1:26 PM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:

> Okay, sorry, I was assuming that no one makes hardware that is less
> than byte-wise atomic.  I suppose an implementation could, as I once
> suggested, write the bit representation of "POOP" to memory for
> conflicting writes, but that seems like it would be slower than just
> issuing the conflicting writes and letting the memory controller
> sort it out.
> 
> Jeff
> 
> On Tue, Oct 2, 2012 at 1:22 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>> 
>> Put is not byte-wise atomic.  It is not atomic at all.  Only the accumulate-style operations are atomic, and only at the granularity of predefined datatypes.
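>> 
>> To make that concrete, here is an untested sketch (the function name and
>> the assumption that the window "win" exposes an int on rank 0 are mine,
>> not from the standard).  Concurrent accumulates like this are well
>> defined; the same pattern with MPI_Put would be erroneous:
>> 
>> #include <mpi.h>
>> 
>> /* Every process in the window's group adds 1 to the same int on rank 0.
>>  * Because accumulates are atomic per predefined-datatype element, after
>>  * the closing fence the target holds exactly the number of callers,
>>  * assuming each process calls this once. */
>> void add_one_to_rank0(MPI_Win win)
>> {
>>     int one = 1;
>>     MPI_Win_fence(0, win);
>>     MPI_Accumulate(&one, 1, MPI_INT,
>>                    0 /* target rank */, 0 /* target displacement */,
>>                    1, MPI_INT, MPI_SUM, win);
>>     MPI_Win_fence(0, win);
>> }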
>> 
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>> 
>> On Oct 2, 2012, at 11:48 AM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:
>> 
>>> My understanding has always been that MPI_Accumulate was and is atomic
>>> at the granularity of the underlying built-in datatype, as opposed to
>>> being merely byte-wise atomic like MPI_Put.  This is the whole reason
>>> that MPI_Accumulate w/ MPI_REPLACE exists.  Without the additional
>>> atomicity guarantee, that feature would be pointless.
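>>> 
>>> For example (untested sketch; the function name and the window setup
>>> are my assumptions, not from the standard), this is the element-wise
>>> atomic "put" that MPI_Accumulate with MPI_REPLACE provides and MPI_Put
>>> does not:
>>> 
>>> #include <mpi.h>
>>> 
>>> /* Several origins concurrently "replace" the same double on rank 0.
>>>  * With MPI_REPLACE the target ends up holding one of the written
>>>  * values in its entirety; concurrent conflicting MPI_Puts to the same
>>>  * location would have no defined result. */
>>> void atomic_replace_on_rank0(double val, MPI_Win win)
>>> {
>>>     MPI_Win_lock(MPI_LOCK_SHARED, 0 /* target rank */, 0 /* assert */, win);
>>>     MPI_Accumulate(&val, 1, MPI_DOUBLE,
>>>                    0 /* target rank */, 0 /* target displacement */,
>>>                    1, MPI_DOUBLE, MPI_REPLACE, win);
>>>     MPI_Win_unlock(0 /* target rank */, win);
>>> }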
>>> 
>>> Jeff
>>> 
>>> On Tue, Oct 2, 2012 at 11:38 AM, Richard Graham <richardg at mellanox.com> wrote:
>>>> resending
>>>> 
>>>> 
>>>> 
>>>> From: Richard Graham
>>>> Sent: Tuesday, October 02, 2012 12:22 PM
>>>> To: MPI 3.0 Remote Memory Access working group
>>>> (mpi3-rma at lists.mpi-forum.org)
>>>> Subject: MPI_Accumulate
>>>> 
>>>> 
>>>> 
>>>> What are the requirements placed on MPI_Accumulate if more than one MPI
>>>> process tries to update the same location?  Does MPI provide any
>>>> consistency guarantees, or is it up to the application to provide them?
>>>> I see that the get-accumulate routines are defined to be atomic, but I
>>>> don't see the same requirement stated for accumulate.
>>>> 
>>>> 
>>>> 
>>>> Thanks,
>>>> 
>>>> Rich
>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Jeff Hammond
>>> Argonne Leadership Computing Facility
>>> University of Chicago Computation Institute
>>> jhammond at alcf.anl.gov / (630) 252-5381
>>> http://www.linkedin.com/in/jeffhammond
>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>>> 
>> 
> 
> 
> 
> -- 
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> 



