[Mpi3-rma] FW: MPI_Accumulate

Jeff Hammond jhammond at alcf.anl.gov
Tue Oct 2 11:51:28 CDT 2012


MPI-3 Section 11.7.1 describes the atomicity semantics of accumulate operations.

Jeff
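
To make the discussion concrete, here is a minimal sketch (not from the original thread) of several ranks concurrently accumulating into the same target location. Under the element-wise atomicity described in MPI-3 11.7.1, concurrent MPI_Accumulate calls using the same predefined datatype do not corrupt each other, so no application-level locking is needed beyond the access epoch:

```c
/* Sketch: every rank atomically adds 1 to a counter on rank 0.
 * Per MPI-3 11.7.1, concurrent MPI_Accumulate calls with the same
 * predefined datatype are atomic per element, so the updates from
 * different ranks cannot interleave within a single MPI_LONG. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long counter = 0;                 /* window memory on every rank */
    MPI_Win win;
    MPI_Win_create(&counter, sizeof(long), sizeof(long),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    long one = 1;
    /* Passive-target epoch; a shared lock suffices because the
     * accumulate itself provides the element-wise atomicity. */
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Accumulate(&one, 1, MPI_LONG,
                   0 /* target rank */, 0 /* displacement */,
                   1, MPI_LONG, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        printf("counter = %ld (expected %d)\n", counter, size);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run with, e.g., mpiexec -n 4; the counter ends up equal to the number of ranks regardless of how the updates race. A plain MPI_Put of the incremented value would not give this guarantee, which is why MPI_Accumulate with MPI_REPLACE exists as an "atomic put".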

On Tue, Oct 2, 2012 at 11:48 AM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:
> My understanding has always been that MPI_Accumulate was and is atomic
> at the granularity of the underlying built-in datatype, as opposed to
> byte-wise atomic like MPI_Put.  This is the whole reason that
> MPI_Accumulate w/ MPI_REPLACE exists.  Without the additional
> atomicity guarantee, that feature would be pointless.
>
> Jeff
>
> On Tue, Oct 2, 2012 at 11:38 AM, Richard Graham <richardg at mellanox.com> wrote:
>> resending
>>
>>
>>
>> From: Richard Graham
>> Sent: Tuesday, October 02, 2012 12:22 PM
>> To: MPI 3.0 Remote Memory Access working group
>> (mpi3-rma at lists.mpi-forum.org)
>> Subject: MPI_Accumulate
>>
>>
>>
>> What are the requirements placed on MPI_Accumulate if more than one MPI
>> process tries to update the same location?  Does MPI provide any
>> consistency promises, or is it up to the application to guarantee these?  I
>> see that the get-accumulate routines are defined to be atomic, but don't see
>> the same requirement for accumulate.
>>
>>
>>
>> Thanks,
>>
>> Rich
>>
>>
>> _______________________________________________
>> mpi3-rma mailing list
>> mpi3-rma at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
>
>
>



-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond



