[Mpi3-rma] FW: MPI_Accumulate

Underwood, Keith D keith.d.underwood at intel.com
Tue Oct 2 13:40:07 CDT 2012


Yeah, because I was certainly advocating that implementations write the string "Jeff said to write POOP here" over and over into conflicting write areas.  After all, we wouldn't want users to get used to some non-compliant behavior ;-)

> -----Original Message-----
> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> bounces at lists.mpi-forum.org] On Behalf Of Pavan Balaji
> Sent: Tuesday, October 02, 2012 2:29 PM
> To: MPI 3.0 Remote Memory Access working group
> Cc: MPI 3.0 Remote Memory Access working group
> Subject: Re: [Mpi3-rma] FW: MPI_Accumulate
> 
> 
> Let's not mix what an implementation might (and will likely) do with what the
> standard specifies.
> 
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
> 
> On Oct 2, 2012, at 1:26 PM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:
> 
> > Okay, sorry, I was assuming that no one makes hardware that is less
> > than byte-wise atomic.  I suppose an implementation could do as I
> > suggested at some point and write the bit representation of "POOP" to
> > memory for conflicting writes, but this seems like it would be slower
> > than just doing conflicting writes and letting the memory controller
> > figure it out.
> >
> > Jeff
> >
> > On Tue, Oct 2, 2012 at 1:22 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
> >>
> >> Put is not byte-wise atomic.  It is not atomic at all.  Only the
> >> accumulate-style operations are atomic at predefined-datatype granularity.
> >>
> >> --
> >> Pavan Balaji
> >> http://www.mcs.anl.gov/~balaji
> >>
> >> On Oct 2, 2012, at 11:48 AM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:
> >>
> >>> My understanding has always been that MPI_Accumulate was and is
> >>> atomic at the granularity of the underlying built-in datatype, as
> >>> opposed to byte-wise atomic like MPI_Put.  This is the whole reason
> >>> that MPI_Accumulate w/ MPI_REPLACE exists.  Without the additional
> >>> atomicity guarantee, that feature would be pointless.
> >>>
> >>> Jeff
> >>>
> >>> On Tue, Oct 2, 2012 at 11:38 AM, Richard Graham <richardg at mellanox.com> wrote:
> >>>> resending
> >>>>
> >>>>
> >>>>
> >>>> From: Richard Graham
> >>>> Sent: Tuesday, October 02, 2012 12:22 PM
> >>>> To: MPI 3.0 Remote Memory Access working group
> >>>> (mpi3-rma at lists.mpi-forum.org)
> >>>> Subject: MPI_Accumulate
> >>>>
> >>>>
> >>>>
> >>>> What are the requirements placed on MPI_Accumulate if more than one
> >>>> MPI process tries to update the same location?  Does MPI provide
> >>>> any consistency promises, or is it up to the application to
> >>>> guarantee these?  I see that the get-accumulate routines are
> >>>> defined to be atomic, but don't see the same requirement for
> >>>> accumulate.
> >>>>
> >>>>
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Rich
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Jeff Hammond
> >>> Argonne Leadership Computing Facility
> >>> University of Chicago Computation Institute
> >>> jhammond at alcf.anl.gov / (630) 252-5381
> >>> http://www.linkedin.com/in/jeffhammond
> >>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> >>>
> >>
> >
> >
> >
> > --
> > Jeff Hammond
> > Argonne Leadership Computing Facility
> > University of Chicago Computation Institute
> > jhammond at alcf.anl.gov / (630) 252-5381
> > http://www.linkedin.com/in/jeffhammond
> > https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> >
> 
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
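
For reference, a minimal sketch of the semantics discussed in this thread, assuming a window over a single int on rank 0 and passive-target lock/unlock synchronization (the window setup and variable names are illustrative, not taken from the thread): concurrent MPI_Accumulate calls on the same location are well defined because each update is applied atomically per predefined-datatype element, MPI_Accumulate with MPI_REPLACE therefore acts as an element-wise atomic put, while racing MPI_Put calls on the same location remain undefined.

    /* Sketch: every process atomically adds 1 to an int exposed by rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, target = 0, one = 1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process exposes one int; rank 0's copy is the shared target. */
        MPI_Win_create(&target, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);

        /* Well defined even though every rank updates the same location:
         * each MPI_SUM is applied atomically per MPI_INT element. */
        MPI_Accumulate(&one, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_SUM, win);

        /* An element-wise atomic "put" would use MPI_REPLACE instead:
         * MPI_Accumulate(&one, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_REPLACE, win);
         * Plain MPI_Put calls racing on the same location are undefined. */

        MPI_Win_unlock(0, win);
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0) {
            /* Lock the local window before reading the exposed memory. */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
            printf("target = %d (expected %d)\n", target, nprocs);
            MPI_Win_unlock(0, win);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Rank 0 takes a lock on its own window before printing so that the read of the exposed memory falls inside an access epoch.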



