[mpiwg-rma] [EXTERNAL] Re: same_op_no_op and SHMEM

Jim Dinan james.dinan at gmail.com
Thu Oct 24 14:47:56 CDT 2013


Amongst our operations are such diverse elements as ...  :)

 ~Jim.

http://en.wikipedia.org/wiki/The_Spanish_Inquisition_(Monty_Python)


On Thu, Oct 24, 2013 at 2:59 PM, Barrett, Brian W <bwbarre at sandia.gov> wrote:

> I would have no objection to adding yet another info key.  I think if we
> keep at this for another year, we can make sure we have the longest
> pre-defined info key in the spec.
>
> I admit to having very little medium-term memory; which one is the
> type-homogeneity suggestion?
>
> Brian
>
>
> On 10/24/13 12:53 PM, "Jeff Hammond" <jeff.science at gmail.com> wrote:
>
> >Honestly, I think REPLACE+NO_OP is a useful option; I just think
> >REPLACE+NO_OP+<SUM or ...> is _also_ a useful option.  Why don't we
> >just turn our frowns upside down and add this to the standard?
> >
> >Does anyone object to info = same_op_no_op_replace?
> >
> >I would appreciate any feedback on my type-homogeneity suggestion as
> >well.  Do people agree that it is worth adding?  I imagine that there is
> >hardware that can do, e.g., sum+long but not sum+double, especially if
> >it has to be atomic.
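
For reference, the existing MPI-3 hint is passed at window-creation time; a
minimal sketch is below, assuming a long-valued window.  The
"same_op_no_op_replace" value is only the proposal from this thread, not a
standard info value, and the fragment assumes MPI is already initialized.

    /* Sketch: allocating a window with the accumulate_ops hint.  The values
     * defined by MPI-3 are "same_op" and "same_op_no_op"; the commented-out
     * value is only the proposal being discussed in this thread. */
    MPI_Info info;
    MPI_Win  win;
    long    *base;

    MPI_Info_create(&info);
    MPI_Info_set(info, "accumulate_ops", "same_op_no_op");
    /* MPI_Info_set(info, "accumulate_ops", "same_op_no_op_replace"); */

    MPI_Win_allocate(1024 * sizeof(long), sizeof(long), info,
                     MPI_COMM_WORLD, &base, &win);
    MPI_Info_free(&info);
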
> >
> >Jeff
> >
> >On Thu, Oct 24, 2013 at 1:46 PM, Barrett, Brian W <bwbarre at sandia.gov>
> >wrote:
> >> Or "I will only do gets and compare-and-swap", which is something a
> >>couple
> >> of graph codes I've looked at do.  But I agree, we probably should have
> >> made it REPLACE, NO_OP, and 1 other op or not added it at all.  Sigh :).
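
A rough sketch of that gets-plus-compare-and-swap pattern, assuming a window
exposing one long per process and an already-open passive-target epoch; all
names here are illustrative, not taken from any particular graph code.

    /* Fragment; assumes MPI is initialized, `win` exposes one long per
     * process, and MPI_Win_lock_all(0, win) has already been called. */
    int  target_rank = 1;
    long snapshot, expected = 0, desired = 1, observed;

    /* Atomic read of the remote value: MPI_NO_OP fetches without modifying. */
    MPI_Fetch_and_op(NULL, &snapshot, MPI_LONG, target_rank, 0, MPI_NO_OP, win);
    MPI_Win_flush(target_rank, win);

    /* Atomically change the remote value from 0 to 1 if it is still 0. */
    MPI_Compare_and_swap(&desired, &expected, &observed, MPI_LONG,
                         target_rank, 0, win);
    MPI_Win_flush(target_rank, win);
    /* observed == expected means the swap took effect. */
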
> >>
> >> Brian
> >>
> >> On 10/24/13 11:39 AM, "Jeff Hammond" <jeff.science at gmail.com> wrote:
> >>
> >>>I read same_op_no_op as "I will use only MPI_REPLACE and MPI_NO_OP",
> >>>i.e., give me nothing more than atomic Put/Get; I do not want to
> >>>actually accumulate anything.
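
Concretely, that reading corresponds to something like the sketch below: a
fragment, assuming `win` covers an array of longs and a passive-target epoch
is open; variable names are illustrative.

    int  target_rank = 1;
    long src = 42, dst;

    /* "Atomic Put": element-wise atomic write via accumulate with MPI_REPLACE. */
    MPI_Accumulate(&src, 1, MPI_LONG, target_rank, 0, 1, MPI_LONG,
                   MPI_REPLACE, win);

    /* "Atomic Get": element-wise atomic read via get-accumulate with MPI_NO_OP;
     * the origin buffer arguments are ignored when the op is MPI_NO_OP. */
    MPI_Get_accumulate(NULL, 0, MPI_LONG, &dst, 1, MPI_LONG,
                       target_rank, 0, 1, MPI_LONG, MPI_NO_OP, win);
    MPI_Win_flush(target_rank, win);
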
> >>>
> >>>Jeff
> >>>
> >>>On Thu, Oct 24, 2013 at 12:34 PM, Underwood, Keith D
> >>><keith.d.underwood at intel.com> wrote:
> >>>> Yes, that's the motivation, but I'm not sure if anybody does atomics
> >>>> without puts...  It seems to me like we should have included
> >>>> MPI_REPLACE in that list.
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: mpiwg-rma [mailto:mpiwg-rma-bounces at lists.mpi-forum.org] On
> >>>>> Behalf Of Pavan Balaji
> >>>>> Sent: Thursday, October 24, 2013 1:33 PM
> >>>>> To: MPI WG Remote Memory Access working group
> >>>>> Subject: Re: [mpiwg-rma] same_op_no_op and SHMEM
> >>>>>
> >>>>>
> >>>>> The motivation was that it's hard to maintain atomicity when
> >>>>> different operations are used.  For example, if the hardware supports
> >>>>> only some atomic operations, but not all, this might result in some
> >>>>> operations happening in hardware and some in software, making
> >>>>> atomicity hard.  In such cases, the MPI implementation might need to
> >>>>> fall back to software-only implementations.
> >>>>>
> >>>>>   -- Pavan
> >>>>>
> >>>>> On Oct 24, 2013, at 12:23 PM, Jeff Hammond <jeff.science at gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>> > I recall that Brian and/or Keith wanted same_op_no_op because of
> >>>>> > SHMEM.  However, SHMEM requires the use of MPI_NO_OP (for atomic
> >>>>> > Get via Get_accumulate), MPI_REPLACE (for atomic Put via
> >>>>> > Accumulate), and MPI_SUM (for add, fadd, inc, and finc).  So what
> >>>>> > is the benefit of same_op_no_op to SHMEM?  Perhaps I remember
> >>>>> > completely wrong and the motivation was something that does not
> >>>>> > use the latter atomics.  Or perhaps it is common for SHMEM codes
> >>>>> > to not use these and thus the assumption is MPI_SUM can be ignored.
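
For context, the MPI_SUM side of that mapping would look roughly like the
sketch below in an MPI-based SHMEM.  The function names, window handle, and
displacement argument are hypothetical, chosen only to show which accumulate
operations are involved; a passive-target epoch is assumed to be open.

    /* Hypothetical sketch of SHMEM-style atomics over MPI RMA.  `shmem_win`
     * is assumed to cover the symmetric heap and `disp` to locate the
     * target variable on process `pe`. */
    long long fadd_sketch(long long value, int pe, MPI_Aint disp, MPI_Win shmem_win)
    {
        long long old;
        /* fadd/finc: fetch-and-add maps to MPI_Fetch_and_op with MPI_SUM. */
        MPI_Fetch_and_op(&value, &old, MPI_LONG_LONG, pe, disp, MPI_SUM, shmem_win);
        MPI_Win_flush(pe, shmem_win);
        return old;
    }

    void add_sketch(long long value, int pe, MPI_Aint disp, MPI_Win shmem_win)
    {
        /* add/inc: non-fetching add maps to MPI_Accumulate with MPI_SUM. */
        MPI_Accumulate(&value, 1, MPI_LONG_LONG, pe, disp, 1, MPI_LONG_LONG,
                       MPI_SUM, shmem_win);
        MPI_Win_flush(pe, shmem_win);
    }
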
> >>>>> >
> >>>>> > Jeff
> >>>>> >
> >>>>> > --
> >>>>> > Jeff Hammond
> >>>>> > jeff.science at gmail.com
> >>>>>
> >>>>> --
> >>>>> Pavan Balaji
> >>>>> http://www.mcs.anl.gov/~balaji
> >>>>>
> >>>
> >>>
> >>>
> >>>--
> >>>Jeff Hammond
> >>>jeff.science at gmail.com
> >>>
> >>
> >>
> >> --
> >>   Brian W. Barrett
> >>   Scalable System Software Group
> >>   Sandia National Laboratories
> >>
> >>
> >>
> >>
> >
> >
> >
> >--
> >Jeff Hammond
> >jeff.science at gmail.com
> >
>
>
> --
>   Brian W. Barrett
>   Scalable System Software Group
>   Sandia National Laboratories
>
>
>
>
> _______________________________________________
> mpiwg-rma mailing list
> mpiwg-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
>