[mpiwg-rma] Re: same_op_no_op and SHMEM

Jeff Hammond jeff.science at gmail.com
Thu Oct 24 14:24:18 CDT 2013


> I would have no objection to adding yet another info key.  I think if we
> keep at this for another year, we can make sure we have the longest
> pre-defined info key in the spec.

Or we can focus on the values instead.  Rather than
"same_op_no_op_replace = true", we could have "op_list =
no_op,replace,sum" and let the implementation derive the
same_op_no_op_replace optimization on its own.
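
To make that concrete, here is a minimal sketch of what that would
look like at window allocation.  The "op_list" key is of course the
hypothetical one proposed above; MPI-3 only defines "accumulate_ops"
with the values "same_op" and "same_op_no_op", and implementations
ignore info keys they do not recognize, so this degrades gracefully:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_Win  win;
        int     *base;

        MPI_Init(&argc, &argv);

        /* Hypothetical hint: only MPI_NO_OP, MPI_REPLACE and MPI_SUM
         * will be used with accumulate operations on this window. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "op_list", "no_op,replace,sum");
        MPI_Win_allocate(1024 * sizeof(int), sizeof(int), info,
                         MPI_COMM_WORLD, &base, &win);
        MPI_Info_free(&info);

        /* ... Accumulate/Get_accumulate with those three ops only ... */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }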

However, the German method of long-word generation
(Deutschverfahrenlangewortschöpfung for those keeping score at home*)
has some appeal to me :-)

> I admit to having very little medium term memory; which is the
> type-homogeneity suggestion?

Medium-term meaning in the last 2 hours? :-)

It addresses the lack of type specification in MPI window creation.
If I can say at window-allocation time that I only want to use
MPI_INT, then maybe the implementation can avoid falling back to a
software implementation that might be required if I were to use
MPI_FLOAT, for example.  In my experience, NICs are more likely to
support fixed-point arithmetic in hardware than floating-point.
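
As a strawman, something like the following, where the
"accumulate_types" key is made up purely for illustration (no such
key exists today):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_Win  win;
        int     *base, one = 1, old;

        MPI_Init(&argc, &argv);

        /* Invented hint: promise to use only MPI_INT with accumulate
         * operations on this window, so a NIC with atomic fixed-point
         * support can keep everything in hardware. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "accumulate_types", "mpi_int");
        MPI_Win_allocate(sizeof(int), sizeof(int), info,
                         MPI_COMM_WORLD, &base, &win);
        MPI_Info_free(&info);

        MPI_Win_lock_all(0, win);
        /* Atomic fetch-and-add on an int, consistent with the hint. */
        MPI_Fetch_and_op(&one, &old, MPI_INT, 0, 0, MPI_SUM, win);
        MPI_Win_unlock_all(win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }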

Jeff

* I made this up.  And $20 says Jim will call me on it.

> On 10/24/13 12:53 PM, "Jeff Hammond" <jeff.science at gmail.com> wrote:
>
>>Honestly, I think REPLACE+NO_OP is a useful option; I just think
>>REPLACE+NO_OP+<SUM or ...> is _also_ a useful option.  Why don't we
>>just turn our frowns upside down and add this to the standard?
>>
>>Does anyone object to info = same_op_no_op_replace?
>>
>>I would appreciate any feedback on my type-homogeneity suggestion as
>>well.  Do people agree that is worth adding?  I imagine that there is
>>hardware that can do e.g. sum+long but not sum+double, especially if
>>it has to be atomic.
>>
>>Jeff
>>
>>On Thu, Oct 24, 2013 at 1:46 PM, Barrett, Brian W <bwbarre at sandia.gov>
>>wrote:
>>> Or "I will only do gets and compare-and-swap", which is something a
>>>couple
>>> of graph codes I've looked at do.  But I agree, we probably should have
>>> made it REPLACE, NO_OP, and 1 other op or not added it at all.  Sigh :).
>>>
>>> Brian
>>>
>>> On 10/24/13 11:39 AM, "Jeff Hammond" <jeff.science at gmail.com> wrote:
>>>
>>>>I read same_op_no_op as "I will use only MPI_REPLACE and MPI_NO_OP",
>>>>i.e., give me nothing more than atomic Put/Get; I do not want to
>>>>actually accumulate anything.
>>>>
>>>>Jeff
>>>>
>>>>On Thu, Oct 24, 2013 at 12:34 PM, Underwood, Keith D
>>>><keith.d.underwood at intel.com> wrote:
>>>>> Yes, that's the motivation, but I'm not sure anybody does atomics
>>>>> without puts....  It seems to me like we should have included
>>>>> MPI_REPLACE in that list.
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: mpiwg-rma [mailto:mpiwg-rma-bounces at lists.mpi-forum.org] On
>>>>>> Behalf Of Pavan Balaji
>>>>>> Sent: Thursday, October 24, 2013 1:33 PM
>>>>>> To: MPI WG Remote Memory Access working group
>>>>>> Subject: Re: [mpiwg-rma] same_op_no_op and SHMEM
>>>>>>
>>>>>>
>>>>>> The motivation was that it's hard to maintain atomicity when
>>>>>> different operations are used.  For example, if the hardware
>>>>>> supports some atomic operations but not all, some operations might
>>>>>> happen in hardware and some in software, making atomicity hard to
>>>>>> guarantee.  In such cases, the MPI implementation might need to
>>>>>> fall back to a software-only implementation.
>>>>>>
>>>>>>   -- Pavan
>>>>>>
>>>>>> On Oct 24, 2013, at 12:23 PM, Jeff Hammond <jeff.science at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> > I recall that Brian and/or Keith wanted same_op_no_op because of
>>>>>> > SHMEM.  However, SHMEM requires the use of MPI_NO_OP (for atomic
>>>>>> > Get via Get_accumulate), MPI_REPLACE (for atomic Put via
>>>>>> > Accumulate) and MPI_SUM (for add, fadd, inc and finc).  So what
>>>>>> > is the benefit of same_op_no_op to SHMEM?  Perhaps I remember
>>>>>> > completely wrong and the motivation was something that does not
>>>>>> > use the latter atomics.  Or perhaps it is common for SHMEM codes
>>>>>> > to not use these and thus the assumption is MPI_SUM can be
>>>>>> > ignored.
>>>>>> >
>>>>>> > Jeff
>>>>>> >
>>>>>> > --
>>>>>> > Jeff Hammond
>>>>>> > jeff.science at gmail.com
>>>>>>
>>>>>> --
>>>>>> Pavan Balaji
>>>>>> http://www.mcs.anl.gov/~balaji
>>>>>>
>>>>
>>>>
>>>>
>>>>--
>>>>Jeff Hammond
>>>>jeff.science at gmail.com
>>>>
>>>
>>>
>>> --
>>>   Brian W. Barrett
>>>   Scalable System Software Group
>>>   Sandia National Laboratories
>>>
>>>
>>>
>>>
>>
>>
>>
>>--
>>Jeff Hammond
>>jeff.science at gmail.com
>>
>
>
> --
>   Brian W. Barrett
>   Scalable System Software Group
>   Sandia National Laboratories
>
>
>
>



-- 
Jeff Hammond
jeff.science at gmail.com


