[mpiwg-rma] Re: same_op_no_op and SHMEM
Barrett, Brian W
bwbarre at sandia.gov
Thu Oct 24 13:46:50 CDT 2013
Or "I will only do gets and compare-and-swap", which is something a couple
of graph codes I've looked at do. But I agree, we probably should have
made it REPLACE, NO_OP, and 1 other op or not added it at all. Sigh :).
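
For reference, the gets-plus-CAS pattern looks roughly like this
(untested sketch; the counter location and values are made up):

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank, *base;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* one int of window memory per rank */
      MPI_Win win;
      MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &base, &win);
      *base = 0;
      MPI_Barrier(MPI_COMM_WORLD);
      MPI_Win_lock_all(0, win);

      int result, compare = 0, swap = rank + 1;

      /* atomic get of the value on rank 0 (no accumulate op) */
      MPI_Get_accumulate(NULL, 0, MPI_INT, &result, 1, MPI_INT,
                         0, 0, 1, MPI_INT, MPI_NO_OP, win);
      MPI_Win_flush(0, win);

      /* compare-and-swap: claim the slot if it is still 0 */
      MPI_Compare_and_swap(&swap, &compare, &result, MPI_INT, 0, 0, win);
      MPI_Win_flush(0, win);

      MPI_Win_unlock_all(win);
      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }

Note there is no MPI_Put/MPI_Get at all; the reads go through the
accumulate path so they stay atomic with respect to the CAS.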
Brian
On 10/24/13 11:39 AM, "Jeff Hammond" <jeff.science at gmail.com> wrote:
>I read same_op_no_op as "I will use only MPI_REPLACE and MPI_NO_OP",
>i.e., give me nothing more than atomic Put/Get; I do not want to
>actually accumulate anything.
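>
>In code, that reading is nothing more than this (untested sketch; win,
>target, and disp are placeholders for real window setup):
>
>  int put_val = 42, got;
>
>  /* atomic Put */
>  MPI_Accumulate(&put_val, 1, MPI_INT, target, disp, 1, MPI_INT,
>                 MPI_REPLACE, win);
>  /* atomic Get */
>  MPI_Get_accumulate(NULL, 0, MPI_INT, &got, 1, MPI_INT,
>                     target, disp, 1, MPI_INT, MPI_NO_OP, win);
>  MPI_Win_flush(target, win);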
>
>Jeff
>
>On Thu, Oct 24, 2013 at 12:34 PM, Underwood, Keith D
><keith.d.underwood at intel.com> wrote:
>> Yes, that's the motivation, but I'm not sure that anybody does
>> atomics without puts... It seems to me that we should have included
>> MPI_REPLACE in that list.
>>
>>> -----Original Message-----
>>> From: mpiwg-rma [mailto:mpiwg-rma-bounces at lists.mpi-forum.org] On
>>> Behalf Of Pavan Balaji
>>> Sent: Thursday, October 24, 2013 1:33 PM
>>> To: MPI WG Remote Memory Access working group
>>> Subject: Re: [mpiwg-rma] same_op_no_op and SHMEM
>>>
>>>
>>> The motivation was that it is hard to maintain atomicity when
>>> different operations are used. For example, if the hardware supports
>>> only some atomic operations, some operations might happen in hardware
>>> and some in software, which makes atomicity hard to guarantee. In such
>>> cases, the MPI implementation might need to fall back to a
>>> software-only implementation.
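>>>
>>> (For completeness, the user makes that promise through the
>>> accumulate_ops info key at window creation -- untested sketch; size,
>>> comm, base, and win are placeholders:
>>>
>>>   MPI_Info info;
>>>   MPI_Info_create(&info);
>>>   /* promise: at most one op, plus MPI_NO_OP, on this window */
>>>   MPI_Info_set(info, "accumulate_ops", "same_op_no_op");
>>>   MPI_Win_allocate(size, sizeof(int), info, comm, &base, &win);
>>>   MPI_Info_free(&info);
>>> )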
>>>
>>> -- Pavan
>>>
>>> On Oct 24, 2013, at 12:23 PM, Jeff Hammond <jeff.science at gmail.com>
>>> wrote:
>>>
>>> > I recall that Brian and/or Keith wanted same_op_no_op because of
>>> > SHMEM. However, SHMEM requires the use of MPI_NO_OP (for atomic Get
>>> > via Get_accumulate), MPI_REPLACE (for atomic Put via Accumulate), and
>>> > MPI_SUM (for add, fadd, inc, and finc). So what is the benefit of
>>> > same_op_no_op to SHMEM? Perhaps I remember completely wrong and the
>>> > motivation was something that does not use the latter atomics. Or
>>> > perhaps it is common for SHMEM codes not to use these, and thus the
>>> > assumption is that MPI_SUM can be ignored.
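>>> >
>>> > Concretely, the mapping I have in mind is roughly this (untested
>>> > sketch; win, target, and disp are placeholders):
>>> >
>>> >   int v, old, one = 1;
>>> >
>>> >   /* shmem_int_p -> atomic put */
>>> >   MPI_Accumulate(&v, 1, MPI_INT, target, disp, 1, MPI_INT,
>>> >                  MPI_REPLACE, win);
>>> >   /* shmem_int_g -> atomic get */
>>> >   MPI_Get_accumulate(NULL, 0, MPI_INT, &v, 1, MPI_INT,
>>> >                      target, disp, 1, MPI_INT, MPI_NO_OP, win);
>>> >   /* shmem_int_finc -> fetch-and-add of 1 */
>>> >   MPI_Fetch_and_op(&one, &old, MPI_INT, target, disp, MPI_SUM, win);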
>>> >
>>> > Jeff
>>> >
>>> > --
>>> > Jeff Hammond
>>> > jeff.science at gmail.com
>>>
>>> --
>>> Pavan Balaji
>>> http://www.mcs.anl.gov/~balaji
>>>
>
>
>
>--
>Jeff Hammond
>jeff.science at gmail.com
>
--
Brian W. Barrett
Scalable System Software Group
Sandia National Laboratories