[mpiwg-rma] same_op_no_op

Jeff Hammond jeff.science at gmail.com
Mon Mar 10 20:43:37 CDT 2014


How the hell can I implement GA or SHMEM then? Roll my own mutexes and commit performance suicide?
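
For concreteness, the mixed-operation pattern a GA- or SHMEM-style runtime needs looks roughly like the sketch below. This is illustrative only; the window setup, function name, and variable names are my assumptions, not anything from the thread or the standard:

    #include <mpi.h>

    /* Hypothetical helper: 'win' is assumed to be a window covering at
     * least one double at displacement 0 on rank 0. */
    void mixed_ops(MPI_Win win, int me)
    {
        double val = 1.0, incr = 2.0;

        MPI_Win_lock_all(0, win);
        if (me == 0) {
            /* "put" path: atomic write via accumulate with MPI_REPLACE */
            MPI_Accumulate(&val, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE,
                           MPI_REPLACE, win);
        } else {
            /* "update" path: atomic MPI_SUM to the SAME target address.
             * Running concurrently with the MPI_REPLACE above violates
             * both same_op and same_op_no_op. */
            MPI_Accumulate(&incr, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE,
                           MPI_SUM, win);
        }
        MPI_Win_unlock_all(win);
    }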

Jeff 

Sent from my iPhone

> On Mar 10, 2014, at 8:32 PM, Jim Dinan <james.dinan at gmail.com> wrote:
> 
> You can't use MPI_REPLACE and MPI_SUM concurrently at a given target address.
> 
>  ~Jim.
> 
>> On Mon, Mar 10, 2014 at 4:30 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>> Given the following, how do I use MPI_NO_OP, MPI_REPLACE and MPI_SUM
>> in accumulate/atomic operations in a standard-compliant way?
>> 
>> accumulate_ops — if set to same_op, the implementation will assume
>> that all concurrent accumulate calls to the same target address will
>> use the same operation. If set to same_op_no_op, then the
>> implementation will assume that all concurrent accumulate calls to the
>> same target address will use the same operation or MPI_NO_OP. This can
>> eliminate the need to protect access for certain operation types where
>> the hardware can guarantee atomicity. The default is same_op_no_op.
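>> 
>> For instance, a pattern that stays compliant under the default would
>> look roughly like this sketch (window creation, types, and names are
>> illustrative assumptions on my part):
>> 
>>   #include <mpi.h>
>> 
>>   /* Minimal sketch: every rank updates the target address with MPI_SUM
>>    * and reads it atomically with MPI_NO_OP, which same_op_no_op permits. */
>>   void compliant(MPI_Comm comm)
>>   {
>>       MPI_Win  win;
>>       MPI_Info info;
>>       long    *base, one = 1, result;
>> 
>>       MPI_Info_create(&info);
>>       MPI_Info_set(info, "accumulate_ops", "same_op_no_op"); /* default */
>>       MPI_Win_allocate(sizeof(long), sizeof(long), info, comm, &base, &win);
>> 
>>       MPI_Win_lock_all(0, win);
>>       /* all concurrent updates to this address use MPI_SUM ... */
>>       MPI_Accumulate(&one, 1, MPI_LONG, 0, 0, 1, MPI_LONG, MPI_SUM, win);
>>       /* ... and MPI_NO_OP is additionally allowed, e.g. for atomic reads
>>        * (origin buffer is ignored for MPI_NO_OP, so NULL is passed) */
>>       MPI_Fetch_and_op(NULL, &result, MPI_LONG, 0, 0, MPI_NO_OP, win);
>>       MPI_Win_unlock_all(win);
>> 
>>       MPI_Win_free(&win);
>>       MPI_Info_free(&info);
>>   }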
>> 
>> We discussed this before, and the resolution was not satisfying to me.
>> 
>> Thanks,
>> 
>> Jeff
>> 
>> --
>> Jeff Hammond
>> jeff.science at gmail.com