[mpiwg-rma] same_op_no_op
Jeff Hammond
jeff.science at gmail.com
Mon Mar 10 23:22:52 CDT 2014
So MPI-2 denied compatibility between replace and not-replace?
Jeff
Sent from my iPhone
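[For context, the restriction the thread below debates can be sketched as follows. This is a hedged illustration only, not code from the thread: it sets the "accumulate_ops" info key (quoted from the MPI-3 text further down) on a window and then issues the combination of operations the key permits. The buffer layout and single-process targeting of rank 0 are illustrative choices.]

```c
/* Sketch of the accumulate_ops restriction, assuming an MPI-3
 * implementation.  Under "same_op_no_op" (the default), all concurrent
 * accumulate calls to the same target address must use one operation
 * (here MPI_SUM) or MPI_NO_OP; mixing in MPI_REPLACE at the same
 * address falls outside that guarantee, which is Jeff's complaint. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    /* State the default explicitly; "same_op" would be stricter still. */
    MPI_Info_set(info, "accumulate_ops", "same_op_no_op");

    double *base;
    MPI_Win win;
    MPI_Win_allocate(sizeof(double), sizeof(double), info,
                     MPI_COMM_WORLD, &base, &win);
    *base = 0.0;

    double one = 1.0, result;
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    /* Allowed together under same_op_no_op: MPI_SUM ... */
    MPI_Accumulate(&one, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE, MPI_SUM, win);
    /* ... and MPI_NO_OP (an atomic read via get-accumulate). */
    MPI_Get_accumulate(NULL, 0, MPI_DOUBLE, &result, 1, MPI_DOUBLE,
                       0, 0, 1, MPI_DOUBLE, MPI_NO_OP, win);
    /* NOT permitted concurrently at this address: MPI_REPLACE,
     * since it differs from the one chosen op (MPI_SUM). */
    MPI_Win_unlock(0, win);

    MPI_Info_free(&info);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```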
> On Mar 11, 2014, at 12:06 AM, "Balaji, Pavan" <balaji at anl.gov> wrote:
>
>
> It doesn’t break backward compatibility. The info argument is still useful when you don’t want to use replace. I don’t see anything wrong with it.
>
>> On Mar 10, 2014, at 11:01 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>
>> Does this or does this not break BW compatibility w.r.t. MPI-2.2 and
>> did we do it intentionally? Unless we did so intentionally and
>> explicitly, I will argue that the WG screwed up and the info key+val
>> is invalid.
>>
>> Jeff
>>
>>> On Mon, Mar 10, 2014 at 11:03 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>>>
>>> If hardware can implement MPI_SUM, it should be able to implement MPI_SUM with 0 as well.
>>>
>>> But that’s not a generic solution.
>>>
>>> Jeff: at some point you were planning to bring in a ticket which does more combinations of operations than just same_op and no_op. Maybe it’s worthwhile bringing that up again?
>>>
>>> — Pavan
>>>
>>>> On Mar 10, 2014, at 9:26 PM, Jim Dinan <james.dinan at gmail.com> wrote:
>>>>
>>>> Maybe there's a loophole that I'm forgetting?
>>>>
>>>>
>>>> On Mon, Mar 10, 2014 at 9:43 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>>> How the hell can I do GA or SHMEM then? Roll my own mutexes and commit perf-suicide?
>>>>
>>>> Jeff
>>>>
>>>> Sent from my iPhone
>>>>
>>>>> On Mar 10, 2014, at 8:32 PM, Jim Dinan <james.dinan at gmail.com> wrote:
>>>>>
>>>>> You can't use replace and sum concurrently at a given target address.
>>>>>
>>>>> ~Jim.
>>>>>
>>>>> On Mon, Mar 10, 2014 at 4:30 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>>>> Given the following, how do I use MPI_NO_OP, MPI_REPLACE and MPI_SUM
>>>>> in accumulate/atomic operations in a standard-compliant way?
>>>>>
>>>>> accumulate_ops — if set to same_op, the implementation will assume
>>>>> that all concurrent accumulate calls to the same target address will
>>>>> use the same operation. If set to same_op_no_op, then the
>>>>> implementation will assume that all concurrent accumulate calls to the
>>>>> same target address will use the same operation or MPI_NO_OP. This can
>>>>> eliminate the need to protect access for certain operation types where
>>>>> the hardware can guarantee atomicity. The default is same_op_no_op.
>>>>>
>>>>> We discussed this before and the resolution was not satisfying to me.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Jeff
>>>>>
>>>>> --
>>>>> Jeff Hammond
>>>>> jeff.science at gmail.com
>>>>> _______________________________________________
>>>>> mpiwg-rma mailing list
>>>>> mpiwg-rma at lists.mpi-forum.org
>>>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
>>>>>
>>>>
>>>
>>
>>
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>
More information about the mpiwg-rma mailing list