<div dir="ltr">Amongst our operations are such diverse elements as ... :)<div><br></div><div> ~Jim.</div><div><br></div><div><a href="http://en.wikipedia.org/wiki/The_Spanish_Inquisition_(Monty_Python)">http://en.wikipedia.org/wiki/The_Spanish_Inquisition_(Monty_Python)</a></div>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Oct 24, 2013 at 2:59 PM, Barrett, Brian W <span dir="ltr"><<a href="mailto:bwbarre@sandia.gov" target="_blank">bwbarre@sandia.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I would have no objection to adding yet another info key. I think if we<br>
keep at this for another year, we can make sure we have the longest<br>
pre-defined info key in the spec.<br>
<br>
I admit to having very little medium term memory; which is the<br>
type-homogeneity suggestion?<br>
<span class="HOEnZb"><font color="#888888"><br>
Brian<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
On 10/24/13 12:53 PM, "Jeff Hammond" <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>> wrote:<br>
<br>
>Honestly, I think REPLACE+NO_OP is a useful option, I just think<br>
>REPLACE+NO_OP+<SUM or ...> is _also_ a useful option. Why don't we<br>
>just turn our frowns upside down and add this to the standard?<br>
><br>
>Does anyone object to info = same_op_no_op_replace?<br>
><br>
>I would appreciate any feedback on my type-homogeneity suggestion as<br>
>well. Do people agree that is worth adding? I imagine that there is<br>
>hardware that can do e.g. sum+long but not sum+double, especially if<br>
>it has to be atomic.<br>
><br>
>Jeff<br>
><br>
>On Thu, Oct 24, 2013 at 1:46 PM, Barrett, Brian W <<a href="mailto:bwbarre@sandia.gov">bwbarre@sandia.gov</a>><br>
>wrote:<br>
>> Or "I will only do gets and compare-and-swap", which is something a<br>
>>couple<br>
>> of graph codes I've looked at do. But I agree, we probably should have<br>
>> made it REPLACE, NO_OP, and 1 other op or not added it at all. Sigh :).<br>
>><br>
>> Brian<br>
>><br>
>> On 10/24/13 11:39 AM, "Jeff Hammond" <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>> wrote:<br>
>><br>
>>>I read same_op_no_op as "I will use only MPI_REPLACE and MPI_NO_OP"<br>
>>>i.e. give me nothing more than atomic Put/Get, I do not want to<br>
>>>actually accumulate anything.<br>
>>><br>
>>>Jeff<br>
>>><br>
>>>On Thu, Oct 24, 2013 at 12:34 PM, Underwood, Keith D<br>
>>><<a href="mailto:keith.d.underwood@intel.com">keith.d.underwood@intel.com</a>> wrote:<br>
>>>> Yes, that's the motivation, but I'm not sure if anybody does atomics<br>
>>>>without puts.... It seems to me like we should have included<br>
>>>>MPI_REPLACE in that list<br>
>>>><br>
>>>>> -----Original Message-----<br>
>>>>> From: mpiwg-rma [mailto:<a href="mailto:mpiwg-rma-bounces@lists.mpi-forum.org">mpiwg-rma-bounces@lists.mpi-forum.org</a>] On<br>
>>>>> Behalf Of Pavan Balaji<br>
>>>>> Sent: Thursday, October 24, 2013 1:33 PM<br>
>>>>> To: MPI WG Remote Memory Access working group<br>
>>>>> Subject: Re: [mpiwg-rma] same_op_no_op and SHMEM<br>
>>>>><br>
>>>>><br>
>>>>> The motivation was that it's hard to maintain atomicity when<br>
>>>>>different<br>
>>>>> operations are done. For example, if the hardware only supports some<br>
>>>>> atomic operations, but not all, this might result in some operations<br>
>>>>> happening in hardware and some in software making atomicity hard. In<br>
>>>>>such<br>
>>>>> cases, the MPI implementation might need to fall back to<br>
>>>>>software-only<br>
>>>>> implementations.<br>
>>>>><br>
>>>>> -- Pavan<br>
>>>>><br>
>>>>> On Oct 24, 2013, at 12:23 PM, Jeff Hammond <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>><br>
>>>>> wrote:<br>
>>>>><br>
>>>>> > I recall that Brian and/or Keith wanted same_op_no_op because of<br>
>>>>> > SHMEM. However, SHMEM requires the use of MPI_NO_OP (for atomic<br>
>>>>> Get<br>
>>>>> > via Get_accumulate), MPI_REPLACE (for atomic Put via Accumulate)<br>
>>>>>and<br>
>>>>> > MPI_SUM (for add, fadd, inc and finc). So what is the benefit of<br>
>>>>> > same_op_no_op to SHMEM? Perhaps I remember completely wrong and<br>
>>>>> the<br>
>>>>> > motivation was something that does not use the latter atomics. Or<br>
>>>>> > perhaps it is common for SHMEM codes to not use these and thus the<br>
>>>>> > assumption is MPI_SUM can be ignored.<br>
>>>>> ><br>
>>>>> > Jeff<br>
>>>>> ><br>
>>>>> > --<br>
>>>>> > Jeff Hammond<br>
>>>>> > <a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a><br>
>>>>> > _______________________________________________<br>
>>>>> > mpiwg-rma mailing list<br>
>>>>> > <a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
>>>>> > <a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
>>>>><br>
>>>>> --<br>
>>>>> Pavan Balaji<br>
>>>>> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
>>>>><br>
>>>>> _______________________________________________<br>
>>>>> mpiwg-rma mailing list<br>
>>>>> <a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
>>>>> <a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
>>>> _______________________________________________<br>
>>>> mpiwg-rma mailing list<br>
>>>> <a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
>>>> <a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
>>><br>
>>><br>
>>><br>
>>>--<br>
>>>Jeff Hammond<br>
>>><a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a><br>
>>>_______________________________________________<br>
>>>mpiwg-rma mailing list<br>
>>><a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
>>><a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
>>><br>
>><br>
>><br>
>> --<br>
>> Brian W. Barrett<br>
>> Scalable System Software Group<br>
>> Sandia National Laboratories<br>
>><br>
>><br>
>><br>
>><br>
>> _______________________________________________<br>
>> mpiwg-rma mailing list<br>
>> <a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
>> <a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
><br>
><br>
><br>
>--<br>
>Jeff Hammond<br>
><a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a><br>
>_______________________________________________<br>
>mpiwg-rma mailing list<br>
><a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
><a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
><br>
<br>
<br>
--<br>
Brian W. Barrett<br>
Scalable System Software Group<br>
Sandia National Laboratories<br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
mpiwg-rma mailing list<br>
<a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a><br>
<a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma</a><br>
</div></div></blockquote></div><br></div>