[Mpi3-rma] Proposal 1 discussion points for next telecon

William Gropp wgropp at illinois.edu
Tue Oct 26 14:41:19 CDT 2010


Names are *always* discussed *only* after the semantics are settled.

Bill

On Oct 25, 2010, at 11:35 PM, Pavan Balaji wrote:

>
> On 10/25/2010 10:58 PM, Torsten Hoefler wrote:
>> Hi Pavan,
>>> This proposal doesn't have an atomic GET operation.
>>>
>>> MPI_ACCUMULATE with MPI_REPLACE is an atomic PUT.
>>>
>>> MPI_GET_ACCUMULATE with MPI_NO_OP does not work as an atomic GET, as it
>>> does not take counts greater than 1 or non-predefined datatypes.
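
For concreteness, here is a minimal sketch of the two idioms under
discussion, written against the accumulate interface as it was later
standardized in MPI-3 (which postdates this thread, so the exact
signatures may differ from the proposal text being discussed):

    /* Sketch: atomic PUT and single-element atomic GET via the
     * accumulate interface.  Illustrative only; assumes the MPI-3
     * signatures, which were still in flux at the time. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, val = 0, snapshot = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Expose one int per process in an RMA window. */
        MPI_Win_create(&val, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);

        /* Atomic PUT: MPI_ACCUMULATE with MPI_REPLACE. */
        int newval = rank + 1;
        MPI_Accumulate(&newval, 1, MPI_INT,
                       0, 0, 1, MPI_INT, MPI_REPLACE, win);

        /* Atomic GET: MPI_GET_ACCUMULATE with MPI_NO_OP; the origin
         * buffer is ignored for MPI_NO_OP.  Note the count of 1 and
         * the predefined datatype -- exactly the limitation at issue. */
        MPI_Get_accumulate(NULL, 0, MPI_INT,
                           &snapshot, 1, MPI_INT,
                           0, 0, 1, MPI_INT, MPI_NO_OP, win);

        MPI_Win_unlock(0, win);
        printf("rank %d read %d\n", rank, snapshot);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

The read here is a single predefined element; the thread below is about
whether counts greater than 1 and derived datatypes should also be allowed.
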
>> Correct, we had count>1 and ddts in an earlier version and changed it
>> after heated discussions about buffering. I forgot to add it to the
>> discussion items. I have no huge problem with allowing ddts and counts>1;
>> however, I believe Brian and Keith were against it.
>>
>> We should see whether an advice to users/implementers would suffice:
>> MPI_NO_OP is a special case that doesn't require buffering, while all
>> other operations might be very slow with large data.
>>
>> Please consider this item 7 on the discussion list!
>
> Yes, I remember the discussion, and it is a valid argument -- it was with
> respect to buffering requirements because of retransmissions (for
> reliability).
>
> Note that I'm not proposing that MPI_GET_ACCUMULATE be made more
> generic. I'm just saying that we need a (possibly separate) function to
> do atomic GETs. However, if MPI_GET_ACCUMULATE is not as generic as GET,
> it should be given a different name (though if you want to discuss
> names later, that's fine).
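
Purely for illustration, here is a hypothetical prototype of the kind of
separate atomic GET being asked for; the MPIX_ name and the signature are
invented for this sketch and appear in no proposal or standard:

    /* HYPOTHETICAL -- not part of any MPI version or of this proposal.
     * An atomic GET mirroring MPI_Get, but with accumulate-style
     * per-element atomicity, allowing counts > 1 and derived
     * datatypes. */
    int MPIX_Get_atomic(void *result_addr, int result_count,
                        MPI_Datatype result_datatype,
                        int target_rank, MPI_Aint target_disp,
                        int target_count, MPI_Datatype target_datatype,
                        MPI_Win win);
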
>
>  -- Pavan
>
> -- 
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

William Gropp
Deputy Director for Research
Institute for Advanced Computing Applications and Technologies
Paul and Cynthia Saylor Professor of Computer Science
University of Illinois Urbana-Champaign