[Mpi3-rma] RMA proposal 1 update

Jeff Hammond jeff.science at gmail.com
Wed May 19 20:22:55 CDT 2010


Can I mix that call with other sync mechanisms?

So I implement GA by calling fence inside of GA_Create to expose the
window and by using fence+barrier for GA_Sync. Can I then mix in lock
and unlock, as well as the forthcoming p2p flush, the way I can in
GA/ARMCI now?

The standard presents three synchronization schemes; it does not
suggest that they can be intermixed at will.
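
For concreteness, a minimal sketch of the scheme described above, in C
(illustrative only; "win" and "comm" are assumed to be the window and
communicator backing the GA, and GA_Create/GA_Sync are the Global
Arrays routines named above):

  #include <mpi.h>

  /* GA_Sync as fence+barrier: the fence completes all outstanding
     RMA on the window, and the barrier synchronizes the processes. */
  static void ga_sync(MPI_Win win, MPI_Comm comm)
  {
      MPI_Win_fence(0, win);
      MPI_Barrier(comm);
  }

  /* The open question: between two such fences, may the same window
     also be used with passive target?  E.g.:
         MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
         ... MPI_Put / MPI_Get ...
         MPI_Win_unlock(target, win);
     The standard does not say the two schemes can be intermixed. */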

Jeff

Sent from my iPhone

On May 19, 2010, at 2:58 PM, "Underwood, Keith D" <keith.d.underwood at intel.com> wrote:

> Jeff,
>
> Another question for you: if you are going to call
> MPI_Win_all_flush_all, why not just use active target and call
> MPI_Win_fence?
>
> Keith
>
>> -----Original Message-----
>> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
>> bounces at lists.mpi-forum.org] On Behalf Of Jeff Hammond
>> Sent: Sunday, May 16, 2010 7:27 PM
>> To: MPI 3.0 Remote Memory Access working group
>> Subject: Re: [Mpi3-rma] RMA proposal 1 update
>>
>> Torsten,
>>
>> There seemed to be decent agreement on adding MPI_Win_all_flush_all
>> (equivalent to MPI_Win_flush_all called from every rank in the
>> communicator associated with the window) since this function can be
>> implemented far more efficiently as a collective than the equivalent
>> point-wise function calls.
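>>
>> To make the claimed saving concrete, a hedged sketch of the
>> point-wise emulation the proposed collective would replace ("win"
>> and "comm" are assumed, and MPI_Win_all_flush_all exists only in
>> this proposal, not in any standard):
>>
>>   /* Emulation today: every rank completes its own outstanding
>>      operations on the window, then all ranks synchronize. */
>>   MPI_Win_flush_all(win);
>>   MPI_Barrier(comm);
>>
>>   /* Proposed: one collective call that an implementation can fuse,
>>      e.g. by overlapping the flushes with the barrier traffic. */
>>   MPI_Win_all_flush_all(win);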
>>
>> Is there a problem with adding this to your proposal?
>>
>> Jeff
>>
>> On Sun, May 16, 2010 at 12:48 AM, Torsten Hoefler <htor at illinois.edu>
>> wrote:
>>> Hello all,
>>>
>>> After the discussions at the last Forum I updated the group's first
>>> proposal.
>>>
>>> The proposal (one-side-2.pdf) is attached to the wiki page
>>> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/RmaWikiPage
>>>
>>> The changes with regards to the last version are:
>>>
>>> 1) added MPI_NOOP to MPI_Get_accumulate and MPI_Accumulate_get
>>>
>>> 2) (re)added MPI_Win_flush and MPI_Win_flush_all to passive
>>>    target mode
>>>
>>> Some remarks:
>>>
>>> 1) We didn't straw-vote on MPI_Accumulate_get, so this function
>>>    might go. The removal would be very clean.
>>>
>>> 2) Should we allow MPI_NOOP in MPI_Accumulate? (This does not
>>>    make sense and is incorrect in my current proposal.)
>>>
>>> 3) Should we allow MPI_REPLACE in
>>>    MPI_Get_accumulate/MPI_Accumulate_get? (This would make sense
>>>    and is allowed in the current proposal, but we didn't talk
>>>    about it in the group; see the sketch below.)
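>>>
>>> As a hedged illustration of points 2) and 3), how the two ops
>>> would look with MPI_Get_accumulate ("target" and "win" are
>>> assumed; argument order as in what later became MPI-3, where the
>>> proposal's MPI_NOOP is spelled MPI_NO_OP):
>>>
>>>   int result, newval = 42;
>>>
>>>   /* MPI_NO_OP: atomically fetch the target value; nothing is
>>>      written, so the origin buffer arguments are ignored. */
>>>   MPI_Get_accumulate(NULL, 0, MPI_INT, &result, 1, MPI_INT,
>>>                      target, 0, 1, MPI_INT, MPI_NO_OP, win);
>>>
>>>   /* MPI_REPLACE: atomically store newval at the target and fetch
>>>      the old value (a fetch-and-set). */
>>>   MPI_Get_accumulate(&newval, 1, MPI_INT, &result, 1, MPI_INT,
>>>                      target, 0, 1, MPI_INT, MPI_REPLACE, win);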
>>>
>>>
>>> All the Best,
>>>  Torsten
>>>
>>> --
>>>  bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
>>> Torsten Hoefler         | Research Associate
>>> Blue Waters Directorate | University of Illinois
>>> 1205 W Clark Street     | Urbana, IL, 61801
>>> NCSA Building           | +01 (217) 244-7736
>>
>>
>>
>> --
>> Jeff Hammond
>> Argonne Leadership Computing Facility
>> jhammond at mcs.anl.gov / (630) 252-5381
>> http://www.linkedin.com/in/jeffhammond
>>
>


