[Mpi3-rma] Next RMA Telecon

Torsten Hoefler htor at illinois.edu
Mon Dec 26 20:48:58 CST 2011


On Mon, Dec 26, 2011 at 08:40:41PM -0600, Pavan Balaji wrote:
>
> On 12/26/2011 12:12 PM, Torsten Hoefler wrote:
>> On Mon, Dec 26, 2011 at 12:22:38AM -0600, Pavan Balaji wrote:
>>>
>>> On 12/09/2011 03:36 PM, Torsten Hoefler wrote:
>>>>> 3) allocate_shared proposal with MPI_Win_lock_all(exclusive)
>>>> - remove MPI_Win_lock_all(exclusive) from the proposal and re-read
>>>
>>> Why was this removed?
>> Brian and others felt that an implementation on a distributed-memory
>> network would put a huge burden on the implementers and might cause
>> other problems. We identified a workaround that offers essentially the
>> same functionality using point-to-point and collective synchronization
>> (i.e., blocking p-1 processes in a barrier while one process updates
>> the shared memory).
>
> Didn't we already discuss this in one of the working groups (though I
> forget whether it was RMA or hybrid)?  If I recall correctly, a non-RMA
> solution was shot down because it doesn't provide the memory barriers
> that a lock/unlock would do.
>
> Someone also pointed out that we could do a for loop of
> Win_lock(exclusive) to achieve the same result, but it was not scalable.
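
For reference, a minimal sketch of that for loop (hypothetical names: it
assumes a window win spanning nprocs processes, and it relies on the
MPI-3 relaxation that lets a process hold lock epochs on several targets
of the same window at once):

/* Emulate MPI_Win_lock_all(exclusive) with O(p) per-target locks;
   locking in rank order avoids deadlock but serializes processes. */
for (int target = 0; target < nprocs; target++)
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
/* ... access the window ... */
for (int target = 0; target < nprocs; target++)
    MPI_Win_unlock(target, win);

The O(p) lock operations per process are the scalability problem alluded
to above.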
Why does:

MPI_Win_fence(0, win);
if (rank == 0) initialize_window();
MPI_Barrier(comm);
MPI_Win_fence(0, win);
/* all processes read the written data here */

not work for Ron's use-case? Are you talking about another use-case?

All the Best,
  Torsten

-- 
 bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
Torsten Hoefler         | Performance Modeling and Simulation Lead
Blue Waters Directorate | University of Illinois (UIUC)
1205 W Clark Street     | Urbana, IL, 61801
NCSA Building           | +01 (217) 244-7736


