[Mpi3-rma] Alternative Proposal for Shared Memory Support

Rajeev Thakur thakur at mcs.anl.gov
Wed Mar 16 08:09:56 CDT 2011


Right. A few things would need to be touched up: flush can currently only be called within a lock-unlock epoch. Also, flush takes a target rank, namely the rank passed to the put/get, and flush_all applies to all ranks used in preceding puts/gets. Such things would need to be addressed.
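
For reference, here is a rough C sketch of how the flush calls fit into the current text (the names and restrictions follow the current RMA proposal and are illustrative only, not final):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, val = 10, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* every process exposes one int through the window */
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (rank == 0 && size > 1) {
        /* flush is only valid inside a passive-target lock epoch */
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_flush(1, win);    /* takes the target rank of the put       */
        MPI_Win_flush_all(win);   /* covers all ranks targeted in the epoch */
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}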

Rajeev


On Mar 15, 2011, at 11:22 PM, Torsten Hoefler wrote:

> On Tue, Mar 15, 2011 at 09:39:24PM -0500, Rajeev Thakur wrote:
>> That's what I meant in my first mail. We need to say whether this example will work or not.
>> 
>> A=10
>> barrier         barrier
>>                    x = A
>> 
>> If not, which synchronization functions need to be used and how? The
>> current text does not automatically cover this example.
> Right, I never claimed that the proposal is complete :-). It's basically
> a quick poll to see if there are major flaws and I should retract or if
> it is worth discussing at the Forum (to flesh out such things like you
> mentioned).
> 
> To answer your question: I think this should either be:
> 
> A=10
> MPI_Win_flush
> barrier           barrier
>                  x=A
> 
> This would basically insert a memory barrier (membar) in MPI_Win_flush,
> which makes sure that A is committed before the barrier completes (a
> barrier flag is set and read).
> 
> This fits our current semantics, where we suggest that win_flush can be
> used for ordering. Keep in mind that it's still the unified model, i.e.,
> the value would commit eventually; the flush is just needed to enforce
> ordering with respect to the barrier.
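> 
> To make that concrete, here is a rough C sketch of the pattern. The
> names are illustrative: it assumes the shared-memory allocation call
> from the proposal (written here as MPI_Win_allocate_shared), at least
> two processes on one shared-memory node, and a passive-target lock
> epoch so that the flush is legal.
> 
> #include <mpi.h>
> #include <stdio.h>
> 
> int main(int argc, char **argv)
> {
>     int rank, x, *A;
>     MPI_Win win;
> 
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> 
>     /* rank 0 exposes one int; all processes can load/store it directly */
>     MPI_Win_allocate_shared(rank == 0 ? sizeof(int) : 0, sizeof(int),
>                             MPI_INFO_NULL, MPI_COMM_WORLD, &A, &win);
>     if (rank != 0) {
>         MPI_Aint sz; int du;
>         MPI_Win_shared_query(win, 0, &sz, &du, &A); /* pointer to rank 0's int */
>     }
> 
>     /* flush is only legal inside a lock epoch, so everyone locks rank 0 */
>     MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
> 
>     if (rank == 0) {
>         *A = 10;                 /* A=10 (plain store)                         */
>         MPI_Win_flush(0, win);   /* the "membar": commit *A before the barrier */
>     }
>     MPI_Barrier(MPI_COMM_WORLD);
>     if (rank == 1) {
>         x = *A;                  /* x=A: sees 10 after the barrier             */
>         printf("x = %d\n", x);
>     }
> 
>     MPI_Win_unlock(0, win);
>     MPI_Win_free(&win);
>     MPI_Finalize();
>     return 0;
> }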
> 
> Thanks & Best,
>  Torsten
> 
> -- 
> bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
> "A couple of months in the laboratory can frequently save a couple of
> hours in the library." - Westheimer (contemporary)




