[Mpi3-rma] RMA proposal 1 update
Pavan Balaji
balaji at mcs.anl.gov
Tue May 18 14:04:51 CDT 2010
On 05/18/2010 01:52 PM, Underwood, Keith D wrote:
> Now you are just trying to be difficult... First, your scenario is not legal. You have to call a local MPI_Lock()/MPI_Unlock() before that data is visible in the private window to allow loads and stores. Even accessing that item that was Put over NIC1 is undefined until the source has done a completion operation.
Sorry, I don't mean to. Relying on network ordering all the way to memory
just seems hacky. So I'm trying to see whether there are cases where the
network doesn't have full control over when things are written to memory.
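(Just to make sure I'm reading that rule correctly, the legal pattern for
local access would be roughly the sketch below. This is only an
illustration: "lbuf" (the memory attached to the window on this rank) and
"me" (the local rank) are placeholder names, and I'm assuming the separate
memory model.)

    /* Remote Puts to lbuf must have been completed by their origin
     * (flush/unlock), and a direct load is undefined outside a local
     * lock/unlock epoch on this window. */
    MPI_Win_lock(MPI_LOCK_SHARED, me, 0, win);
    int snapshot = lbuf[0];  /* direct load from window memory, now legal */
    MPI_Win_unlock(me, win);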
> Even then, I think you are discussing an ordering problem that exists in the base standard: completing an MPI_Unlock() implies remote completion. Real remote completion. Until MPI_Unlock() completes, there is no guarantee of ordering between anything. MPI_Flush() does not add to this issue.
Hmm.. Maybe I don't understand MPI_Flush very well then. Here's the
example case I was thinking of:
MPI_Win_lock(target = 1, SHARED);

if (rank == 1) {
    MPI_Put(win, target = 1, foo = 100, ...);
    MPI_Flush(win, target = 1, ...);
    MPI_Get_accumulate(win, target = 1, &bar, ...);  /* Set the mutex */
}
else if (rank == 0) {
    do {
        MPI_Get_accumulate(win, target = 1, &bar, ...);
    } while (bar != 1);  /* Get the mutex */
    MPI_Get(win, target = 1, &foo, ...);
}

MPI_Win_unlock(target = 1);
So, the question is: is process 0 guaranteed to get foo = 100 in this
case? Note that there are no direct loads/stores here, so everything can
happen in shared lock mode.
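For concreteness, here is roughly how I'd spell the same thing out with
full argument lists (writing MPI_Win_flush for the proposal's MPI_Flush,
and interpreting rank 1's Get_accumulate as an atomic MPI_REPLACE of the
flag). The window layout, datatypes, and counts are just assumptions for
illustration: every rank exposes int wbuf[2] through win, with wbuf[0]
holding foo and wbuf[1] holding the flag, both starting at 0.

    int foo = 0, flag = 0, one = 1, dummy = 0;

    MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);

    if (rank == 1) {
        foo = 100;
        MPI_Put(&foo, 1, MPI_INT, 1, 0 /* disp of foo */, 1, MPI_INT, win);
        MPI_Win_flush(1, win);                 /* the proposal's MPI_Flush */
        /* Atomically set the flag at displacement 1 to 1. */
        MPI_Get_accumulate(&one, 1, MPI_INT, &dummy, 1, MPI_INT,
                           1, 1, 1, MPI_INT, MPI_REPLACE, win);
    }
    else if (rank == 0) {
        do {
            /* Atomically read the flag without modifying it. */
            MPI_Get_accumulate(&dummy, 1, MPI_INT, &flag, 1, MPI_INT,
                               1, 1, 1, MPI_INT, MPI_NO_OP, win);
            MPI_Win_flush(1, win);             /* complete the read locally */
        } while (flag != 1);
        MPI_Get(&foo, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_flush(1, win);                 /* complete the Get locally */
        /* Is foo guaranteed to be 100 here? */
    }

    MPI_Win_unlock(1, win);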
-- Pavan
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji