[Mpi3-rma] MPI-3 UNIFIED model clarification
Underwood, Keith D
keith.d.underwood at intel.com
Fri Aug 2 15:32:07 CDT 2013
Oh, I think we want it to say that they are identical. I believe that is the only way to let the user actually use it.
Keith
From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-bounces at lists.mpi-forum.org] On Behalf Of Jim Dinan
Sent: Friday, August 02, 2013 11:18 AM
To: Pavan Balaji
Cc: MPI 3.0 Remote Memory Access working group
Subject: Re: [Mpi3-rma] MPI-3 UNIFIED model clarification
Eek, yes, the "public and private copies are identical" text on pg 436 and elsewhere does not convey what we intended. What we probably should have said is something like "public and private copies are stored in the same location."
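(For reference, whether a given window is in the unified or separate model can be queried at run time through the MPI_WIN_MODEL attribute; a minimal sketch, in which only the helper name is invented:)

#include <mpi.h>
#include <stdio.h>

/* Query the memory model of an existing window.  MPI returns the attribute
 * as a pointer to an int holding MPI_WIN_UNIFIED or MPI_WIN_SEPARATE. */
static void print_win_model(MPI_Win win)
{
    int *model, flag;
    MPI_Win_get_attr(win, MPI_WIN_MODEL, &model, &flag);
    if (flag && *model == MPI_WIN_UNIFIED)
        printf("UNIFIED: public and private copies share the same storage\n");
    else
        printf("SEPARATE: public and private copies may differ\n");
}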
On Fri, Aug 2, 2013 at 12:22 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
Jim,
This is a good way to reason about UNIFIED, but it's not exactly what the MPI standard states. It says that the public window is the same as the private window and not that they'll eventually be the same. We could change it to say something like this, though.
-- Pavan
On 08/01/2013 09:33 AM, Jim Dinan wrote:
Let's make sure we are using the right language -- Unified does not
guarantee that "public win == private win". It guarantees that they are
*eventually* the same, not immediately and always the same. It is
completely allowable for processes to have inconsistent views of the
window (because there are cached copies of the data and buffered
reads/writes). The question we are debating is whether it is a
reasonable semantic that those inconsistent views become eventually
consistent without additional MPI calls by target processes. And, if we
choose to keep the "eventual" semantic that we currently have, whether
any ordering imposed by other processes should be observed by the target
process in load/store operations, provided that process performs no
window synchronization calls.
~Jim.
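(For concreteness, the pattern at issue looks roughly like the sketch below. It assumes a UNIFIED window allocated with disp_unit 1 whose buffer holds a double payload followed by an int flag; the layout and function names are illustrative, not from the standard.)

#include <mpi.h>

/* Origin: write the payload, order it before the flag with a flush, then
 * write the flag and flush again so both Puts are complete at the target. */
void origin_side(MPI_Win win, int target, double payload)
{
    int one = 1;
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    MPI_Put(&payload, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_flush(target, win);                    /* payload complete remotely */
    MPI_Put(&one, 1, MPI_INT, target, sizeof(double), 1, MPI_INT, win);
    MPI_Win_flush(target, win);                    /* flag complete remotely */
    MPI_Win_unlock_all(win);
}

/* Target: spin on the flag with plain loads and make no MPI calls at all.
 * Whether the flag ever has to become visible here, and whether seeing it
 * implies the payload is valid, is exactly what this thread is debating. */
void target_side(volatile int *flag, volatile double *payload, double *out)
{
    while (*flag == 0)
        ;
    *out = *payload;
}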
On Wed, Jul 31, 2013 at 7:30 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
On 07/31/2013 12:58 PM, Sur, Sayantan wrote:
On 07/31/2013 12:00 PM, Jim Dinan wrote:
I would bet that past Jim suggested striking the polling/eventually visible clause and relying on window synchronization to see updates. :)
Yup, so did past, present, and future Pavan. IMO, that's a useless guarantee.
The downside to this is that libraries like SHMEM that rely on passive progress and polling would not be implementable on top of Unified.
It's pretty useless even for SHMEM, since the user doesn't know when the data is valid. You could poll on a byte for it to turn to one, but at that point you only know about that one byte and nothing else.
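(For comparison, a target that does not rely on eventual visibility would refresh its view of the window while polling. A sketch under the same assumed buffer layout as above; the self lock_all is shown because MPI_Win_sync is defined within a passive-target epoch, and the function name is illustrative:)

#include <mpi.h>

/* Target: refresh the public/private view on every iteration, so that once
 * the flag is observed the rest of the window contents are also current. */
void target_wait_and_read(MPI_Win win, void *win_base, double *out)
{
    volatile int *flag = (volatile int *)((char *)win_base + sizeof(double));
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    while (*flag == 0)
        MPI_Win_sync(win);
    *out = *(double *)win_base;    /* payload was written before the flag */
    MPI_Win_unlock_all(win);
}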
Past Sayantan had missed this discussion, but present Sayantan does agree that "eventually" as defined is useless. But he is also confused by the guarantee given by MPI_Win_flush, namely that when the call returns, all previously issued RMA ops are complete locally and remotely, combined with UNIFIED's guarantee that public win == private win.
Unfortunately, that part slipped past the checks of the folks who believed you need a WIN_SYNC. So the standard is inconsistent. We have now come full circle and gotten back to where this email thread started :-).
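(One sketch of the reading Sayantan cites: after MPI_Win_flush returns, the data is complete at the target and, under UNIFIED, there is only one copy of the window, so an external notification such as a message should make plain loads safe. Whether the same holds with no notification at all is the open question above. Function names are illustrative.)

#include <mpi.h>

/* Origin: put, flush, then tell the target the data is there. */
void origin_notify(MPI_Win win, int target, double payload)
{
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    MPI_Put(&payload, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_flush(target, win);              /* complete locally and remotely */
    MPI_Send(NULL, 0, MPI_BYTE, target, 0, MPI_COMM_WORLD);
    MPI_Win_unlock_all(win);
}

/* Target: receive the notification, then read the window with a plain load. */
void target_notified(int origin, const double *win_base, double *out)
{
    MPI_Recv(NULL, 0, MPI_BYTE, origin, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    *out = win_base[0];
}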
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji