<div dir="ltr">I would bet that past Jim suggested striking the polling/eventually visible clause and relying on window synchronization to see updates. :) The downside to this is that libraries like SHMEM, which rely on passive progress and polling, would not be implementable on top of Unified.<div>
<div><br></div><div>Another question that was raised outside of this email discussion is whether we can rely on the architecture (in the absence of MPI calls) to make the results of operations visible in the order the origin process expects (e.g., through ordered accumulates, flushes, etc.).<br>
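<div><br></div><div>For reference, the pattern the WG is disagreeing about looks roughly like this (a minimal sketch, not from the thread; it assumes a window "win" exposing an int "a" in P1's memory, with both ranks already in a passive-target epoch via MPI_Win_lock_all):<br>
<br>
/* Hypothetical sketch of the contested FLUSH + SEND pattern. */<br>
int one = 1;&nbsp;&nbsp;/* "a" is the window memory on P1, initially 0 */<br>
<br>
if (rank == 0) {&nbsp;&nbsp;/* origin */<br>
&nbsp;&nbsp;&nbsp;&nbsp;MPI_Put(&amp;one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);<br>
&nbsp;&nbsp;&nbsp;&nbsp;MPI_Win_flush(1, win);&nbsp;&nbsp;/* put complete at the target */<br>
&nbsp;&nbsp;&nbsp;&nbsp;MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);&nbsp;&nbsp;/* notify P1 */<br>
} else if (rank == 1) {&nbsp;&nbsp;/* target */<br>
&nbsp;&nbsp;&nbsp;&nbsp;MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);<br>
&nbsp;&nbsp;&nbsp;&nbsp;/* Disputed step: one camp says FLUSH + SEND above already<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; * guarantees a == 1 here in UNIFIED; the other says this<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; * MPI_Win_sync is required before reading "a" directly. */<br>
&nbsp;&nbsp;&nbsp;&nbsp;MPI_Win_sync(win);<br>
&nbsp;&nbsp;&nbsp;&nbsp;assert(a == 1);<br>
}<br>
</div>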
<div><br></div><div style> ~Jim.</div></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jul 30, 2013 at 6:45 PM, Pavan Balaji <span dir="ltr"><<a href="mailto:balaji@mcs.anl.gov" target="_blank">balaji@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><br>
On 07/30/2013 05:33 PM, Sur, Sayantan wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 07/30/2013 10:28 AM, Jim Dinan wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I believe this is adequately specified on pg 436, line 37. P1 will<br>
*eventually* see the new value for "a" without any additional<br>
synchronization operations, but neither the flush by P0 nor the Recv by<br>
P1 guarantees that P1 will see the new value immediately.<br>
</blockquote>
<br>
This is the disagreement in the WG that I mentioned. I can pull up the old email<br>
chain if needed, but I think others can too. One side was arguing that there's<br>
no such guarantee and you need to do a WIN_SYNC to see the value. The<br>
other side was arguing that the WIN_SYNC should not be needed; FLUSH +<br>
SEND on the origin should be enough.<br>
<br>
</blockquote>
<br>
Here's the old thread: <a href="http://lists.mpi-forum.org/mpi3-rma/2011/03/0533.php" target="_blank">http://lists.mpi-forum.org/mpi3-rma/2011/03/0533.php</a><br>
<br>
Looks like the idea to call MPI_Win_sync for Unified got votes from both Bill and Torsten. Were there others who were in this camp?<br>
</blockquote>
<br></div>
IIRC, Torsten was against it. Jim and I were arguing for it (i.e., having to call WIN_SYNC in UNIFIED).<span class="HOEnZb"><font color="#888888"><br>
<br>
-- Pavan</font></span><div class="im HOEnZb"><br>
<br>
-- <br>
Pavan Balaji<br>
<a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br></div><div class="HOEnZb"><div class="h5">
_______________________________________________<br>
mpi3-rma mailing list<br>
<a href="mailto:mpi3-rma@lists.mpi-forum.org" target="_blank">mpi3-rma@lists.mpi-forum.org</a><br>
<a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma</a><br>
</div></div></blockquote></div><br></div>