We can always do errata.

Bill

William Gropp
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign

On Jun 1, 2014, at 8:51 PM, Jim Dinan wrote:

I tend to agree with Jeff. On some architectures, different operations are
required to make my operations visible to others versus making operations
performed by others visible to me.

Is this meeting the last call for errata, or is it the September meeting?

 ~Jim.

On Sat, May 31, 2014 at 4:44 PM, Jeff Hammond <jeff.science@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Remote load-store cannot be treated like local load-store from a<br>
sequential consistency perspective. If a process does local<br>
load-store, it is likely that no memory barrier will be required to<br>
see a consistent view of memory. When another process does<br>
load-store, this changes dramatically.<br>
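A fragment to illustrate the point (a sketch only, not taken from the examples
discussed below; my_seg is this process's segment of a shared-memory window
win, and the neighbor has obtained a pointer to it via MPI_Win_shared_query):

    double *my_seg;   /* from MPI_Win_allocate_shared */
    double x;

    /* Purely local load/store: no MPI call or memory barrier is needed
       for this process to see its own store.                            */
    my_seg[0] = 1.0;
    x = my_seg[0];

    /* A store performed by another process through the shared-memory
       window: the epoch in which the neighbor stores into my_seg must be
       closed by a synchronization call before I load, and that call is
       where the implementation must issue whatever memory barrier the
       architecture needs to make the remote store visible to me.        */
    MPI_Win_fence(0, win);   /* neighbor stores into my_seg[0] in this epoch */
    MPI_Win_fence(0, win);
    x = my_seg[0];           /* guaranteed to see the neighbor's store       */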

Jeff

On Sat, May 31, 2014 at 3:31 PM, Rajeev Thakur <thakur@mcs.anl.gov> wrote:
> I think that before ticket 429
> (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/429) is put up for a
> vote as errata, the RMA working group needs to decide whether remote
> loads/stores to shared memory windows are treated as local loads and stores
> or as put/get operations (for the purpose of the assert definitions). The
> text will be different depending on that.
>
> If remote loads/stores to shared memory windows are considered local
> loads/stores, they will be covered under MPI_MODE_NOSTORE; if considered
> put/get operations, they will be covered under MPI_MODE_NOPRECEDE,
> MPI_MODE_NOSUCCEED, and MPI_MODE_NOPUT.
>
> Ticket 429 says they should be considered as local loads/stores.
>
> Rajeev
>
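To make the two readings concrete, here is a sketch (not normative text; win
is a shared-memory window, and during the epoch a neighbor stores directly
into this process's segment of it; "assert" is a placeholder):

    MPI_Win_fence(0, win);
    /* ... the neighbor stores into my part of the shared window ... */
    MPI_Win_fence(assert, win);   /* which flags may go into assert? */

    /* Treated as stores: MPI_MODE_NOSTORE must be omitted from this closing
       fence (my local window was updated by stores), while NOPUT, NOPRECEDE
       and NOSUCCEED remain legal, since no MPI RMA calls were issued.
       Treated as put/get-like RMA: MPI_MODE_NOSTORE would be legal here, but
       the opening fence could not assert MPI_MODE_NOPUT, and the storing
       neighbor could not assert MPI_MODE_NOPRECEDE on its closing fence.   */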
> On May 27, 2014, at 1:25 PM, Jim Dinan <james.dinan@gmail.com> wrote:
>
>> Hi Rolf,
>>
>> MPI_MODE_NOSTORE applies to local updates that should be made visible to
>> other processes following the end of the access epoch. I believe that
>> visibility of updates made by other processes was intended to be
>> incorporated into the NOPRECEDE/NOSUCCEED assertions. I think that
>> Hubert's proposal may be the right approach -- that remote load/store
>> accesses to the shared memory window should be treated as "RMA" (i.e.,
>> analogous to get/put) operations.
>>
>> ~Jim.
>>
>>
>> On Mon, May 19, 2014 at 1:16 PM, Rolf Rabenseifner <rabenseifner@hlrs.de> wrote:
>> Jim and RMA WG,
>>
>> There are now two questions:
>>
>> Jim asked:
>> > Question to WG: Do we need to update the fence assertions to better
>> > define their interaction with local load/store accesses and remote stores?
>> >
>>
>> Rolf asked:
>> > Additionally, I would recommend that we add after MPI-3.0 p451:33:
>> >
>> >   Note that in shared memory windows (allocated with
>> >   MPI_WIN_ALLOCATE_SHARED), there is no difference
>> >   between remote store accesses and local store accesses
>> >   to the window.
>> >
>> > This would help readers understand that "the local window
>> > was not updated by stores" does not mean "by local stores";
>> > see p452:1 and p452:9.
>>
>> For me, it is important to understand the meaning of the
>> current assertions when they are used on a shared memory window.
>> Hence my proposal above as an erratum to MPI-3.0.
>>
>> In MPI-3.1 and 4.0, you may want to add additional assertions.
>>
>> Your analysis below also shows that mpich implements
>> Post-Start-Complete-Wait synchronization incorrectly
>> if there are no calls to RMA routines.
>>
>> Best regards
>> Rolf
>>
>> ----- Original Message -----
>> > From: "Jim Dinan" <james.dinan@gmail.com>
>> > To: "MPI WG Remote Memory Access working group" <mpiwg-rma@lists.mpi-forum.org>
>> > Sent: Thursday, May 15, 2014 4:06:08 PM
>> > Subject: Re: [mpiwg-rma] Problems with RMA synchronization in combination
>> >          with load/store shared memory accesses
>> >
>> > Rolf,
>> >
>> > Here is an attempt to simplify your example for discussion. Given a
>> > shared memory window, shr_mem_win, with buffer shr_mem_buf:
>> >
>> >   MPI_Win_fence(MPI_MODE_NOSTORE | MPI_MODE_NOPUT | MPI_MODE_NOPRECEDE |
>> >                 MPI_MODE_NOSUCCEED, shr_mem_win);
>> >
>> >   shr_mem_buf[...] = ...;
>> >
>> >   MPI_Win_fence(MPI_MODE_NOPUT | MPI_MODE_NOPRECEDE |
>> >                 MPI_MODE_NOSUCCEED, shr_mem_win);
>> >
>> > Right now, the fence assertions don't say anything special about shared
>> > memory windows:
>> >
>> > [inline image: the MPI-3.0 MPI_Win_fence assertion definitions (p452)]
>> >
>> > NOPRECEDE/NOSUCCEED are defined in terms of MPI RMA function calls, and
>> > do not cover load/store. Thus, Rolf's usage appears to be correct
>> > per the current text. In the MPICH fence implementation,
>> > src/mpid/ch3/src/ch3u_rma_sync.c:935, we have:
>> >
>> >   if (!(assert & MPI_MODE_NOSUCCEED)) win_ptr->fence_issued = 1;
>> >
>> > Because of this check, we don't actually start an active target epoch
>> > on the first fence in the example above. On the second fence, we
>> > therefore don't perform the necessary synchronization, leading to
>> > incorrect output in Rolf's example.
>> >
>> > Question to WG: Do we need to update the fence assertions to better
>> > define their interaction with local load/store accesses and remote stores?
>> >
>> > If not, then Rolf's code is correct and we need to modify the check
>> > above in MPICH to something like:
>> >
>> >   if (!(assert & MPI_MODE_NOSUCCEED) ||
>> >       win_ptr->create_flavor == MPI_WIN_FLAVOR_SHARED)
>> >       win_ptr->fence_issued = 1;
>> >
>> > ~Jim.
>> >
>> >
>> > On Tue, Apr 8, 2014 at 12:02 PM, Rolf Rabenseifner <rabenseifner@hlrs.de> wrote:
>> >
>> > Jim,
>> >
>> > I'm now sure that mpich has a bug with assertions on shared memory
>> > windows.
>> >
>> > In the example, rcv_buf_left and rcv_buf_right are the windows.
>> > The only accesses to these rcv_buf_... are remote stores
>> > and purely local loads.
>> > Both kinds of accesses are done in different epochs surrounded by
>> > MPI_Win_fence.
>> >
>> > According to your interpretation (which is really okay),
>> > all fences can use all possible assertions,
>> > except that after the remote stores, MPI_MODE_NOSTORE cannot be used.
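A sketch of that pattern, from the viewpoint of a process whose receive
buffer is written by its neighbor (names follow Rolf's description but are
not taken from the attached code, and this assumes remote stores count as
stores on the target's window):

    /* Opening fence: nothing has been stored into my window since the last
       synchronization and no MPI RMA calls are involved, so all four
       assertions are allowed.                                             */
    MPI_Win_fence(MPI_MODE_NOSTORE | MPI_MODE_NOPUT |
                  MPI_MODE_NOPRECEDE | MPI_MODE_NOSUCCEED, win_rcv_buf_left);

    /* ... during this epoch the right neighbor stores directly into my
       rcv_buf_left through the shared-memory window ...                   */

    /* Closing fence: no MPI RMA calls were issued and no put/accumulate
       will update my window before the next fence (the neighbors use
       stores), so NOPUT, NOPRECEDE and NOSUCCEED remain allowed; but
       MPI_MODE_NOSTORE must be omitted because my local window was
       updated by the (remote) stores.                                     */
    MPI_Win_fence(MPI_MODE_NOPUT | MPI_MODE_NOPRECEDE |
                  MPI_MODE_NOSUCCEED, win_rcv_buf_left);

    /* only now may this process read its rcv_buf_left */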
>> >
>> > I updated the example, and mpich executes it incorrectly.
>> >
>> > Please check it yourself on your installation:
>> >   halo_1sided_store_win_alloc_shared_w-a-2-cray.c
>> >
>> > Without the assertions, everything works:
>> >   halo_1sided_store_win_alloc_shared_w-a-2NO-cray.c
>> >
>> > Could you verify that mpich has a bug?
>> >
>> > Additionally, I would recommend that we add after MPI-3.0 p451:33:
>> >
>> >   Note that in shared memory windows (allocated with
>> >   MPI_WIN_ALLOCATE_SHARED), there is no difference
>> >   between remote store accesses and local store accesses
>> >   to the window.
>> >
>> > This would help readers understand that "the local window
>> > was not updated by stores" does not mean "by local stores";
>> > see p452:1 and p452:9.
>> >
>> > Is this a good idea?
>> >
>> > Best regards
>> > Rolf
>> >
>> >
>> > ----- Original Message -----
>> > > From: "Jim Dinan" <james.dinan@gmail.com>
>> > > To: "MPI WG Remote Memory Access working group" <mpiwg-rma@lists.mpi-forum.org>
>> > > Sent: Friday, March 21, 2014 8:14:22 PM
>> > > Subject: Re: [mpiwg-rma] Problems with RMA synchronization in
>> > >          combination with load/store shared memory accesses
>> > >
>> >
>> > > Rolf,
>> > >
>> > > This line is incorrect:
>> > >
>> > >   MPI_Win_fence(MPI_MODE_NOSTORE + MPI_MODE_NOPRECEDE, win_rcv_buf_left);
>> > >
>> > > You need to do a bitwise OR of the assertions
>> > > (MPI_MODE_NOSTORE | MPI_MODE_NOPRECEDE).
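That is, assuming those two flags are what is wanted on that fence, the call
would read:

    MPI_Win_fence(MPI_MODE_NOSTORE | MPI_MODE_NOPRECEDE, win_rcv_buf_left);

(whether MPI_MODE_NOSTORE itself belongs there is a separate question, taken
up in the next paragraph).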
>> > >
>> > > In halo_1sided_store_win_alloc_shared.c, you are doing stores within
>> > > the epoch, so MPI_MODE_NOSTORE looks like an incorrect assertion on
>> > > the closing fence.
>> > >
>> > > Following the fence epoch, you are reading from the left/right recv
>> > > buffers. That also needs to be done within an RMA epoch if you are
>> > > reading non-local data.
>> > >
>> > > ~Jim.
>> > >
>> > >
>> > >
>> > > On Fri, Feb 21, 2014 at 6:07 AM, Rolf Rabenseifner <rabenseifner@hlrs.de> wrote:
>> > >
>> > > Dear members of the RMA group, and especially the mpich developers,
>> > >
>> > > I have real problems with the new shared memory in MPI-3.0,
>> > > i.e., loads/stores together with the RMA synchronization
>> > > cause wrong execution results.
>> > >
>> > > The attached
>> > > 1sided_halo_C_mpich_problems_rabenseifner.tar.gz or .zip
>> > > contains:
>> > >
>> > > - 1sided/halo_1sided_put_win_alloc.c
>> > >
>> > >   The basis that works. It uses MPI_Put and MPI_Win_fence for
>> > >   duplex left/right halo communication.
>> > >
>> > > - 1sided/halo_1sided_store_win_alloc_shared.c
>> > >
>> > >   This is the same, but a shared memory window is used and
>> > >   the MPI_Put is substituted by storing the data in the
>> > >   neighbor's window (a minimal sketch of this pattern follows
>> > >   after this list). Same MPI_Win_fence with the same assertions.
>> > >
>> > >   This does not work, although I'm sure that my assertions are
>> > >   correct.
>> > >
>> > >   Known possibilities:
>> > >   - I'm wrong and was not able to understand the assertions
>> > >     on MPI-3.0 p452:8-19.
>> > >   - I'm wrong because it is invalid to use MPI_Win_fence
>> > >     together with shared memory windows.
>> > >   - mpich has a bug.
>> > >   (The first two possibilities are the reason why I use this
>> > >   Forum email list.)
>> > >
>> > > - 1sided/halo_1sided_store_win_alloc_shared_w-a-cray.c
>> > >
>> > >   This is a work-around for Cray that works on our Cray
>> > >   and does not use MPI_MODE_NOPRECEDE and MPI_MODE_NOSUCCEED.
>> > >   It also runs on another mpich installation.
>> > >
>> > > - 1sided/halo_1sided_store_win_alloc_shared_pscw.c
>> > >
>> > >   Here, MPI_Win_fence is substituted by Post-Start-Complete-Wait,
>> > >   and it does not work for any assertions.
>> > >
>> > >   Same possibilities as above.
>> > >
>> > > - 1sided/halo_1sided_store_win_alloc_shared_query.c
>> > > - 1sided/halo_1sided_store_win_alloc_shared_query_w-a-cray.c
>> > >
>> > >   Same as halo_1sided_store_win_alloc_shared.c,
>> > >   but non-contiguous windows are used.
>> > >   Same problems as above.
>> > >
>> > > - 1sided/halo_1sided_store_win_alloc_shared_othersync.c
>> > >
>> > >   This version uses the synchronization according to #413;
>> > >   it is tested and works on two platforms.
>> > >
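As promised above, a minimal, self-contained sketch of the store-based halo
pattern that halo_1sided_store_win_alloc_shared.c is described as using. All
names here are illustrative and not taken from Rolf's attachment, the fences
carry no assertions, and the code assumes all ranks run on one shared-memory
node:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, disp_unit;
        double *my_seg, *left_seg, *right_seg;
        MPI_Aint seg_size;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* One shared-memory window; each process contributes two doubles:
           [0] is the halo cell written by its left neighbor,
           [1] is the halo cell written by its right neighbor.
           (For ranks spread over several nodes, the communicator would first
           have to be split with MPI_Comm_split_type(MPI_COMM_TYPE_SHARED).) */
        MPI_Win_allocate_shared(2 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, MPI_COMM_WORLD, &my_seg, &win);

        /* Direct load/store pointers into the neighbors' segments. */
        MPI_Win_shared_query(win, (rank + size - 1) % size,
                             &seg_size, &disp_unit, &left_seg);
        MPI_Win_shared_query(win, (rank + 1) % size,
                             &seg_size, &disp_unit, &right_seg);

        MPI_Win_fence(0, win);          /* open the epoch                 */

        /* Instead of MPI_Put: store directly into the neighbors' memory. */
        right_seg[0] = 100.0 + rank;    /* I am my right neighbor's left  */
        left_seg[1]  = 200.0 + rank;    /* I am my left neighbor's right  */

        MPI_Win_fence(0, win);          /* complete the epoch             */

        printf("rank %d: from left %g, from right %g\n",
               rank, my_seg[0], my_seg[1]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }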
>> > > Best regards
>> > > Rolf
>> > >
>> > > --
>> > > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de
>> > > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>> > > University of Stuttgart . . . . . . . . .. fax ++49(0)711/685-65832
>> > > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>> > > Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>> > >
>> >
>> > --
>> > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de
>> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>> > University of Stuttgart . . . . . . . . .. fax ++49(0)711/685-65832
>> > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>> > Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>> >
>>
>> --
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de
>> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
>> University of Stuttgart . . . . . . . . .. fax ++49(0)711/685-65832
>> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
>> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
>>
>
--
Jeff Hammond
jeff.science@gmail.com
http://jeffhammond.github.io/
_______________________________________________
mpiwg-rma mailing list
mpiwg-rma@lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma