[mpiwg-rma] Problems with RMA synchronization in combination with load/store shared memory accesses

William Gropp wgropp at illinois.edu
Sun Jun 1 14:07:16 CDT 2014


Errata and changes to the documents are separate.  Errata take effect when passed.

Bill

William Gropp
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign





On Jun 1, 2014, at 9:06 PM, Jeff Hammond wrote:

> For MPI-3.1?
> 
> On Sunday, June 1, 2014, William Gropp <wgropp at illinois.edu> wrote:
> We can always do errata.  
> 
> Bill
> 
> William Gropp
> Director, Parallel Computing Institute
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
> 
> 
> 
> 
> 
> On Jun 1, 2014, at 8:51 PM, Jim Dinan wrote:
> 
>> I tend to agree with Jeff.  On some architectures, different operations are required to make my operations visible to others versus making operations performed by others visible to me.
>> 
>> Is this meeting the last call for errata, or is it the September meeting?
>> 
>>  ~Jim.
>> 
>> 
>> On Sat, May 31, 2014 at 4:44 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>> Remote load-store cannot be treated like local load-store from a
>> sequential consistency perspective.  If a process does local
>> load-store, it is likely that no memory barrier will be required to
>> see a consistent view of memory.  When another process does
>> load-store, this changes dramatically.
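>> 
>> As a minimal sketch of that case (not a definitive recipe -- it assumes
>> a shared-memory window with at least two processes, and all variable
>> names are illustrative):
>> 
>>   #include <mpi.h>
>>   #include <stdio.h>
>> 
>>   int main(int argc, char **argv)
>>   {
>>       int rank, disp, *mine, *peer;
>>       MPI_Aint size;
>>       MPI_Win win;
>> 
>>       MPI_Init(&argc, &argv);
>>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>> 
>>       /* One int per process in a shared-memory window. */
>>       MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
>>                               MPI_COMM_WORLD, &mine, &win);
>>       /* Pointer to rank 1's segment, accessible by direct load/store. */
>>       MPI_Win_shared_query(win, 1, &size, &disp, &peer);
>> 
>>       MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
>>       if (rank == 1) {
>>           mine[0] = 42;             /* plain store by rank 1            */
>>           MPI_Win_sync(win);        /* memory barrier: publish it       */
>>       }
>>       MPI_Barrier(MPI_COMM_WORLD);  /* order the store before the load  */
>>       if (rank == 0) {
>>           MPI_Win_sync(win);        /* memory barrier: observe it       */
>>           printf("rank 0 sees %d\n", peer[0]);
>>       }
>>       MPI_Win_unlock_all(win);
>> 
>>       MPI_Win_free(&win);
>>       MPI_Finalize();
>>       return 0;
>>   }
>> 
>> The point is the pair of MPI_Win_sync calls: for purely local
>> load/store they would not be needed, but once another process is the
>> one doing the stores, the reader needs them (or an equivalent
>> synchronization) to be guaranteed a consistent view.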
>> 
>> Jeff
>> 
>> On Sat, May 31, 2014 at 3:31 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>> > I think before ticket 429 (https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/429) is put up for a vote as errata, the RMA working group needs to decide whether remote loads/stores to shared memory windows are treated as local loads and stores or as put/get operations (for the purpose of the assert definitions). The text will be different depending on that.
>> >
>> > If remote loads/stores to shared memory windows are considered as local loads/stores they will be covered under MPI_MODE_NOSTORE; if considered as put/get operations, they will be covered under MPI_MODE_NOPRECEDE, MPI_MODE_NOSUCCEED, and MPI_MODE_NOPUT.
>> >
>> > Ticket 429 says they should be considered as local loads/stores.
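>> >
>> > As a sketch of the difference (illustrative only; it assumes a
>> > shared-memory window win and that rank 1 obtained a pointer peer0 to
>> > rank 0's segment via MPI_Win_shared_query):
>> >
>> >   MPI_Win_fence(0, win);     /* opening fence                        */
>> >   if (rank == 1)
>> >       peer0[0] = 42;         /* direct store into rank 0's segment   */
>> >   MPI_Win_fence(0, win);     /* closing fence                        */
>> >
>> > If the store counts as a (local) store, rank 0 may not pass
>> > MPI_MODE_NOSTORE at the closing fence, since its local window was
>> > updated by a store. If it counts as a put, then rank 0 may not pass
>> > MPI_MODE_NOPUT at the opening fence, and rank 1 may not pass
>> > MPI_MODE_NOSUCCEED there or MPI_MODE_NOPRECEDE at the closing fence.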
>> >
>> > Rajeev
>> >
>> >
>> > On May 27, 2014, at 1:25 PM, Jim Dinan <james.dinan at gmail.com> wrote:
>> >
>> >> Hi Rolf,
>> >>
>> >> MPI_MODE_NOSTORE applies to local updates that should be made visible to other processes following the end of the access epoch.  I believe that visibility of updates made by other processes was intended to be incorporated into the NOPRECEDE/NOSUCCEED assertions.  I think that Hubert's proposal may be the right approach -- that remote load/store accesses to the shared memory window should be treated as "RMA" (i.e., analogous to get/put) operations.
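>> >>
>> >> For illustration (a sketch only, assuming a shared-memory window win
>> >> and a pointer peer1 to rank 1's segment obtained with
>> >> MPI_Win_shared_query), that reading would treat the direct load below
>> >> like the MPI_Get in the comment:
>> >>
>> >>   int val;
>> >>   MPI_Win_fence(0, win);
>> >>   val = peer1[0];   /* direct load from rank 1's segment, treated as if
>> >>                        it were:
>> >>                        MPI_Get(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win); */
>> >>   MPI_Win_fence(0, win);
>> >>
>> >> Under that reading, the first fence on this process could not assert
>> >> MPI_MODE_NOSUCCEED and the second could not assert MPI_MODE_NOPRECEDE,
>> >> exactly as for a real MPI_Get.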
>> >>
>> >>  ~Jim.
>> >>
>> >>
>> >> On Mon, May 19, 2014 at 1:16 PM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>> >> Jim and RMA WG,
>> >>
>> >> There are now two questions:
>> >>
>> >> Jim asked:
>> >> > Question to WG: Do we need to update the fence assertions to better
>> >> > define interaction with local load/store accesses and remote stores?
>> >> >
>> >>
>> >> Rolf asked:
>> >> > Additionally, I would recommend that we add after MPI-3.0 p451:33
>> >> >
>> >> >   Note that in shared memory windows (allocated with
>> >> >   MPI_WIN_ALLOCATE_SHARED), there is no difference
>> >> >   between remote store accesses and local store accesses
>> >> >   to the window.
>> >> >
>> >> > This would help readers understand that "the local window
>> >> > was not updated by stores" does not mean only "by local stores";
>> >> > see p452:1 and p452:9.
>> >>
>> >> For me, it is important to understand the meaning of the
>> >> current assertions when they are used in a shared memory window.
>> >> Therefore my proposal above is intended as an erratum to MPI-3.0.
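>> >>
>> >> A short sketch of why the two are indistinguishable (illustrative
>> >> only; it assumes two processes in the shared window, inside an
>> >> appropriate access epoch, with mine being my own segment from
>> >> MPI_Win_allocate_shared):
>> >>
>> >>   int disp, *peer;
>> >>   MPI_Aint size;
>> >>   /* Pointer into the neighbour's segment of the shared mapping. */
>> >>   MPI_Win_shared_query(win, (rank + 1) % 2, &size, &disp, &peer);
>> >>
>> >>   mine[0] = 1;   /* "local" store into my own segment              */
>> >>   peer[0] = 1;   /* "remote" store into the neighbour's segment:   */
>> >>                  /* the same plain CPU store, only to a different  */
>> >>                  /* address -- nothing marks it as remote.         */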
>> >>
>> >> In MPI-3.1 and 4.0, you may want to add additional assertions.
>> >>
>> >> Your analysis below will also show that MPICH implements
>> >> Post-Start-Complete-Wait synchronization incorrectly
>> >> if there are no calls to RMA routines.
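>> >>
>> >> The pattern in question looks like this (a sketch only, assuming a
>> >> shared-memory window, the usual origin/target groups, and a pointer
>> >> peer0 to rank 0's segment; it is not a statement about what any
>> >> particular implementation actually does):
>> >>
>> >>   if (rank == 0) {                       /* target: exposure epoch   */
>> >>       MPI_Win_post(origin_group, 0, win);
>> >>       MPI_Win_wait(win);
>> >>       /* Is rank 1's store guaranteed to be visible here? */
>> >>       printf("rank 0 sees %d\n", mine[0]);
>> >>   } else if (rank == 1) {                /* origin: access epoch     */
>> >>       MPI_Win_start(target_group, 0, win);
>> >>       peer0[0] = 42;                     /* only a direct store, no  */
>> >>                                          /* RMA call in the epoch    */
>> >>       MPI_Win_complete(win);
>> >>   }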
>> >>
>> >> Best regards
>> >> Rolf
>> >>
>> >> ----- Original Message -----
>> >> > From: "Jim Dinan" <james.dinan at gmail.com>
>> >> > To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> >
> 
> 
> -- 
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
> _______________________________________________
> mpiwg-rma mailing list
> mpiwg-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-rma
