[mpiwg-rma] Ticket 435 and Re: MPI_Win_allocate_shared and synchronization functions

Rolf Rabenseifner rabenseifner at hlrs.de
Fri Jul 11 12:31:37 CDT 2014


Bill,

> In general I am opposed to decreasing the performance of applications
> by mandating strong synchronization semantics.

I tried to put all known pieces together in the new ticket
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/437

The ticket does not add additional synchronization;
rather, it tries to establish a proper meaning for shared memory
windows.

If an application does not want to synchronize between
a sending process and a receiving process, it can use
MPI_WIN_SYNC as shown in the example added to the MPI-3.0 errata in

https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/413

and based on the Erratum in the new ticket

https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/436
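
For illustration, a minimal sketch of such a pattern (my sketch only, not
the errata text; the function name shared_store_load and the communicator
shm_comm are placeholders; unified memory model assumed): two processes of
a shared memory window order a store and a load with MPI_WIN_SYNC plus a
point-to-point message inside a passive-target epoch.

  #include <mpi.h>

  /* Sketch: sender-receiver synchronization of load/store accesses on a
     shared memory window using MPI_Win_sync + point-to-point messages.  */
  void shared_store_load(MPI_Comm shm_comm)
  {
      MPI_Win  win;
      int     *base, *remote;
      MPI_Aint size;
      int      rank, disp_unit, dummy = 0, value;

      MPI_Comm_rank(shm_comm, &rank);
      MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                              shm_comm, &base, &win);

      MPI_Win_lock_all(MPI_MODE_NOCHECK, win);   /* passive-target epoch */

      if (rank == 0) {
          base[0] = 42;                        /* store to the shared window     */
          MPI_Win_sync(win);                   /* memory barrier: publish store  */
          MPI_Send(&dummy, 1, MPI_INT, 1, 0, shm_comm);  /* process sync         */
      } else if (rank == 1) {
          MPI_Win_shared_query(win, 0, &size, &disp_unit, &remote);
          MPI_Recv(&dummy, 1, MPI_INT, 0, 0, shm_comm, MPI_STATUS_IGNORE);
          MPI_Win_sync(win);                   /* memory barrier: see the store  */
          value = remote[0];                   /* load of the value stored above */
          (void)value;
      }

      MPI_Win_unlock_all(win);
      MPI_Win_free(&win);
  }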

I hope we can discuss this as soon as possible.
MPI-3.1 should not be released before the meaning of shared
memory windows is clear.
This discussion has already been delayed for about 21 months:
Hubert asked for clarification as early as Sep 11, 2012, with the subject
"[Mpi3-rma] MPI_Win_allocate_shared and synchronization functions".

Best regards
Rolf




----- Original Message -----
> From: "William Gropp" <wgropp at illinois.edu>
> To: "Rolf Rabenseifner" <rabenseifner at hlrs.de>
> Cc: "Bill Gropp" <wgropp at uiuc.edu>, "Rajeev Thakur" <thakur at anl.gov>, "Hubert Ritzdorf"
> <Hubert.Ritzdorf at EMEA.NEC.COM>, "Pavan Balaji" <balaji at anl.gov>, "Torsten Hoefler" <htor at inf.ethz.ch>, "MPI WG
> Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
> Sent: Wednesday, July 2, 2014 4:46:40 PM
> Subject: Re: Ticket 435 and Re: MPI_Win_allocate_shared and synchronization functions
> 
> In general I am opposed to decreasing the performance of applications
> by mandating strong synchronization semantics.
> 
> I will have to study this particular instance carefully.
> 
> Bill
> 
> William Gropp
> Director, Parallel Computing Institute
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
> 
> On Jul 2, 2014, at 9:38 AM, Rolf Rabenseifner wrote:
> 
> Bill, Rajeev, and all other RMA WG members,
> 
> Hubert and Torsten already discussed in 2012 the meaning of
> the MPI one-sided synchronization routines for MPI-3.0 shared memory.
> 
> This question is still unresolved in the MPI-3.0 + errata.
> 
> Does the term "RMA operation" include "a remote load/store
> from an origin process to the window memory on a target"?
> 
> Or not?
> 
> Ticket #435 expects "not".
> 
> In this case, MPI_Win_fence and post-start-complete-wait
> cannot be used to synchronize a process that sends data
> with a process that receives data when both use only
> local and remote load/stores on shared memory windows.
> 
> Ticket 435 extends the meaning of
> MPI_Win_fence and post-start-complete-wait so that they
> provide sender-receiver synchronization between processes
> that use local and remote load/stores
> on shared memory windows.
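> 
> As an illustration only (a sketch of the intended effect, not text from
> the ticket; "rank1_base" is a placeholder pointer obtained with
> MPI_Win_shared_query): with this extension, a fence would also order a
> store by one process against a load by another process on the same
> shared memory window, e.g.
> 
>   MPI_Win_fence(0, win);
>   if (rank == 0)
>       rank1_base[0] = 42;     /* remote store into rank 1's segment    */
>   MPI_Win_fence(0, win);      /* would then also act as sender-receiver
>                                  synchronization for ...               */
>   if (rank == 1)
>       value = rank1_base[0];  /* ... this local load by rank 1         */
>   MPI_Win_fence(0, win);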
> 
> I hope that all RMA working group members agree
> - that currently the behavior of these sync routines for
>   shared memory remote load/stores is undefined, because
>   the term "RMA operation" itself is not clearly defined,
> - and that we need an erratum that resolves this problem.
> 
> What is your opinion about the solution provided in
> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/435 ?
> 
> Best regards
> Rolf
> 
> PS: Ticket 435 is the result of a discussion among Pavan, Hubert,
> and me at ISC 2014.
> 
> ----- Original Message -----
> From: Hubert Ritzdorf
> Sent: Tuesday, September 11, 2012 7:26 PM
> To: mpi3-rma at lists.mpi-forum.org
> Subject: MPI_Win_allocate_shared and synchronization functions
> 
> Hi,
> 
> it's quite unclear what Page 410, Lines 17-19:
> 
>   "A consistent view can be created in the unified memory model
>    (see Section 11.4) by utilizing the window synchronization
>    functions (see Section 11.5)"
> 
> really means. Section 11.5 doesn't mention any (load/store) access
> to shared memory.
> 
> Thus, must
> 
> (*) "RMA communication calls" and "RMA operations" be interpreted as
>     RMA communication calls (MPI_GET, MPI_PUT, ...) and
>     ANY load/store access to the shared window,
> (*) "put call"        as put call and any store to shared memory,
> (*) "get call"        as get call and any load from shared memory,
> (*) "accumulate call" as accumulate call and any load or store access
>     to the shared window ?
> 
> Example: Assertion MPI_MODE_NOPRECEDE
> 
> Does
> 
>   "the fence does not complete any sequence of locally issued RMA calls"
> 
> mean, for windows created by MPI_Win_allocate_shared(),
> 
>   "the fence does not complete any sequence of locally issued RMA calls
>    or any load/store access to the window memory" ?
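> 
> (Illustration only, not text from the standard; "base" is a placeholder
> pointer into the shared window:
> 
>    base[0] = 1;                            /* store to the shared window */
>    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
> 
> i.e., does the assertion only promise that no MPI_Put/MPI_Get/
> MPI_Accumulate was issued since the previous fence, or does it also
> have to cover such a preceding store?)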
> 
> It's not clear to me. It will probably not be clear for the standard
> MPI user. RMA operations are defined only as MPI functions on window
> objects (as far as I can see). But possibly I'm totally wrong and the
> synchronization functions synchronize only the RMA communication
> calls (MPI_GET, MPI_PUT, ...).
> 
> Hubert
> 
> -----------------------------------------------------------------------------
> 
> Wednesday, September 12, 2012 11:37 AM
> 
> Hubert,
> 
> This is what I was referring to. I'm in favor of this proposal.
> 
> Torsten
> 
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
> 
> 

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


