[mpiwg-rma] [EXTERNAL] Re: Synchronization on shared memory windows

Rolf Rabenseifner rabenseifner at hlrs.de
Tue Feb 4 12:01:19 CST 2014


The MPI_WIN_SYNC (not the Fortran MPI_F_SYNC_REG)
has no meaning in the unified memory model if all accesses
are done without RMA routines.
It only has meaning if separate public and private copies
exist (MPI-3.0 p450:46-p451:2).
MPI-3.0 p456:3 - p457:7 defines the rules for the unified memory
model, but there is no need to use MPI_WIN_SYNC.
The combination of X=13 and MPI_F_SYNC_REG(X)
before MPI_Barrier should guarantee that all bytes of X are
stored in memory. The same should hold in C,
because the C compiler cannot see whether
MPI_Barrier will access the bytes of X or not.
And if the data is guaranteed to be in memory in the unified model,
then the other process (B) should be able to correctly
read it after returning from its barrier.
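
For concreteness, the same pattern written out in C (a minimal sketch
of my reading, not standard text; it assumes both processes run on one
node so that MPI_Win_allocate_shared succeeds directly on
MPI_COMM_WORLD, and it deliberately contains no MPI_WIN_SYNC):

    /* Process 0 stores into a shared window, both processes hit a
     * barrier, process 1 reads.  Whether an additional MPI_Win_sync
     * is required is exactly the question of this thread. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Win  win;
        int     *x;     /* this process's element of the window      */
        int     *x0;    /* rank 0's element, addressable by all ranks */
        MPI_Aint size;
        int      disp_unit, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process contributes one int to the shared window. */
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &x, &win);
        MPI_Win_shared_query(win, 0, &size, &disp_unit, &x0);

        if (rank == 0)
            *x0 = 13;   /* in C the compiler cannot know whether
                           MPI_Barrier reads *x0, so the store must
                           be issued before the call */
        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 1)
            printf("X = %d\n", *x0);   /* should this print 13? */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }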

What is wrong with my thinking?
Which detail am I missing?

Best regards
Rolf

----- Original Message -----
> From: "Jeff Hammond" <jeff.science at gmail.com>
> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
> Cc: "Stefan Andersson" <stefan at cray.com>, "Bill Long" <longb at cray.com>
> Sent: Tuesday, February 4, 2014 6:42:52 PM
> Subject: Re: [mpiwg-rma] [EXTERNAL] Re: Synchronization on shared memory windows
> 
> On Tue, Feb 4, 2014 at 11:39 AM, Rolf Rabenseifner
> <rabenseifner at hlrs.de> wrote:
> > Brian and all,
> >
> > No wording in MPI-3.0 says anything about the compiler
> > problems.
> > They are addressed by MPI_F_SYNC_REG (or a workaround with
> > MPI_Get_address)
> > to make sure that the store instruction is issued before I make
> > any synchronization calls.
> 
> See MPI-3 Section 11.7.4.  This is not new text.  It's been there for
> years.
> 
> > If I must do an additional MPI_WIN_SYNC, when must it be done?
> 
> Whenever you might have a race condition due to two threads or
> processes accessing the same memory via:
> - RMA + load/store
> - load/store on shared memory windows
> - RMA + non-RMA
> 
> This may not be an exhaustive list.
> 
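> One sketch (mine, not text from the standard) of where the sync calls
> would go in the shared-window load/store case, assuming win is the
> shared window and x0 points at rank 0's element, obtained via
> MPI_Win_shared_query:
> 
>     MPI_Win_lock_all(MPI_MODE_NOCHECK, win); /* passive-target epoch */
>     if (rank == 0)
>         *x0 = 13;          /* plain store into the shared window    */
>     MPI_Win_sync(win);     /* memory barrier: publish the store     */
>     MPI_Barrier(comm);     /* process synchronization               */
>     MPI_Win_sync(win);     /* memory barrier: observe remote stores */
>     if (rank == 1)
>         printf("X = %d\n", *x0);   /* plain load */
>     MPI_Win_unlock_all(win);
> 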
> Jeff
> 
> > My example is simple:
> > X is part of a shared memory window and refers to the same
> > memory location in both processes
> >
> > Process A         Process B
> >
> > X=13
> > MPI_F_SYNC_REG(X)
> > MPI_Barrier       MPI_Barrier
> >                   MPI_F_SYNC_REG(X)
> >                   print X
> >
> > Where exactly, and in which process, do I need an additional
> > MPI_WIN_SYNC?
> > Which wording in MPI-3.0 states this requirement?
> >
> > Best regards
> > Rolf
> >
> >
> >
> > ----- Original Message -----
> >> From: "Brian W Barrett" <bwbarre at sandia.gov>
> >> To: "MPI WG Remote Memory Access working group"
> >> <mpiwg-rma at lists.mpi-forum.org>
> >> Cc: "Stefan Andersson" <stefan at cray.com>, "Bill Long"
> >> <longb at cray.com>
> >> Sent: Tuesday, February 4, 2014 6:06:02 PM
> >> Subject: Re: [mpiwg-rma] [EXTERNAL] Re: Synchronization on shared
> >> memory windows
> >>
> >> On 2/4/14 9:59 AM, "Rolf Rabenseifner" <rabenseifner at hlrs.de>
> >> wrote:
> >>
> >> >Jeff,
> >> >
> >> >thank you for the MPI_FREE_MEM hint. Yes, I'll fix it in my
> >> >examples.
> >> >
> >> >About the synchronization problems:
> >> >If I use shared memory windows and direct remote loads and stores
> >> >instead of the RMA functions PUT or GET,
> >> >is it then correct if I never use the MPI_WIN synchronization
> >> >routines?
> >> >
> >> >I would expect this, because in the unified RMA model,
> >> >the load and store accesses to the neighbor's memory are done
> >> >directly and MPI is not involved.
> >> >Because shared memory windows must use the unified RMA model,
> >> >there should be no need for cache-flush routines.
> >> >Correct?
> >>
> >> No.  Maybe.  Sometimes.
> >>
> >> Basically, unified is saying that the memory has the same eventual
> >> completeness semantics that users have come to expect from shared
> >> memory programming.  But what users have come to expect depends on
> >> the architecture and the programming language.  So in some
> >> processor / language / compiler combinations, you might not need
> >> any synchronization.  In other combinations, you might need
> >> compiler or processor memory barriers in your code.  MPI
> >> guarantees, however, that MPI_WIN_SYNC will implement whatever
> >> compiler / processor memory barriers you need, so it can be used
> >> as the portable hammer in your toolchest of shared memory
> >> programming.
> >>
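> >> For illustration (my sketch, not text from the standard), these
> >> are the kinds of non-portable constructs MPI_WIN_SYNC saves you
> >> from writing yourself:
> >>
> >>     /* GCC-style compiler barrier (keeps the compiler from
> >>        reordering loads/stores across this point): */
> >>     __asm__ __volatile__("" ::: "memory");
> >>     /* x86 full hardware memory fence: */
> >>     __asm__ __volatile__("mfence" ::: "memory");
> >>     /* Portable MPI equivalent, inside a passive-target epoch: */
> >>     MPI_Win_sync(win);
> >>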
> >> If you're only using load/stores in shared memory windows, you
> >> should not need the other synchronization routines (FLUSH, etc.).
> >>
> >> Brian
> >>
> >>
> >> --
> >>   Brian W. Barrett
> >>   Scalable System Software Group
> >>   Sandia National Laboratories
> >>
> >>
> >>
> >>
> >
> 
> 
> 
> --
> Jeff Hammond
> jeff.science at gmail.com
> 

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


