[mpiwg-rma] RMA Errata

Rolf Rabenseifner rabenseifner at hlrs.de
Thu Jan 29 03:12:07 CST 2015


As far as I can see, the Forum went in the direction of
keeping the sentence (MPI-3.0, page 410, lines 17-19):
"A consistent view can be created in the unified
memory model (see Section 11.4) by utilizing the
window synchronization functions (see Section 11.5)
or explicitly completing outstanding store accesses
(e.g., by calling MPI_WIN_FLUSH)."
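
For illustration, here is a minimal sketch of one such pattern
on a shared memory window in the unified model (my own example,
not text from the standard; it uses MPI_WIN_SYNC plus a barrier,
and it assumes that all processes of MPI_COMM_WORLD run on one
node, otherwise one would first call MPI_COMM_SPLIT_TYPE with
MPI_COMM_TYPE_SHARED):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Win win;
      int *base, *p0;
      int rank, disp;
      MPI_Aint size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* one int per process in a shared memory window */
      MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                              MPI_COMM_WORLD, &base, &win);
      /* pointer to rank 0's portion of the shared window */
      MPI_Win_shared_query(win, 0, &size, &disp, &p0);

      MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
      if (rank == 0)
          *p0 = 42;                  /* direct store into the window */
      MPI_Win_sync(win);             /* window sync acts as a memory barrier */
      MPI_Barrier(MPI_COMM_WORLD);   /* process-to-process synchronization */
      MPI_Win_sync(win);             /* second memory barrier for the readers */
      if (rank == 1)
          printf("rank 1 sees %d\n", *p0);  /* consistent view: 42 */
      MPI_Win_unlock_all(win);

      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }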

For the PSCW synchronization, this definition seems
too restrictive to me, because the wording about local
and remote accesses implies a differentiation between
the local part of a shared memory window and the
remote parts of that shared memory, which is unnatural
for shared memory.

To overcome this unnatural definition of PSCW synchronization
(and of the other one-sided synchronization methods as well),
I wrote a solution in ticket #456 with a generalized
definition that
- has semantics identical to the existing text (cited above)
  for the case that one makes such an unnatural differentiation
  between local and remote shared memory parts,
- but additionally describes the synchronization
  without this differentiation, i.e., by defining the
  semantics of the synchronization functions
  for accesses to the same shared memory location
  by two processes (sketched below).
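
To make this concrete, here is a minimal sketch of the access
pattern that the generalized definition is meant to cover (my
own illustration of the #456 semantics, not text from the
ticket; it assumes exactly two processes, both on one node):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Win win;
      MPI_Group world_grp, peer_grp;
      int *base, *loc;
      int rank, peer, disp;
      MPI_Aint size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      peer = 1 - rank;               /* assumes exactly two processes */

      MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                              MPI_COMM_WORLD, &base, &win);
      MPI_Win_shared_query(win, 0, &size, &disp, &loc);  /* shared location */

      MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
      MPI_Group_incl(world_grp, 1, &peer, &peer_grp);

      if (rank == 0) {               /* writer */
          MPI_Win_start(peer_grp, 0, win);
          *loc = 42;                 /* direct store to the shared location */
          MPI_Win_complete(win);     /* under #456: the store is ordered ... */
      } else {                       /* reader */
          MPI_Win_post(peer_grp, 0, win);
          MPI_Win_wait(win);         /* ... before MPI_Win_wait returns here */
          printf("rank 1 sees %d\n", *loc);
      }

      MPI_Group_free(&peer_grp);
      MPI_Group_free(&world_grp);
      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }

The point is that both processes access the same location, and
the PSCW calls alone provide both the process synchronization
and the needed memory fences.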

As far as I can see, this more general definition does not
require any implementation changes in the MPI libraries,
because everything an implementation must already do to
satisfy the cited sentence also implies the semantics
defined in ticket #456.

By the way, using only C11 functionality means that the
user has to program their own process-to-process synchronization
and all needed memory fences.
It may make a code faster, but not really easier to program.
There is a high risk of introducing bugs or of using more
fences than necessary.
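
For comparison, here is a sketch of what such hand-written
synchronization looks like with C11 release/acquire atomics
(`data' and `flag' are illustrative pointers into the shared
window; note also that placing C11 atomics in memory shared
between processes works on common platforms, but is, as far
as I know, not formally guaranteed by the C standard):

  #include <stdatomic.h>

  /* `data' and `flag' point into the shared window (obtained,
   * e.g., via MPI_Win_shared_query); flag is initially 0. */
  extern int        *data;
  extern atomic_int *flag;

  void writer(void)                  /* e.g., rank 0 */
  {
      *data = 42;                    /* ordinary store to shared memory */
      atomic_store_explicit(flag, 1, memory_order_release);
  }

  void reader(void)                  /* e.g., rank 1 */
  {
      while (atomic_load_explicit(flag, memory_order_acquire) == 0)
          ;                          /* spin until the writer releases */
      /* the acquire load orders the reads: *data is now 42 */
  }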
The definitions in #456, based on general access patterns,
are easy to program because the synchronization and the memory
fences are combined inside of the one-sided synchronization
routines (as in the PSCW sketch above).

Rolf
  

----- Original Message -----
> From: "William Gropp" <wgropp at illinois.edu>
> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
> Sent: Thursday, January 29, 2015 9:01:25 AM
> Subject: Re: [mpiwg-rma] RMA Errata
> 
> I believe that this is not only a pragmatic approach, but the correct one.
> You should be able to use the C11 atomic memory features to have a
> portable shared memory code (once we have full C11 compilers).
> 
> Bill
> 
> On Jan 28, 2015, at 11:22 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> 
> > Thanks.  I understand that this isn't an easy issue.  We are trying to
> > use MPI-3 shared memory windows internally with PSCW-like
> > synchronization and I am inclined to just give up and use x86
> > intrinsics to do what I know is sufficient for our processors and
> > compilers, just to make it easy (and fast).
> > 
> > Jeff
> > 
> > On Wed, Jan 28, 2015 at 1:43 PM, William Gropp <wgropp at illinois.edu> wrote:
> >> That’s a harder topic and I wanted to think more about it.  Here are the
> >> two issues with which I am most concerned:
> >> 
> >> 1) We are defining the interaction of MPI with stuff that happens outside of
> >> MPI, in the programming language.  Yes, we always sort of did, but this is
> >> in an area where many skilled people have made mistakes, and the likelihood
> >> of an error is much higher.  The lack of precision in these discussions
> >> reinforces my concern.
> >> 
> >> 2) We are not considering the potential overhead of requiring more than a
> >> small set of RMA synchronization routines to guarantee some shared memory
> >> synchronization.  The sense that I get is that the Forum is unconcerned
> >> about this, which I think is a serious mistake, and one for which I have
> >> yet to see a compelling use case.
> >> 
> >> Bill
> >> 
> >> On Jan 28, 2015, at 7:46 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> >> 
> >> Fine, but it would be helpful if you would respond to my comments
> >> about allowing point-to-point _synchronization_ on shared memory using
> >> PSCW, since those were the more germane ones anyways.
> >> 
> >> 
> >> 
> > 
> > 
> > 
> > --
> > Jeff Hammond
> > jeff.science at gmail.com
> > http://jeffhammond.github.io/
> 
> 

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)


