[mpiwg-rma] Short question on the ccNUMA memory reality
maikpeterson at googlemail.com
Thu Aug 7 08:14:00 CDT 2014
I really wonder how a team can work on RMA technologies without
understanding the basics of today's computer systems?
2014-08-05 20:01 GMT+02:00 Dave Goodell (dgoodell) <dgoodell at cisco.com>:
> On Aug 5, 2014, at 12:37 PM, Rolf Rabenseifner <rabenseifner at hlrs.de>
> > Dave,
> > thank you for this helpful answer.
> > My question was related to
> >> The sentence p436:45-46
> >> "The order in which data is written is not
> >> specified unless further synchronization is used."
> > Your citation helps, because it tells the usual expectation:
> >> 1. Each CPU will always perceive its own memory accesses as occurring
> >> in program order.
> > But in the MPI Standard nothing should be expected as usual.
> Right, since the MPI Standard is at least one more level removed from the
> architecture. The behavior of an MPI program is also dependent on the
> behavior of the C or Fortran programming language implementation, which
> isn't addressed at all by the McKenney model.
> I think Bronis made an important, valid point earlier:
> > Simply put, shared memory programming without compiler
> > assistance is not something ordinary programmers should do.
> > The text p436:43-48 may be modified into
> > Advice to users.
> > If accesses in the RMA unified model are not synchronized (with
> > locks or flushes, see Section 11.5.3), load and store operations might
> > observe changes to the memory while they are in progress. The order in
> > which data is written is not specified unless further synchronization
> > is used. This might lead to inconsistent views on memory, and programs
> > that assume that a transfer is complete by only checking parts of the
> > message are erroneous.
> > NEW: The only consistent view is that each process will always
> > perceive its own memory accesses as occurring in program order.
> > (End of advice to users.)
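To make the advice above concrete, here is a minimal sketch (mine, not from the thread or the Standard) of one way to provide the "further synchronization" the cited text requires: a MPI_Win_sync / MPI_Barrier / MPI_Win_sync sequence inside a passive-target epoch on a shared-memory window, so that rank 1's load is ordered after rank 0's store. The value 42 and the overall structure are illustrative only.

```c
/* Illustrative sketch: ordering a direct store and load on an MPI-3
 * shared-memory window in the unified model.  Assumes all ranks run
 * on one node (a requirement of MPI_Win_allocate_shared). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    int *base;  /* this process's segment of the shared window */
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One int per process, allocated as a shared-memory window. */
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &base, &win);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);

    if (rank == 0)
        base[0] = 42;            /* direct store into the window */

    MPI_Win_sync(win);           /* complete local stores to the window */
    MPI_Barrier(MPI_COMM_WORLD); /* order the processes relative to each other */
    MPI_Win_sync(win);           /* refresh this process's view of the window */

    if (rank == 1) {
        int *peer, disp;
        MPI_Aint sz;
        /* Locate rank 0's segment and load from it directly. */
        MPI_Win_shared_query(win, 0, &sz, &disp, &peer);
        printf("rank 1 sees %d\n", peer[0]);
    }

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Without the sync/barrier/sync sequence, rank 1's load would race with rank 0's store, and per the quoted text the program would be erroneous.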
> > But one can argue that this is already expected by everyone.
> > In my opinion, it is better to expect from MPI shared memory
> > windows only the semantics that is written in black and white
> > in the MPI Standard.
> I'll defer to the more active members of the RMA working group on this
> proposed modification. In general, I'm really not fond of the entire MPI
> RMA shared memory feature. I think it is something that should have been
> left to a library outside of MPI, and that it's impossible to clearly
> specify shared memory programming behavior in the MPI Forum's vernacular.
> mpiwg-rma mailing list
> mpiwg-rma at lists.mpi-forum.org