[Mpi3-rma] mpi3-rma post from bradc at cray.com requires approval

Underwood, Keith D keith.d.underwood at intel.com
Sat Jun 5 14:40:49 CDT 2010

I was only giving an example of how tightly ordering COULD be defined.  Ordering options include:

1) Ordering within a given replace: is the first byte guaranteed to arrive before the last?
2) Ordering between replaces to a given location: but what happens if two replaces overlap?
3) Ordering among all replaces to a given node.

Two-sided gives you something odd: it orders the matching of the message headers, not the completion of the messages or the data within them.


> -----Original Message-----
> From: Pavan Balaji [mailto:balaji at mcs.anl.gov]
> Sent: Saturday, June 05, 2010 3:30 PM
> To: Underwood, Keith D
> Cc: MPI 3.0 Remote Memory Access working group; bradc at cray.com
> Subject: Re: [Mpi3-rma] mpi3-rma post from bradc at cray.com requires
> approval
> I see. My definition of ordering was a little bit different from yours.
> My definition was -- if I do two accumulates with replace on the same
> location, I'm guaranteed to find the second value in that location. It
> said nothing about ordering between accesses to two different locations.
> So, I think we need to come to a consensus first on what the actual
> definition of ordering is.
>   -- Pavan
> On 06/05/2010 02:22 PM, Underwood, Keith D wrote:
> >>> We would need to think about whether we have to have the whole
> >>> message ordered or ordered on a per target address basis.
> >> Atomicity and ordering go hand-in-hand; if there's no atomicity,
> >> ordering doesn't make sense. Since we have basic-datatype atomicity
> >> for accumulate/get_accumulate, ordering would make sense at that
> >> granularity as well.
> >>
> >> If someone wants to propose full-message atomicity, then we can
> >> consider ordering at that granularity too. But until then,
> >> whole-message ordering is overkill.
> >
> > Well, they aren't orthogonal, but they aren't quite that tightly
> > linked. A user who knew that two messages would not overlap might
> > want full-message ordering from a single node for completion
> > detection. E.g., an MPI_Accumulate() with "replace" to one buffer,
> > followed by an MPI_Accumulate() to another buffer to increment a
> > flag variable, relying on full-message ordering so that the latter
> > can signal completion without the expense of a flush() between the
> > messages. So it has value and a usage scenario. I just don't know
> > whether we want to go that far.
> >
> > Keith
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
