[Mpi3-rma] Updated MPI-3 RMA proposal 1

Underwood, Keith D keith.d.underwood at intel.com
Thu Jun 17 12:01:02 CDT 2010


Bill encouraged us (and I agree) to get "everything that goes together into one proposal".  It was never intended that any proposal be a hodgepodge of unrelated things.  I thought the current goal was to have a minimal proposal that is passable and broadens the applicability of RMA (unfortunately, I think an ordering definition of some sort is part of "minimal"), and a second proposal for independent request tracking, since that is orthogonal to the memory model changes we were making to passive target.  If there are other things we need to do that don't fall into one of those categories, I would think they should be another ticket.

Keith

> -----Original Message-----
> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> bounces at lists.mpi-forum.org] On Behalf Of Pavan Balaji
> Sent: Thursday, June 17, 2010 11:53 AM
> To: MPI 3.0 Remote Memory Access working group
> Subject: Re: [Mpi3-rma] Updated MPI-3 RMA proposal 1
> 
> 
> Ok, I understand your disagreement. In the working group, the majority
> of us think that it is useful (especially to support GA, which was one
> of the goals of the WG). But, as I said, if the rest of the Forum
> totally hates it, we'll withdraw it -- it'll no longer be a proposal 1 vs.
> proposal 2 issue.
> 
> For the record, I'm very much against this two-proposal idea. If things
> are related and don't make sense individually, it is good to merge them
> into a single proposal. But now we are pretty much throwing random
> things together into a single proposal (e.g., proposal 2). IMO, we
> should address these items as individual tickets, get the Forum
> feedback on them, and then once we know what the Forum likes, put them
> into a single written proposal.
> 
> Otherwise, this discussion is pretty much becoming like -- "I want
> items X + Y, so I won't let the same proposal have Z since I don't
> think the Forum will like it". It's no longer a case of what's related
> and what's not.
> 
> Rajeev/Bill: we need some changes here.
> 
>   -- Pavan
> 
> On 06/17/2010 10:39 AM, Barrett, Brian W wrote:
> > I think it's semantically ugly, doesn't fit with the spirit of the
> > standard, and is unnecessary.  I understand others disagree.  I also
> > don't care enough to fight about it not being proposed (in the
> > general case).  However, I believe that others in the general forum
> > will have a strong reaction to it, and I'd really prefer not to have
> > proposal 1 (which is already going to be difficult to get done)
> > tainted with something that is not strictly necessary to fixing the
> > obviously broken pieces of the existing standard and that will
> > provoke strong reactions in the greater community.
> >
> > But like I said, I don't care enough to argue something without data.
> >
> > Brian
> >
> > --
> >   Brian W. Barrett
> >   Scalable System Software Group
> >   Sandia National Laboratories
> > ________________________________________
> > From: mpi3-rma-bounces at lists.mpi-forum.org [mpi3-rma-
> bounces at lists.mpi-forum.org] On Behalf Of Pavan Balaji
> [balaji at mcs.anl.gov]
> > Sent: Thursday, June 17, 2010 8:48 AM
> > To: MPI 3.0 Remote Memory Access working group
> > Subject: Re: [Mpi3-rma] Updated MPI-3 RMA proposal 1
> >
> > I don't see "I don't like allflushall, because I don't think the
> > Forum will like allflushall" as a good argument.
> >
> > There was no plenary for RMA this time. Of course, we'll take this to
> > the Forum and get their reaction to each piece before deciding on
> them.
> > If the rest of the Forum is OK with allflushall, what is your
> > *individual* complaint against it?
> >
> > Btw, if we figure that the Forum doesn't like any one of the items,
> > we'll drop them before taking them for the actual voting. So, none of
> > the proposals should actually get killed by individual items.
> >
> >   -- Pavan
> >
> > On 06/17/2010 09:38 AM, Barrett, Brian W wrote:
> >> What did the greater forum think of the all_flush_all idea?  I
> >> have to agree with Keith - I think including the all_flush_all
> >> would result in death to the parts of the proposal I care about.  I
> >> don't dislike all_flush_all (I don't like it either), but I do
> >> think others will have a strong reaction to it, and if it kills the
> >> ability to fix real problems with the standard, it's harmful.
> >>
> >> I know this is turning the question on its head.  We're not
> >> looking for the "right" thing, or the "best" thing, or the "easiest
> >> to support ARMCI" thing - we're looking at what we can get the
> >> standards body to agree to.  Otherwise, we're going to have nothing
> >> from where we are today, which I hope we all agree is worse than
> >> having proposal 1 without the all_flush_all.
> >>
> >> To me, removing the lock restriction makes sense.  I don't think
> >> it would overly burden the MPI implementation, and it has a fairly
> >> sane use case...
> >>
> >> Brian
> >>
> >> --
> >>   Brian W. Barrett
> >>   Scalable System Software Group
> >>   Sandia National Laboratories
> >> ________________________________________
> >> From: mpi3-rma-bounces at lists.mpi-forum.org [mpi3-rma-
> bounces at lists.mpi-forum.org] On Behalf Of Pavan Balaji
> [balaji at mcs.anl.gov]
> >> Sent: Wednesday, June 16, 2010 8:48 PM
> >> To: MPI 3.0 Remote Memory Access working group
> >> Subject: Re: [Mpi3-rma] Updated MPI-3 RMA proposal 1
> >>
> >> Btw, there was also some discussion on whether we can use
> >> MPI_Win_fence to achieve the same thing, but the problem was that
> >> we couldn't use flush/flushall in the Win_fence epoch; those are
> >> only valid in the lock/unlock epoch.
> >>
> >> So, basically, we wanted something like Win_fence, but one that
> >> could be used within the lock/unlock epoch.
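[Editor's sketch, not from the thread: the epoch distinction above, written against the flush interface as it was eventually standardized in MPI-3 (at the time of this message it was still a proposal). The window, target rank, and buffer are illustrative placeholders.]

```c
/* Sketch: MPI_Win_flush is legal only inside a passive-target
 * (lock/unlock) epoch, not inside a fence epoch. */
#include <mpi.h>

void passive_epoch(MPI_Win win, int target, double *buf)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Put(buf, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_flush(target, win);   /* legal here: completes the Put */
    MPI_Win_unlock(target, win);
}

void fence_epoch(MPI_Win win, int target, double *buf)
{
    MPI_Win_fence(0, win);
    MPI_Put(buf, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    /* calling MPI_Win_flush(target, win) here would be erroneous */
    MPI_Win_fence(0, win);        /* the fence completes the Put instead */
}
```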
> >>
> >>   -- Pavan
> >>
> >> On 06/16/2010 09:37 PM, Pavan Balaji wrote:
> >>> Keith,
> >>>
> >>> We had some discussion on the point that allflushall was not
> >>> one-sided like the rest of the passive target calls. But the
> >>> consensus was that we were really looking for functionality -- if
> >>> it doesn't match the rest of passive target, so be it. We can stop
> >>> calling it "passive target" and rename it to something else, if
> >>> that helps.
> >>>
> >>>   -- Pavan
> >>>
> >>> On 06/16/2010 09:09 PM, Underwood, Keith D wrote:
> >>>>> This ship has sailed.  You're the only person who has voiced an
> objection to all_flush_all and numerous others
> >>>>> have demonstrated at length why it is necessary.  I do not find
> the obsession with pure-passive target
> >>>>> semantics to be a compelling reason to reject all_flush_all.
> >>>> All_flush_all will further discourage good implementations of
> >>>> passive-target semantics.  To my understanding, many
> >>>> implementations don't provide compliant passive-target semantics
> >>>> now, and the introduction of all_flush_all will give those
> >>>> implementations another out.  So, yeah, call me a purist, but it
> >>>> isn't JUST about doing API design that looks reasonable and has
> >>>> reasonable names.
> >>>>
> >>>>>> Do we have the use cases for both of those and some attempt at
> >>>>>> quantification of the performance advantage in some
> implementation?
> >>>>> Yes.  I will not repeat the use case for all_flush_all since we
> >>>>> have discussed that topic far too long already.  Somebody else
> >>>>> should present the multiple locks issue generically before I
> >>>>> give the Global Arrays use case.
> >>>> I have REPEATEDLY asked for a quantification of the advantage,
> >>>> and NOBODY has offered a quantitative discussion.  There has been
> >>>> lots of hand waving and discussion of the challenges faced on BG,
> >>>> and even an interesting implementation case given by Brian Smith,
> >>>> but he agreed that there would be a crossover where all_flush_all
> >>>> could be slower than flush_all + barrier, since an iteration is
> >>>> involved.  And, right now, there is only one system I know of
> >>>> where there is an opportunity for an advantage: BG.  So, many of
> >>>> us CAN'T quantify the advantage.
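[Editor's sketch, not from the thread: the "flush_all + barrier" combination being compared against the proposed collective. MPI_Win_flush_all and MPI_Barrier are the calls as later standardized in MPI-3; the single collective discussed here (all_flush_all) did not become part of the standard and exists only as the hypothetical it was in this debate.]

```c
/* Sketch: complete and globally synchronize all outstanding RMA
 * inside an MPI_Win_lock_all epoch.  A hypothetical all_flush_all
 * would collapse these two calls into one collective.  Window
 * creation and the RMA operations themselves are elided. */
#include <mpi.h>

void complete_all_rma(MPI_Win win, MPI_Comm comm)
{
    /* Complete this process's outstanding RMA operations at all
     * targets of the passive-target epoch... */
    MPI_Win_flush_all(win);
    /* ...then synchronize so every process knows all flushes are done. */
    MPI_Barrier(comm);
}
```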
> >>>>
> >>>> Keith
> >>>>
> >>>> _______________________________________________
> >>>> mpi3-rma mailing list
> >>>> mpi3-rma at lists.mpi-forum.org
> >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
> >> --
> >> Pavan Balaji
> >> http://www.mcs.anl.gov/~balaji
> >
> > --
> > Pavan Balaji
> > http://www.mcs.anl.gov/~balaji
> 
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji



