[Mpi3-rma] RMA proposal 1 update

Underwood, Keith D keith.d.underwood at intel.com
Thu May 20 09:18:19 CDT 2010


> What is available in GA itself isn't really relevant to the Forum.  We
> need the functionality that enables someone to implement GA
> *efficiently* on current and future platforms.  We know ARMCI is
> *necessary* to implement GA efficiently on some platforms, but
> Vinod and I can provide very important cases where it is *not
> sufficient*.

Then let's enumerate those and work on a solution.

> The reason I want allfenceall is because a GA sync requires every
> process to fence all remote targets.  This is combined with a barrier,
> hence it might as well be a collective operation for everyone to fence
> all remote targets.  On BGP, implementing GA sync with fenceall from
> every node is hideous compared to what I can imagine can be done with
> active-message collectives.  I would bet a kidney it is hideous on
> Jaguar.  Vinod can sell my kidney in Singapore if I'm wrong.
> 
> The argument for allfenceall is the same as for sparse collectives.
> If there is an operation which could be done with multiple p2p calls,
> but has a collective character, it is guaranteed to be no worse to
> allow an MPI runtime to do it collectively.  I know that many
> applications will generate a sufficiently dense one-sided
> communication matrix to justify allfenceall.

So far, the argument I have heard for allflushall is: BGP does not give remote completion information to the source, so surely making it collective would be better.

When I challenged that and asked for an implementation sketch, the sketch provided was demonstrably worse in many scenarios than calling flushall followed by a barrier.  It would be far easier for the IBM people to do the math and show where the crossover point is, but so far, they haven't.
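
For reference, the baseline any collective proposal has to beat is just two calls.  A minimal sketch, assuming the draft MPI-3 names under discussion (MPI_Win_flush_all and MPI_Win_lock_all are proposal names and may still change):

    #include <mpi.h>

    /* GA_Sync()-style global completion built from the primitives
     * already in the proposal.  Assumes a passive-target access epoch
     * is open on the whole window (e.g. via MPI_Win_lock_all). */
    void ga_sync(MPI_Win win, MPI_Comm comm)
    {
        MPI_Win_flush_all(win);   /* complete my operations at all targets */
        MPI_Barrier(comm);        /* wait until everyone has done the same */
    }

A collective allfenceall has to beat that combination, not just a loop of per-target fences, and that crossover is exactly the math I am asking for.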

> If you reject allfenceall, then I expect, and for intellectual
> consistency demand, that you vigorously protest against sparse
> collectives when they are proposed on the basis that they can
> obviously be done with p2p efficiently already.  Heck, why not also
> deprecate MPI_Bcast etc., since on some networks it might not
> be faster than p2p?

MPI_Bcast can ALWAYS be made faster than a naïve implementation over p2p.  That is the point of a collective.  
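
To be concrete about why: the naive p2p version below is always available to the library as a fallback, so a collective can never be forced to do worse, and it is free to do better (standard MPI calls, nothing proposed here):

    #include <mpi.h>

    /* Naive p2p broadcast: the root sends size-1 messages, O(P) on the
     * root's link.  MPI_Bcast is free to use a tree, O(log P), or
     * anything better the network offers. */
    void naive_bcast(void *buf, int count, MPI_Datatype type, int root,
                     MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        if (rank == root) {
            for (int i = 0; i < size; i++)
                if (i != root)
                    MPI_Send(buf, count, type, i, 0, comm);
        } else {
            MPI_Recv(buf, count, type, root, 0, comm, MPI_STATUS_IGNORE);
        }
    }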

Ask Torsten how much flak I gave him over some of the things he has proposed for this very reason.  Torsten made a rational argument for sparse collectives: they convey information that the system can use for optimization.  I'm not 100% convinced, but he had to make that argument.

> It is really annoying that you are such an obstructionist.  It is
> extremely counter-productive to the Forum and I know of no one

I am attempting to hold all things to the standards set for MPI-3:

1) You need a use case.
2) You need an implementation.

Now, I tend to think that means you need an implementation that helps your use case.  In this particular case, you are asking to add collective completion to a one-sided completion model.  This is fundamentally inconsistent with the design of MPI RMA, which separates active target (collective completion) from passive target (one-sided completion).

That separation maps well to much of the known world of PGAS-like models: Co-Array Fortran uses collective completion and UPC uses one-sided completion (admittedly, a call to barrier will give collective completion in UPC, but that is because a barrier without completion is meaningless).  Mixing the two models puts us at risk of always getting poor one-sided completion implementations, since there is the "out" of telling people to call the collective completion routine.  That would effectively gut the advantages of passive target.
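
To make the separation concrete, the two completion models look like this today (all of these calls are already in MPI-2):

    #include <mpi.h>

    /* Active target: completion is collective over the window's group. */
    void active_update(double *buf, int n, int target, MPI_Aint disp,
                       MPI_Win win)
    {
        MPI_Win_fence(0, win);   /* collective: open the epoch */
        MPI_Put(buf, n, MPI_DOUBLE, target, disp, n, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);   /* collective: everyone completes together */
    }

    /* Passive target: completion is one-sided; the target makes no call. */
    void passive_update(double *buf, int n, int target, MPI_Aint disp,
                        MPI_Win win)
    {
        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        MPI_Put(buf, n, MPI_DOUBLE, target, disp, n, MPI_DOUBLE, win);
        MPI_Win_unlock(target, win);   /* only the origin completes */
    }

An allfenceall would graft the first model's collective completion onto the second model's epoch.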

So far, we have proposed adding (see the sketch after this list):

1) Completion independent of synchronization
2) Some key remote operations
3) An ability to operate on the full window in one epoch
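
For concreteness, here is how those three pieces compose.  A sketch using the draft names in the current proposal (MPI_Win_lock_all, MPI_Win_flush, MPI_Fetch_and_op; the exact spellings are still in flux):

    #include <mpi.h>

    void sketch(MPI_Win win, int target, MPI_Aint disp)
    {
        long one = 1, old;

        MPI_Win_lock_all(0, win);          /* (3) one epoch, full window */

        MPI_Fetch_and_op(&one, &old, MPI_LONG, target, disp,
                         MPI_SUM, win);    /* (2) a key remote operation */

        MPI_Win_flush(target, win);        /* (1) complete at 'target'
                                            * without closing the epoch;
                                            * 'old' is now valid */

        /* ... more communication in the same, still-open epoch ... */

        MPI_Win_unlock_all(win);
    }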

In my opinion, adding collective communication to passive target is a much bigger deal.

> deriving intellectual benefit from the endless stream of protests and
> demands for OpenSHMEM-like behavior.  As the ability to implement GA
> on top of MPI-3 RMA is a stated goal of the working group, I feel no
> shame in proposing function calls which are motivated entirely by this
> purpose.

Endless stream of demands for OpenSHMEM-like behavior?  I have asked (at times vigorously) for a memory model that would support the UPC memory model.  The ability to support UPC is also in that stated goal along with implementing GA.  I have used SHMEM as an example of that memory model being done in an API and having hardware support from vendors.  I have also argued that the memory model that supports UPC would be attractive to SHMEM users and that OpenSHMEM is likely to be a competitor for mind share for RMA-like programming models.  I have lost that argument to the relatively vague "that might make performance worse in some cases".  I find that frustrating, but I don't think I have raised it since the last meeting.

Keith



