[Mpi3-rma] MPI 3 RMA Examples needed

Underwood, Keith D keith.d.underwood at intel.com
Mon Feb 1 10:27:58 CST 2010


That is certainly a good start, but I am concerned that two issues will be hard to untangle:

1) IB isn't exactly "the best you can do" in terms of RMA support.
2) What application usage scenario are you going to model?

Brian and I prepped some slides for the last meeting (http://meetings.mpi-forum.org/secretary/2010/01/slides.php and then the RMA-put-January-2010 slides) that suggest the API is not entirely broken from the perspective of the arguments passed to Put (we should debate the significant difference between the possible MPI_Put rate and the possible shmem_put() rate, but it isn't 10x or anything).  More concerning are questions about the actual usage model.  If, for example, something about the semantic model required you to call one of the synchronization operations too often, that would be relevant to know.

Keith

> -----Original Message-----
> From: Jeff Hammond [mailto:jeff.science at gmail.com]
> Sent: Sunday, January 31, 2010 7:15 PM
> To: Underwood, Keith D
> Cc: MPI 3.0 Remote Memory Access working group
> Subject: Re: [Mpi3-rma] MPI 3 RMA Examples needed
> 
> Keith,
> 
> Pavan and I are trying to separate implementation versus semantic
> shortcomings by comparing ARMCI and MVAPICH over IB, and I will try to
> also include OFED in an apples-to-apples-to-apples comparison of the
> simplest operations.  Both ARMCI and MVAPICH are heavily tuned for IB
> and so I expect the implementation quality to be fairly similar;
> hence, performance differences are probably due to semantics.
> 
> We are also going to see how close we can get to ARMCI with
> MPI-2 RMA in an attempt to elucidate the semantic limitations of the
> MPI standard.  Obviously, ARMCI is not the end-all-be-all in one-sided
> APIs, but I cannot get GASNet to function and OpenSHMEM is vaporware
> (i.e. I cannot download it nor is it available on any machine I have
> access to) so I choose to focus exclusively on ARMCI.
> 
> Jeff
> 
> On Sun, Jan 31, 2010 at 5:47 PM, William Gropp <wgropp at illinois.edu>
> wrote:
> > I agree that performance is an important issue, and we will need to
> > discuss it.  However, all too many of the current implementations are
> > unnecessarily poor in performance, so to first order, I believe we
> > should concentrate on what our users need (and believe can be
> > implemented efficiently on some platform) rather than the experience
> > people have had with slow and unoptimized implementations.
> >
> > Bill
> >
> > On Jan 31, 2010, at 5:34 PM, Underwood, Keith D wrote:
> >
> >> This is an excellent framing of one of the things that will help the
> >> RMA group move forward; however, I do believe we need to get
> >> performance into the process.  What I do not know is how we separate
> >> performance of the interface from performance of an implementation.
> >> Just because you are able to implement something without unbearably
> >> awkward code doesn't mean that code will perform.  How do we account
> >> for what is a semantic problem (it isn't POSSIBLE to implement this
> >> piece of code and make it go fast) and what is an implementation
> >> problem (nobody has bothered to make this piece of code fast)?  That
> >> input is just as important as what is syntactically awkward.
> >>
> >> Keith
> >>
> >>> -----Original Message-----
> >>> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> >>> bounces at lists.mpi-forum.org] On Behalf Of William Gropp
> >>> Sent: Sunday, January 31, 2010 7:17 AM
> >>> To: MPI 3.0 Remote Memory Access working group
> >>> Subject: [Mpi3-rma] MPI 3 RMA Examples needed
> >>>
> >>> Dear MPI RMA Group,
> >>>
> >>> We have several partial MPI RMA proposals.  To move forward, we need
> >>> to have a better understanding of the real needs of users, and we
> >>> will probably need to make some tough decisions about what we will
> >>> support and what we won't (as Marc has noted, some fairly obvious
> >>> shared memory operations are very tough in OpenMP, so it's clear
> >>> that being universal isn't required).
> >>>
> >>> What we'd like by this *Friday* are some specific examples of
> >>> operations that are hard to achieve in MPI RMA *and* that have a
> >>> clear application need.  What we *don't* want is simply "we should
> >>> have a better Put", "we need active messages", or "the
> >>> implementations I've used are too slow".  What we do want is
> >>> something like the following:
> >>>
> >>> We've implemented a halo exchange with MPI-RMA, and the construction
> >>> of the memory windows is awkward and limiting, particularly if the
> >>> domains are created dynamically, making it hard to create the memory
> >>> windows collectively.  We need either a method that lets us export a
> >>> local window or a way to allow all processes to refer to one single
> >>> window (something like the MPI_WIN_WORLD proposal).  Example code
> >>> can be found at <url here> (or post on wiki).
> >>>
> >>> or
> >>>
> >>> We need a fetch and increment (or something similar) to implement a
> >>> remote lock that will allow us to make a complex series of remote
> >>> updates (and accesses) atomically that are needed for <specific
> >>> application description here>.  As shown in Using MPI-2, while a
> >>> fetch and increment is possible in MPI-RMA, it is extremely complex
> >>> and awkward.
> >>>
> >>> We'll take these examples and compare them to the current proposals
> >>> and the original MPI RMA in order to evaluate where we are.
> >>>
> >>> Again, please send us your concrete requirements by Friday, Feb 5th.
> >>> Thanks!
> >>>
> >>> Bill and Rajeev
> >>>
> >>> William Gropp
> >>> Deputy Director for Research
> >>> Institute for Advanced Computing Applications and Technologies
> >>> Paul and Cynthia Saylor Professor of Computer Science
> >>> University of Illinois Urbana-Champaign
> >>>
> >>>
> >>>
> >>>
> >>> _______________________________________________
> >>> mpi3-rma mailing list
> >>> mpi3-rma at lists.mpi-forum.org
> >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
> >>
> >
> >
> >
> >
> >
> >
> 
> 
> 
> --
> Jeff Hammond
> Argonne Leadership Computing Facility
> jhammond at mcs.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond



