[Mpi3-rma] MPI 3 RMA Examples needed

Stephen Poole spoole at ornl.gov
Sun Jan 31 17:44:06 CST 2010


Keith,

   Where can I read the full proposal? I want to make sure this does
not have a negative impact on what needs to be done for PGAS/APGAS.

Best
Steve...

Underwood, Keith D wrote:
> This is an excellent framing of one of the things that will help the RMA group move forward; however, I do believe we need to get performance into the process.  What I do not know is how we separate performance of the interface from performance of an implementation.  Just because you are able to implement something without unbearably awkward code doesn't mean that code will perform.  How do we account for what is a semantic problem (it isn't POSSIBLE to implement this piece of code and make it go fast) and what is an implementation problem (nobody has bothered to make this piece of code fast)?  That input is just as important as what is syntactically awkward.
> 
> Keith
> 
>> -----Original Message-----
>> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
>> bounces at lists.mpi-forum.org] On Behalf Of William Gropp
>> Sent: Sunday, January 31, 2010 7:17 AM
>> To: MPI 3.0 Remote Memory Access working group
>> Subject: [Mpi3-rma] MPI 3 RMA Examples needed
>>
>> Dear MPI RMA Group,
>>
>> We have several partial MPI RMA proposals.  To move forward, we need
>> to have a better understanding of the real needs of users, and we will
>> probably need to make some tough decisions about what we will support
>> and what we won't (as Marc has noted, some fairly obvious shared
>> memory operations are very tough in OpenMP, so it's clear that being
>> universal isn't required).
>>
>> What we'd like by this *Friday* are some specific examples of
>> operations that are hard to achieve in MPI RMA *and* that have a clear
>> application need.  What we *don't* want is simply "we should have a
>> better Put", "we need active messages", or "the implementations I've
>> used are too slow".  What we do want is something like the following:
>>
>> We've implemented a halo exchange with MPI-RMA, and the construction
>> of the memory windows is awkward and limiting, particularly if the
>> domains are created dynamically, which makes it hard to create the
>> windows collectively.  We need either a method that lets us export a
>> local window or a way to allow all processes to refer to one single
>> window (something like the MPI_WIN_WORLD proposal).  Example code can
>> be found at <url here> (or posted on the wiki).
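>>
>> For illustration only, a stripped-down 1-D version of the kind of code
>> we mean might look roughly like the sketch below (sizes, names, and the
>> periodic decomposition are made up; it is the collective MPI_Win_create
>> that becomes the problem when the domains are dynamic):
>>
>> #include <mpi.h>
>>
>> int main(int argc, char **argv)
>> {
>>     int     rank, size, left, right, i, nlocal = 1000;
>>     double *local;              /* 2 ghost cells + nlocal interior cells */
>>     MPI_Win win;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>     left  = (rank - 1 + size) % size;     /* periodic 1-D decomposition */
>>     right = (rank + 1) % size;
>>
>>     MPI_Alloc_mem((nlocal + 2) * sizeof(double), MPI_INFO_NULL, &local);
>>     for (i = 0; i < nlocal + 2; i++)
>>         local[i] = rank;
>>
>>     /* MPI_Win_create is collective over the communicator: every process
>>        must participate, which is what becomes awkward when the domains
>>        (and hence the windows) are created dynamically. */
>>     MPI_Win_create(local, (nlocal + 2) * sizeof(double), sizeof(double),
>>                    MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>
>>     /* One exchange step: put boundary cells into the neighbors' ghosts. */
>>     MPI_Win_fence(0, win);
>>     MPI_Put(&local[1],      1, MPI_DOUBLE, left,  nlocal + 1, 1, MPI_DOUBLE, win);
>>     MPI_Put(&local[nlocal], 1, MPI_DOUBLE, right, 0,          1, MPI_DOUBLE, win);
>>     MPI_Win_fence(0, win);
>>
>>     MPI_Win_free(&win);
>>     MPI_Free_mem(local);
>>     MPI_Finalize();
>>     return 0;
>> }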
>>
>> or
>>
>> We need a fetch and increment (or something similar) to implement a
>> remote lock that will allow us to perform, atomically, a complex
>> series of remote updates (and accesses) needed for <specific
>> application description here>.  As shown in Using MPI-2, while a fetch
>> and increment is possible in MPI-RMA, it is extremely complex and
>> awkward.
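>>
>> To make the awkwardness concrete, the MPI-2 idiom (patterned on the
>> NXTVAL example in Using MPI-2) has to spread the counter over one slot
>> per process, accumulate into its own slot, and get everyone else's,
>> because MPI-2 provides no atomic read-modify-write.  A rough sketch,
>> not a recommended implementation:
>>
>> #include <mpi.h>
>> #include <stdio.h>
>> #include <stdlib.h>
>>
>> /* The counter is an array of 'nprocs' ints in a window on rank 0
>>    (disp_unit = sizeof(int)); its logical value is the sum of the slots.
>>    Each process accumulates only into its own slot and gets the others,
>>    so no location is both read and updated in the same epoch. */
>> static int fetch_and_increment(MPI_Win win, int rank, int nprocs, int *my_count)
>> {
>>     int one = 1, i, value = *my_count;
>>     int *others = malloc((nprocs - 1) * sizeof(int));
>>
>>     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
>>     for (i = 0; i < nprocs; i++) {        /* read every slot but our own */
>>         if (i < rank)
>>             MPI_Get(&others[i], 1, MPI_INT, 0, i, 1, MPI_INT, win);
>>         else if (i > rank)
>>             MPI_Get(&others[i - 1], 1, MPI_INT, 0, i, 1, MPI_INT, win);
>>     }
>>     MPI_Accumulate(&one, 1, MPI_INT, 0, rank, 1, MPI_INT, MPI_SUM, win);
>>     MPI_Win_unlock(0, win);
>>
>>     for (i = 0; i < nprocs - 1; i++)      /* old value = sum of all slots */
>>         value += others[i];
>>     free(others);
>>     (*my_count)++;
>>     return value;
>> }
>>
>> int main(int argc, char **argv)
>> {
>>     int rank, nprocs, i, my_count = 0, *slots = NULL;
>>     MPI_Win win;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>>
>>     if (rank == 0) {                      /* rank 0 hosts the counter */
>>         MPI_Alloc_mem(nprocs * sizeof(int), MPI_INFO_NULL, &slots);
>>         for (i = 0; i < nprocs; i++) slots[i] = 0;
>>     }
>>     MPI_Win_create(slots, rank == 0 ? nprocs * sizeof(int) : 0,
>>                    sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>
>>     printf("rank %d drew ticket %d\n", rank,
>>            fetch_and_increment(win, rank, nprocs, &my_count));
>>
>>     MPI_Win_free(&win);
>>     if (rank == 0) MPI_Free_mem(slots);
>>     MPI_Finalize();
>>     return 0;
>> }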
>>
>> We'll take these examples and compare them to the current proposals
>> and the original MPI RMA in order to evaluate where we are.
>>
>> Again, please send us your concrete requirements by Friday, Feb 5th.
>> Thanks!
>>
>> Bill and Rajeev
>>
>> William Gropp
>> Deputy Director for Research
>> Institute for Advanced Computing Applications and Technologies
>> Paul and Cynthia Saylor Professor of Computer Science
>> University of Illinois Urbana-Champaign
>>
> 

-- 

======================>

Steve Poole
Computer Science and Mathematics Division
Chief Scientist / Director of Special Programs
National Center for Computational Sciences Division (OLCF)
Chief Architect
Oak Ridge National Laboratory
865.574.9008
"Wisdom is not a product of schooling, but of the lifelong attempt to
acquire it" Albert Einstein

=====================>


