[Mpi3-rma] MPI 3 RMA Examples needed

Jeff Hammond jeff.science at gmail.com
Sun Jan 31 19:52:56 CST 2010


Steve,

See https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/RmaWikiPage .

I haven't seen anything in the proposed additions to the standard that
interferes with PGAS.  However, a strong passive-progress guarantee would
burn a thread on many architectures, so implementations would have to be
mindful of what a PGAS runtime was doing.  Of course, vendors implementing
both MPI and UPC, for example, could "do the right thing" and ensure
interoperability while avoiding redundant progress threads.
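
To make the progress point concrete, here is a minimal passive-target
sketch (purely illustrative; the buffer, the value 42, and the sleep()
stand in for real work): rank 1 makes no MPI calls while rank 0's epoch
is open, so whether the MPI_Get completes promptly depends on whether the
implementation provides asynchronous (strong) passive progress, e.g. via
a helper thread.

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0, remote = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 1) buf = 42;                  /* data to be read remotely */

        /* collective window creation over all ranks */
        MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        if (rank == 0) {
            /* passive-target epoch: rank 1 takes no part in synchronization */
            MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
            MPI_Get(&remote, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
            MPI_Win_unlock(1, win);               /* must complete the Get */
            printf("rank 0 read %d\n", remote);
        } else if (rank == 1) {
            sleep(10);                            /* "compute": no MPI calls */
        }

        MPI_Win_free(&win);                       /* collective */
        MPI_Finalize();
        return 0;
    }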

Best,

Jeff

On Sun, Jan 31, 2010 at 5:44 PM, Stephen Poole <spoole at ornl.gov> wrote:
> Keith,
>
>  Where can I read all of the proposal ? I want to make sure this does not
> have a negative impact on what needs to be done for PGAS/APGAS.
>
> Best
> Steve...
>
> Underwood, Keith D wrote:
>>
>> This is an excellent framing of one of the things that will help the RMA
>> group move forward; however, I do believe we need to get performance into
>> the process.  What I do not know is how we separate performance of the
>> interface from performance of an implementation.  Just because you are able
>> to implement something without unbearably awkward code doesn't mean that
>> code will perform.  How do we account for what is a semantic problem (it
>> isn't POSSIBLE to implement this piece of code and make it go fast) and what
>> is an implementation problem (nobody has bothered to make this piece of code
>> fast)?  That input is just as important as what is syntactically awkward.
>>
>> Keith
>>
>>> -----Original Message-----
>>> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
>>> bounces at lists.mpi-forum.org] On Behalf Of William Gropp
>>> Sent: Sunday, January 31, 2010 7:17 AM
>>> To: MPI 3.0 Remote Memory Access working group
>>> Subject: [Mpi3-rma] MPI 3 RMA Examples needed
>>>
>>> Dear MPI RMA Group,
>>>
>>> We have several partial MPI RMA proposals.  To move forward, we need
>>> to have a better understanding of the real needs of users, and we will
>>> probably need to make some tough decisions about what we will support
>>> and what we won't (as Marc has noted, some fairly obvious shared
>>> memory operations are very tough in OpenMP, so it's clear that being
>>> universal isn't required).
>>>
>>> What we'd like by this *Friday* are some specific examples of
>>> operations that are hard to achieve in MPI RMA *and* that have a clear
>>> application need.  What we *don't* want is simply "we should have a
>>> better Put", "we need active messages", or "the implementations I've
>>> used are too slow".  What we do want is something like the following:
>>>
>>> We've implemented a halo exchange with MPI-RMA, and the construction
>>> of the memory windows is awkward and limiting, particularly when the
>>> domains are created dynamically, making it hard to create the memory
>>> windows collectively.  We need either a method that lets us export a
>>> local window or a way to allow all processes to refer to a single
>>> window (something like the MPI_WIN_WORLD proposal).  Example code can
>>> be found at <url here> (or post on wiki).
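>>>
>>> A minimal sketch of the collective step that becomes awkward (halo_buf,
>>> halo_bytes, and cart_comm are placeholder names, not part of any
>>> proposal):
>>>
>>>     MPI_Win halo_win;
>>>     /* every process in cart_comm must participate, which is hard to
>>>        arrange when subdomains come and go dynamically */
>>>     MPI_Win_create(halo_buf, halo_bytes, 1, MPI_INFO_NULL,
>>>                    cart_comm, &halo_win);
>>>     /* ... MPI_Win_fence / MPI_Put of ghost cells / MPI_Win_fence ... */
>>>     MPI_Win_free(&halo_win);    /* also collective */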
>>>
>>> or
>>>
>>> We need a fetch-and-increment (or something similar) to implement a
>>> remote lock that will allow us to make a complex series of remote
>>> updates (and accesses) atomically, as needed for <specific
>>> application description here>.  As shown in Using MPI-2, while a
>>> fetch-and-increment is possible in MPI-RMA, it is extremely complex
>>> and awkward.
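>>>
>>> As a rough sketch of that idiom (names such as COUNTER_RANK and the
>>> helper below are placeholders; assumes <mpi.h> and <stdlib.h>): the
>>> counter process exposes an array with one int per rank, because a get
>>> and an accumulate may not touch the same location in one epoch.
>>>
>>>     int fetch_and_increment(MPI_Win win, int rank, int nprocs,
>>>                             int *my_increments)  /* local call count */
>>>     {
>>>         int one = 1, i, value = *my_increments;
>>>         int *vals = malloc(nprocs * sizeof(int));
>>>
>>>         MPI_Win_lock(MPI_LOCK_EXCLUSIVE, COUNTER_RANK, 0, win);
>>>         if (rank > 0)                       /* fetch slots below ours */
>>>             MPI_Get(vals, rank, MPI_INT, COUNTER_RANK,
>>>                     0, rank, MPI_INT, win);
>>>         if (rank < nprocs - 1)              /* fetch slots above ours */
>>>             MPI_Get(vals + rank + 1, nprocs - rank - 1, MPI_INT,
>>>                     COUNTER_RANK, rank + 1, nprocs - rank - 1,
>>>                     MPI_INT, win);
>>>         MPI_Accumulate(&one, 1, MPI_INT, COUNTER_RANK, rank, 1,
>>>                        MPI_INT, MPI_SUM, win);  /* add 1 to own slot */
>>>         MPI_Win_unlock(COUNTER_RANK, win);
>>>
>>>         for (i = 0; i < nprocs; i++)
>>>             if (i != rank) value += vals[i];
>>>         (*my_increments)++;
>>>         free(vals);
>>>         return value;                       /* value before our add */
>>>     }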
>>>
>>> We'll take these examples and compare them to the current proposals
>>> and the original MPI RMA in order to evaluate where we are.
>>>
>>> Again, please send us your concrete requirements by Friday, Feb 5th.
>>> Thanks!
>>>
>>> Bill and Rajeev
>>>
>>> William Gropp
>>> Deputy Director for Research
>>> Institute for Advanced Computing Applications and Technologies
>>> Paul and Cynthia Saylor Professor of Computer Science
>>> University of Illinois Urbana-Champaign
>>>
>>>
>>>
>>>
>>
>
> --
>
> ======================>
>
> Steve Poole
> Computer Science and Mathematics Division
> Chief Scientist / Director of Special Programs
> National Center for Computational Sciences Division (OLCF)
> Chief Architect
> Oak Ridge National Laboratory
> 865.574.9008
> "Wisdom is not a product of schooling, but of the lifelong attempt to
> acquire it" Albert Einstein
>
> =====================>
>



-- 
Jeff Hammond
Argonne Leadership Computing Facility
jhammond at mcs.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond



