[Mpi3-rma] mpi3-rma Digest, Vol 13, Issue 1

Quincey Koziol koziol at hdfgroup.org
Tue Feb 2 08:07:18 CST 2010


Hi all,
	I have two difficult problems in HDF5 that I need to solve, hopefully with the help of some of the new RMA work:

	- HDF5 files have a single global resource: the space in the file itself.  I'm willing to concede that I may not be able to track individual pieces of free space in the file when operating in parallel, but I definitely need to be able to allocate new space at the end of the file in a "thread-safe" way, without setting aside any MPI process or thread to manage the "end of file" value.  I believe that an atomic fetch-and-increment would solve this problem, as long as it could increment by an arbitrary amount (not just '1') and didn't involve any other MPI processes besides the one requesting the space in the file.
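
	For concreteness, here is a minimal sketch of the kind of operation I have in mind, written against an MPI-3-style MPI_Fetch_and_op (i.e. the sort of interface one of the proposals might provide).  The layout, with a single end-of-file counter exposed from rank 0, is just an assumption for illustration, not HDF5's actual metadata design:

#include <mpi.h>

/* Sketch: atomically reserve `nbytes` of new space at the current end
 * of file.  Assumes the current EOF offset is a single MPI_Offset
 * value exposed in `eof_win` at displacement 0 on rank 0 (hypothetical
 * layout).  Only the calling process takes an active role. */
MPI_Offset alloc_at_eof(MPI_Win eof_win, MPI_Offset nbytes)
{
    MPI_Offset old_eof;

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, eof_win);
    /* Fetch the current EOF and advance it by nbytes in one atomic RMA
     * operation: a fetch-and-increment by an arbitrary amount. */
    MPI_Fetch_and_op(&nbytes, &old_eof, MPI_OFFSET, 0, 0, MPI_SUM,
                     eof_win);
    MPI_Win_unlock(0, eof_win);

    return old_eof;   /* the newly reserved space begins at this offset */
}

	The returned offset is the start of the caller's newly reserved region; the next caller's fetch sees the advanced value.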

	- HDF5 files also have multiple distributed (or perhaps "local" is a better word) resources that need to be operated on atomically (i.e. without involving other MPI processes).  For example, when an application wants to create a new object in an HDF5 file, the following operations (or something similar) need to occur:

	1*) <Allocate space for the object in the file>
	2) <Write the information for the new object to the space allocated>
	3*) <Check if an object of the requested name exists in a group in the file, and if not, modify the group's data structure(s) to include a link to the new object>

	Operations marked with a '*' need to happen atomically and without other processes being involved.  Step 3) seems analogous to a fetch-and-increment in an abstract way, but I'm not certain how to solve it.  Would the "Remote Method Invocation" proposal be able to take care of this sort of operation, which is essentially equivalent to inserting an object into a hash table?
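
	Again for concreteness, below is a rough sketch of how the atomic parts of step 3) might be guarded with a per-group lock built from a compare-and-swap, assuming MPI-3-style MPI_Compare_and_swap and MPI_Fetch_and_op operations.  The window holding one integer lock word per group, and the helper names, are hypothetical and only for illustration:

#include <mpi.h>

/* Sketch: a spinlock protecting one group's link table.  `lock_win`
 * exposes one int lock word per group; `owner` is the rank holding the
 * word for this group and `disp` its displacement (hypothetical
 * layout).  0 = free, 1 = held. */
static void group_lock(MPI_Win lock_win, int owner, MPI_Aint disp)
{
    const int one = 1, zero = 0;
    int prev;

    do {
        MPI_Win_lock(MPI_LOCK_SHARED, owner, 0, lock_win);
        /* Try to change the lock word from 0 (free) to 1 (held). */
        MPI_Compare_and_swap(&one, &zero, &prev, MPI_INT,
                             owner, disp, lock_win);
        MPI_Win_unlock(owner, lock_win);
    } while (prev != 0);    /* retry until we observed it free */
}

static void group_unlock(MPI_Win lock_win, int owner, MPI_Aint disp)
{
    const int zero = 0;
    int prev;

    MPI_Win_lock(MPI_LOCK_SHARED, owner, 0, lock_win);
    /* Atomically write 0 back into the lock word to release it. */
    MPI_Fetch_and_op(&zero, &prev, MPI_INT, owner, disp,
                     MPI_REPLACE, lock_win);
    MPI_Win_unlock(owner, lock_win);
}

	With such a lock held, the requesting process could read the group's data structure(s), check whether the name already exists, and, if not, write the link to the new object, all without any other process taking an active role.  That is still several round trips, though, which is why a single remote "insert if absent" style operation would be attractive.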

	Quincey

On Jan 31, 2010, at 11:00 AM, mpi3-rma-request at lists.mpi-forum.org wrote:

> Message: 1
> Date: Sun, 31 Jan 2010 08:16:48 -0600
> From: William Gropp <wgropp at illinois.edu>
> Subject: [Mpi3-rma] MPI 3 RMA Examples needed
> To: "MPI 3.0 Remote Memory Access working group"
> 	<mpi3-rma at lists.mpi-forum.org>
> Message-ID: <A696E02D-340B-4F5E-B16E-227B13E0A844 at illinois.edu>
> Content-Type: text/plain; charset="US-ASCII"; format=flowed; delsp=yes
> 
> Dear MPI RMA Group,
> 
> We have several partial MPI RMA proposals.  To move forward, we need
> to have a better understanding of the real needs of users, and we will
> probably need to make some tough decisions about what we will support
> and what we won't (as Marc has noted, some fairly obvious shared
> memory operations are very tough in OpenMP, so it's clear that being
> universal isn't required).
> 
> What we'd like by this *Friday* are some specific examples of
> operations that are hard to achieve in MPI RMA *and* that have a clear
> application need.  What we *don't* want is simply "we should have a
> better Put", "we need active messages", or "the implementations I've
> used are too slow".  What we do want is something like the following:
> 
> We've implemented a halo exchange with MPI-RMA, and the construction
> of the memory windows is awkward and limiting, particularly if the
> domains are created dynamically, making it hard to create the memory
> windows collectively.  We need either a method that lets us export a
> local window or a way to allow all processes to refer to one single
> window (something like the MPI_WIN_WORLD proposal).  Example code can
> be found at <url here> (or posted on the wiki).
> 
> or
> 
> We need a fetch and increment (or something similar) to implement a
> remote lock that will allow us to atomically make the complex series
> of remote updates (and accesses) needed for <specific application
> description here>.  As shown in Using MPI-2, while a fetch and
> increment is possible in MPI-RMA, it is extremely complex and
> awkward.
> 
> We'll take these examples and compare them to the current proposals
> and the original MPI RMA in order to evaluate where we are.
> 
> Again, please send us your concrete requirements by Friday, Feb 5th.
> Thanks!
> 
> Bill and Rajeev
> 
> William Gropp
> Deputy Director for Research
> Institute for Advanced Computing Applications and Technologies
> Paul and Cynthia Saylor Professor of Computer Science
> University of Illinois Urbana-Champaign