[Mpi3-rma] non-contiguous support in RMA & one-sided pack/unpack (?)

Jeff Hammond jeff.science at gmail.com
Tue Sep 15 17:19:51 CDT 2009


The arguments to MPI_RMA_xfer make no reference to datatypes,
suggesting that only contiguous patches of primitive types will be
supported.  Do I understand this correctly?  I searched the old emails
but could not find an answer to this question, if one already exists.
I apologize for my inadequate search skills if I missed it.

It seems there are a few possibilities for non-contiguous support in RMA:

1. RMA is decidedly low-level and makes no attempt to support
non-contiguous data
2. RMA supports arbitrary datatypes, including derived datatypes for
non-contiguous patches (this and (3) are sketched just after the list)
3. RMA supports non-contiguous patches via a few simple mechanisms -
strided, etc. - like ARMCI
4. RMA supports non-contiguous patches implicitly using one-sided
pack/unpack functionality, presumably implemented with active messages
5. RMA stipulates non-contiguous support but is vague enough to allow
a variety of implementations
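
For concreteness, options (2) and (3) already have rough analogues in
existing interfaces: MPI-2's MPI_Put accepts derived datatypes on both
the origin and target sides, and ARMCI exposes strided transfers
directly.  A minimal, untested sketch of both, writing one column of a
row-major N x N double array (window and memory setup omitted):

  #include <mpi.h>
  #include <armci.h>

  /* Option (2) flavor: a derived datatype describing one column of a
     row-major N x N array of doubles at the target. */
  void put_column_mpi(double *origin_buf, int N, int target, MPI_Win win)
  {
      MPI_Datatype column;
      MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column); /* N blocks of 1, stride N */
      MPI_Type_commit(&column);
      MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
      MPI_Put(origin_buf, N, MPI_DOUBLE,  /* contiguous at the origin */
              target, 0, 1, column, win); /* strided at the target */
      MPI_Win_unlock(target, win);
      MPI_Type_free(&column);
  }

  /* Option (3) flavor: the equivalent ARMCI strided put. */
  void put_column_armci(double *origin_buf, double *target_buf, int N, int target)
  {
      int count[2]      = { (int)sizeof(double), N };  /* segment size, segment count */
      int src_stride[1] = { (int)sizeof(double) };     /* origin is contiguous */
      int dst_stride[1] = { N * (int)sizeof(double) }; /* one row apart at target */
      ARMCI_PutS(origin_buf, src_stride, target_buf, dst_stride, count, 1, target);
  }

The question is whether MPI_RMA_xfer intends to expose anything
comparable to either of these.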

It is not my intent to request any or all of the aforementioned
features, but merely to suggest them as possible ideas to be discussed,
then adopted or eliminated based upon their relative merits and the
philosophical preferences of the principals (e.g. Vinod).

(4) seems rather challenging, but potentially desirable in certain
contexts where a large number of sub-MPI calls impedes performance.
Of course, one-sided unpack could behave very badly if implemented or
used incorrectly, and is perhaps too risky to consider.
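
To make the risk in (4) concrete, here is a purely hypothetical sketch
of a target-side handler that an active message might trigger.  None of
these names exist in MPI or any vendor API, and I am glossing over how
the datatype description would be serialized to the target:

  #include <mpi.h>

  /* Hypothetical AM handler, run at the target when a packed buffer
     arrives.  MPI_Unpack is real; everything about how this handler is
     invoked and how target_type gets here is invented. */
  void unpack_handler(void *win_base, const void *packed, int packed_bytes,
                      MPI_Datatype target_type, int target_count)
  {
      int position = 0;
      /* A remotely-triggered unpack straight into window memory: a bad
         target_type here scribbles over the target's address space,
         which is exactly the failure mode alluded to above. */
      MPI_Unpack(packed, packed_bytes, &position,
                 win_base, target_count, target_type, MPI_COMM_WORLD);
  }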

One practical motivation for my thinking about this is the
non-blocking performance (or rather, the lack thereof) of Global
Arrays on BlueGene/P, due to the need to explicitly advance the DCMF
messenger for every contiguous segment, which cannot be done
asynchronously because of the lack of thread-spawning capability.  I
understand there may be similar issues on the Cray XT5 (message
injection limits in CNL?), but I don't know enough about the technical
details to elaborate.
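
To illustrate the BlueGene/P pattern: with no thread to make progress,
every strided transfer degenerates into something like the loop below,
where the origin must crank the messenger itself once per segment.
DCMF_Messager_advance() is the real DCMF progress call; put_segment()
is a stand-in whose arguments I am eliding:

  /* Sketch of the per-segment progress problem, not actual GA code. */
  for (int s = 0; s < num_segments; s++) {
      put_segment(s);              /* initiate one contiguous put */
      DCMF_Messager_advance();     /* nothing moves without this, so the
                                      "non-blocking" transfer blocks here */
  }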

Best,

Jeff

-- 
Jeff Hammond
Argonne Leadership Computing Facility
jhammond at mcs.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
http://home.uchicago.edu/~jhammond/


