[Mpi3-rma] MPI-3 RMA comments from GA team

Jeff Hammond jeff.science at gmail.com
Wed Feb 3 06:39:30 CST 2010

I posted the example code Bruce is referring to below on the RMA Wiki.



---------- Forwarded message ----------
From: Palmer, Bruce J <Bruce.Palmer at pnl.gov>
Date: Tue, Feb 2, 2010 at 11:37 AM
Subject: MPI-3
To: "jhammond at mcs.anl.gov" <jhammond at mcs.anl.gov>


I accidentally ditched your original message about this, but I wanted
to get back to you on your request for things that are difficult to do
using a message passing model. A couple of things came to mind:

1) global counters (these would tie into dynamic load-balancing
schemes, which would also be difficult to implement in MPI)
2) I think the scatter/gather/scatter-accumulate functionality in GA
would be tremendously difficult to reproduce in MPI (this is distinct
from the MPI collectives MPI_Scatter and MPI_Gather). This is fairly
complicated functionality in GA as well, but creating a comparable
capability in MPI seems like it would be extremely unpleasant. The
gather/scatter routines are useful for parallel computations on
unstructured grids, e.g. updating ghost cells and other kinds of
calculations on sparse or unstructured data.
3) We wrote a code in the global/testing directory called
sprsmatmult.F that does a sparse matrix-vector multiply using the
gather/scatter routines. I don't know whether the algorithm is
particularly optimal, but it is another example of something that
would be difficult to do outside a one-sided context.

Of course, all of the above assumes the existence of a global address
space, but supporting a GAS is much more straightforward using
one-sided communication than it would be with message passing. In
fact, I'm not really sure how you would get a GAS to work in a message
passing model without making most of your operations collective.


Jeff Hammond
Argonne Leadership Computing Facility
jhammond at mcs.anl.gov / (630) 252-5381
