[Mpi3-rma] MPI-3.1 consideration slides
Jeff Hammond
jhammond at alcf.anl.gov
Sat Dec 1 21:01:16 CST 2012
I propose the following additions. I will be at the Forum this week
and can discuss them in person with any interested parties. If you
want to poop on my ideas, doing so in person will probably go over
better.
(1) Define semantics of shared memory windows in separate model.
In Section 11.2.3 it says "MPI does not define semantics for accessing
shared memory windows in the separate memory model."
I would like to try to define these semantics in MPI 3.1. Shouldn't
it at least be possible to use MPI_WIN_(UN)LOCK with
MPI_LOCK_EXCLUSIVE here?
I believe that supporting RMA for weakly- or non- coherent memory
architectures is vital for future systems.
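To make the proposal concrete, here is a sketch of the usage I have
in mind -- exclusive lock/unlock driving RMA calls on a shared-memory
window under the separate model. This is hypothetical semantics (the
standard currently leaves it undefined); the point is that the
lock/unlock pair gives the implementation a place to synchronize the
public and private window copies even without cache coherence:

```c
#include <mpi.h>

/* Proposed (hypothetical) usage: exclusive-lock access epoch on a
 * window created with MPI_Win_allocate_shared, running under the
 * separate memory model. */
void put_value(MPI_Win win, int target, int val)
{
    /* The exclusive lock serializes access to the target's window,
     * so the implementation can reconcile public/private copies at
     * the epoch boundaries. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
    MPI_Put(&val, 1, MPI_INT, target, 0, 1, MPI_INT, win);
    MPI_Win_unlock(target, win);
}
```

Even this restricted case (RMA calls only, no direct load/store to a
remote rank's segment) would be useful on non-coherent hardware.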
(2) Add info keys for restricted usage of RMA ops.
Add info keys to windows to restrict usage to subsets of RMA
operations, in particular MPI_FETCH_AND_OP and/or
MPI_COMPARE_AND_SWAP. As noted in a recent discussion on this list
involving Hubert, Jim, and me, this has potential performance
benefits.
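For illustration, such a key might be set like this at window
allocation time -- the key name "which_ops" and its value syntax are
hypothetical, invented here only to show the shape of the proposal:

```c
#include <mpi.h>

/* Hypothetical info key restricting the window to a subset of RMA
 * operations, so the implementation can commit to e.g. a hardware
 * fetch-and-add path for the window's lifetime. */
void allocate_restricted(long **baseptr, MPI_Win *win)
{
    MPI_Info info;
    MPI_Info_create(&info);
    /* "which_ops" is NOT a standard key -- this is the proposal. */
    MPI_Info_set(info, "which_ops", "fetch_and_op,compare_and_swap");
    MPI_Win_allocate(sizeof(long), sizeof(long), info,
                     MPI_COMM_WORLD, baseptr, win);
    MPI_Info_free(&info);
}
```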
(3) Unify and add info keys across window types.
Unify info keys across different window creation mechanisms. As I
noted in my previous email, it is not clear if "accumulate_ordering,
accumulate_ops, same_size, alloc_shared_noncontig" are valid for
MPI_WIN_ALLOCATE and MPI_WIN_CREATE_DYNAMIC.
Furthermore, I would like the "same_size" info key to be defined for
MPI_WIN_CREATE as well since it may eliminate an unnecessary
collective (but not all of them, so I don't mind if people don't like
this suggestion).
Finally, I would like an info key for
"only_one_rank_gives_nonzero_size" (probably needs a better name),
since it means window creation/allocation needs only a bcast rather
than an allreduce or allgather. This usage pattern is common for
certain types of load-balancing counters (not necessarily globally
shared, although GA's NXTVAL is this way) and perhaps also for some
implementations of mutexes.
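A sketch of that counter pattern, assuming the proposed info key: only
rank 0 contributes memory, and everyone increments rank 0's counter
with MPI_FETCH_AND_OP. (The key name is a placeholder; here I pass
MPI_INFO_NULL since the key does not exist yet.)

```c
#include <mpi.h>

/* NXTVAL-style shared counter: only rank 0 gives nonzero size.
 * With an "only_one_rank_gives_nonzero_size" hint (hypothetical),
 * the implementation could replace its internal allgather of base
 * addresses/sizes with a single bcast from rank 0. */
long nxtval(MPI_Comm comm)
{
    int rank;
    long *base, next, one = 1;
    MPI_Win win;

    MPI_Comm_rank(comm, &rank);
    MPI_Win_allocate((rank == 0) ? sizeof(long) : 0, sizeof(long),
                     MPI_INFO_NULL, comm, &base, &win);
    if (rank == 0) *base = 0;   /* initialize before any access */
    MPI_Barrier(comm);

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Fetch_and_op(&one, &next, MPI_LONG, 0, 0, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    MPI_Win_free(&win);
    return next;
}
```

In real code the window would of course persist across many
increments rather than being created per call; this is compressed to
show where the creation-time collective cost comes from.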
Thanks,
Jeff
On Sat, Dec 1, 2012 at 8:21 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
> All,
>
> Please see the attached slides as consideration items for MPI-3.1.
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond