[Mpi3-rma] RMA proposal 2 initial text

Rajeev Thakur thakur at mcs.anl.gov
Wed Nov 3 17:40:19 CDT 2010


Thanks for putting this together. Some comments:

> 1. Communication ordering: communication from the same source to the same memory location is now ordered for all epochs (not something specific to lockfree). The user can relax it by setting the "unordered" info argument or a per-epoch MPI_MODE_UNORDERED assertion.

Making ordering the default is a big change and would need some thought, as well as reconciling with statements elsewhere in the chapter and in proposal 1 (I'm not sure where they all are).
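
For concreteness, here is roughly the guarantee being proposed as I read it (a sketch only; the accumulate signature below is assumed to mirror MPI_Accumulate plus a request argument, which may not match the draft):

    /* Two accumulates from the same origin to the same target location.
     * With ordering as the default, they are applied at the target in
     * program order, so the location ends up holding 2. */
    MPI_RMA_req req1, req2;
    int one = 1, two = 2;
    MPI_RMA_accumulate(&one, 1, MPI_INT, target_rank, disp, 1, MPI_INT,
                       MPI_REPLACE, win, &req1);
    MPI_RMA_accumulate(&two, 1, MPI_INT, target_rank, disp, 1, MPI_INT,
                       MPI_REPLACE, win, &req2);
    /* Relaxing this would be either the "unordered" info key on the
     * window (value assumed here) or the proposal's per-epoch
     * MPI_MODE_UNORDERED assertion. */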

The C prototypes of rma_put/get/accumulate need the * in MPI_RMA_req *rma_req.
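
Something along these lines is what I mean (only the pointer on the last argument is the point; the rest of the argument list is copied from MPI_Put and may not match the draft exactly):

    int MPI_RMA_put(void *origin_addr, int origin_count,
                    MPI_Datatype origin_datatype, int target_rank,
                    MPI_Aint target_disp, int target_count,
                    MPI_Datatype target_datatype, MPI_Win win,
                    MPI_RMA_req *rma_req);  /* pointer, as in MPI_Isend's MPI_Request * */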

The Fortran prototypes of the above functions are missing the RMA_REQ argument.

In the description of the above functions, where it says "returns a request handle that can be waited or tested on", it is worth adding "with the MPI_RMA_Test or MPI_RMA_Wait functions" to avoid anyone thinking of the regular MPI_Test and MPI_Wait.

Does MPI_RMA_Register_op require the same op to be passed on all processes? If so, it is worth mentioning.
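
If it does, usage would presumably have to look the same on every process, something like this (the signature is a guess modeled on MPI_Op_create; only the symmetry requirement is the point):

    /* Every process registers the same user function, so the resulting
     * handle refers to the same operation everywhere. */
    static void my_sum(void *in, void *inout, int *len, MPI_Datatype *dt)
    {
        int *a = (int *)in, *b = (int *)inout;
        for (int i = 0; i < *len; i++) b[i] += a[i];
    }

    MPI_Op op;
    MPI_RMA_Register_op(my_sum, 1 /* commutative */, &op);  /* assumed signature */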

Should the test/wait functions set the request to RMA_REQ_NULL, as the regular test/wait functions do? If so, they need to take an MPI_RMA_req *req argument.
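
That is, something analogous to MPI_Wait (a sketch; the exact argument list is whatever the draft says):

    int MPI_RMA_Wait(MPI_RMA_req *rma_req, MPI_Status *status);
    /* On return, *rma_req would be set to RMA_REQ_NULL, the same way
     * MPI_Wait sets its request to MPI_REQUEST_NULL. */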

Does an RMA_put need to be completed by an RMA_test/wait? What happens if the user just ends the epoch or calls a flush?

Rajeev



On Oct 27, 2010, at 9:41 PM, Pavan Balaji wrote:

> All,
> 
> I've attached a draft of proposal 2. Most of the changes that were originally planned for proposal 2 have been merged into proposal 1. I think we might be able to merge some more (or all) features into proposal 1 as well.
> 
> Here are the main changes:
> 
> 1. Communication ordering: communication from the same source to the same memory location is now ordered for all epochs (not something specific to lockfree). The user can relax it by setting the "unordered" info argument or a per-epoch MPI_MODE_UNORDERED assertion.
> 
> 2. New calls for per-operation local completions for buffer reuse. Appropriate test and wait calls are also added.
> 
> 3. Added a new MPI_WIN_LOCK_WAIT call that blocks until the lock is acquired, for use with shared memory regions that have load/store capabilities within a window.
> 
> I haven't added the MPI_WIN_ALLFLUSHALL call yet.
> 
> -- Pavan
> 
> -- 
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
> <one-side-2.pdf>




