[Mpi3-rma] MPI3 RMA Design Goals
tipparajuv at hotmail.com
Tue Sep 1 08:12:54 CDT 2009
Thanks, it helps to know that daxpy-like operations suffice for your use case. I hope to get more input on what operations people are already using with Accumulate.
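To make the discussion concrete, the daxpy-like accumulate being referred to applies dst[i] += alpha * src[i] at the target. This is only an illustration of the semantics, not a proposed MPI binding; the function name is invented here.

```c
#include <stddef.h>

/* Semantics of a daxpy-style accumulate as applied at the target:
 * dst[i] += alpha * src[i].  Illustrative sketch only. */
void acc_daxpy(double alpha, const double *src, double *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += alpha * src[i];
}
```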
Regarding using target_mem for both local and remote memory: first, let me say that the object has another purpose -- to encapsulate different address spaces (for example, your target may be an accelerator of some sort, and an address space might not mean much on that device).
For networks that do require registration, many on-the-fly registration strategies have been proposed; in this case, the MPI implementation keeps track of what has and has not been registered. Clearly, if the user were to register this memory (and indicate to the implementation that it has been registered), the implementation can optimize data transfers (this has been discussed in the literature too). Using an object such as origin_mem would alleviate this problem. However, requiring it for every operation may be an unnecessary programming burden. We can discuss the pros and cons of either approach in detail.
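The on-the-fly strategy mentioned above can be sketched as a simple registration cache: the implementation remembers which (address, length) ranges it has already registered and performs the expensive network registration only on a miss. Everything below is an illustrative sketch -- reg_network stands in for the real NIC registration call, and the fixed-size cache and names are assumptions, not part of any proposal.

```c
#include <stddef.h>
#include <stdbool.h>

#define CACHE_SLOTS 64

struct reg_entry { void *addr; size_t len; bool valid; };

struct reg_entry reg_cache[CACHE_SLOTS];
int reg_calls;   /* counts how many real registrations happened */

/* Placeholder for the actual (expensive) network registration call. */
static void reg_network(void *addr, size_t len)
{
    (void)addr; (void)len;
    reg_calls++;
}

/* Ensure [addr, addr+len) is registered.
 * Returns true on a cache hit (no new registration was needed). */
bool ensure_registered(void *addr, size_t len)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (reg_cache[i].valid && reg_cache[i].addr == addr &&
            reg_cache[i].len >= len)
            return true;                      /* hit: already registered */

    for (int i = 0; i < CACHE_SLOTS; i++)
        if (!reg_cache[i].valid) {
            reg_cache[i].addr  = addr;
            reg_cache[i].len   = len;
            reg_cache[i].valid = true;
            reg_network(addr, len);
            return false;                     /* miss: registered now */
        }

    reg_network(addr, len);                   /* cache full: register anyway */
    return false;
}
```

Repeated transfers from the same buffer then pay the registration cost only once, which is exactly the case where letting the user pre-register (via something like origin_mem) would remove even the first miss.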
I think what you are asking for is another version of these interfaces that takes memory_object+offset as input for both target and origin. The reason you are asking for this is that you believe implementations can then guarantee zero-copy data transfers (I don't think this can be guaranteed across many systems) and enforce the access restrictions you may impose on these memory objects. For that, though, the memory object would first need to encapsulate the access information as well. Correct me if I am missing your intent.
Let's discuss this further, and let us also not forget networks/systems that don't need memory to be registered for communication.
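A symmetric version of the interface, as requested, would name both sides by a memory object plus an offset, letting the implementation rely on both sides being pre-registered and letting it check access rights encapsulated in the object. The sketch below models only the bookkeeping (the memcpy stands in for the network put); every name here (rma_mem, rma_xfer_put, RMA_ACCESS_*) is invented for illustration and is not a proposed binding.

```c
#include <string.h>
#include <stddef.h>

enum rma_access { RMA_ACCESS_READ = 1, RMA_ACCESS_WRITE = 2 };

/* A memory object that encapsulates base, length, and access rights. */
struct rma_mem {
    void  *base;
    size_t len;
    int    access;
};

/* Put: copy n bytes from (origin + origin_off) to (target + target_off).
 * Returns 0 on success, -1 if a range overruns its object or the
 * access rights forbid the operation. */
int rma_xfer_put(const struct rma_mem *origin, size_t origin_off,
                 struct rma_mem *target, size_t target_off, size_t n)
{
    if (origin_off + n > origin->len || target_off + n > target->len)
        return -1;
    if (!(origin->access & RMA_ACCESS_READ) ||
        !(target->access & RMA_ACCESS_WRITE))
        return -1;
    memcpy((char *)target->base + target_off,
           (char *)origin->base + origin_off, n);  /* stands in for the network put */
    return 0;
}
```

Note how the multi-threading point from the quoted mail falls out naturally: marking a local object read-only causes any accumulate-style write through it to fail with an error instead of racing.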
Vinod Tipparaju ^ http://ft.ornl.gov/~vinod ^ 1-865-241-1802
> Date: Mon, 31 Aug 2009 18:01:08 -0500
> From: jeff.science at gmail.com
> To: mpi3-rma at lists.mpi-forum.org
> Subject: Re: [Mpi3-rma] MPI3 RMA Design Goals
> Regarding: "We are still investigating with the input from the user
> community to get an idea on if this is necessary or a smaller sub-set
> of Reduce Operations are sufficient
> to be used as OP TYPE values (e.g. RMA ACC SUM, RMA ACC PROD, RMA ACC
> LXOR, etc.)."
> What is supported by ARMCI - Put, Get, Acc with multiplicative factor
> ala AXPY or min/max-like operations - is sufficient for everything I
> can imagine doing with MPI RMA, but perhaps that is because my use of
> ARMCI has limited my imagination.
> Regarding "target_mem":
> It appears that there is a special (read: restricted?) interface for
> the remote memory segment but nothing locally? Will it be possible to
> implement no-copy one-sided on all platforms if the local access is
> done not on memory encapsulated by such a special structure but a
> plain pointer ("origin_addr")? The DCMF_Put call
> <http://dcmf.anl-external.org/docs/api/dcmf/group__PUT.html>, for
> example, requires the use of DCMF_Memregion on both the source and the
> receiver, so it would appear that either the local registration is
> implicit in the "xfer" call or the operation cannot be no-copy. If
> local registration is implicit, and this process is non-trivial, then
> the latency on "xfer" will be substantial since registered malloc can
> take a while on some platforms. It will probably be easier on
> implementers to use a more restrictive form of the "xfer" operation
> which uses "target_mem"-like restricted memory regions for both the
> local and remote processes. It may also be of practical value when
> RMA operations are used in a multi-threaded context since the
> programmer could impose restrictions upon the local memory region so
> that it was accessible by only one thread (e.g. during Acc RMA) or
> accessible by all threads (read-only, i.e. Put RMA), and unsafe usage
> would throw an error.
> I am not a computer scientist and apologize for any stupidity
> contained in this email.
> Jeff Hammond
> Argonne Leadership Computing Facility
> jhammond at mcs.anl.gov / (630) 252-5381
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org