[Mpi3-rma] FW: draft of a proposal for RMA interfaces

Jeff Hammond jeff.science at gmail.com
Mon Dec 8 13:55:16 CST 2008


I apologize in advance for potential noobiness...

The GA user manual has an extensive discussion of the coupling between
GA, ARMCI and MA
(http://www.emsl.pnl.gov/docs/global/um/init.html#ma).  If MPI3
one-sided communication replaces ARMCI as the messaging layer of GA,
what happens at the interface between these libraries?  In particular,
in what way would the sentence "shared memory is used to store global
array data and is allocated by the Global Arrays run-time system
called ARMCI" change if "ARMCI" changes to "MPI3"?

I imagine that it is not within the scope of MPI to start explicitly
managing local memory, but does that mean that if I happen to get
inside GA, I have to manually do everything that ARMCI_Malloc did
previously?
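
For concreteness, here is a minimal sketch of what I imagine the
GA-internal allocation could reduce to with the existing MPI-2
one-sided calls; the size and names are made up, and MPI_Alloc_mem
plus a collective MPI_Win_create is only my guess at what would stand
in for ARMCI_Malloc:

#include <mpi.h>

/* Hypothetical sketch, not GA or ARMCI API: roughly the role
   ARMCI_Malloc played, expressed with MPI-2 one-sided calls.
   The 1 MB size is made up for illustration. */
int main(int argc, char **argv)
{
    MPI_Aint size = 1 << 20;   /* per-process slice of global-array data */
    void *base;
    MPI_Win win;

    MPI_Init(&argc, &argv);

    /* local allocation of memory suitable for RMA ... */
    MPI_Alloc_mem(size, MPI_INFO_NULL, &base);

    /* ... exposed collectively, which is what ARMCI_Malloc did in one call */
    MPI_Win_create(base, size, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Put/Get/Accumulate epochs on win would go here */

    MPI_Win_free(&win);
    MPI_Free_mem(base);
    MPI_Finalize();
    return 0;
}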

Will it be possible for RMA to distinguish between segments that are
locally allocated as shared versus private?  Presumably it is much
harder to make RMA on shared segments thread-safe, so when one is
RMA-ing only between private segments, multithreaded performance
would improve if the more rigorous locking procedure needed for the
shared case could be turned off.
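
To make that concrete, the closest knob I can see in the current
interface is the MPI_MODE_NOCHECK assert on a passive-target lock.
A hedged sketch, with the window, target rank, and buffer assumed to
be set up elsewhere:

#include <mpi.h>

#define COUNT 128   /* example transfer size */

/* Sketch only: skip the heavier lock protocol when the caller knows
   the target segment is effectively private (no conflicting access). */
void put_to_private_segment(MPI_Win win, int target, double *buf)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target, MPI_MODE_NOCHECK, win);
    MPI_Put(buf, COUNT, MPI_DOUBLE, target, 0, COUNT, MPI_DOUBLE, win);
    MPI_Win_unlock(target, win);
}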

Thanks,

Jeff

On Mon, Dec 8, 2008 at 1:12 PM, Vinod tipparaju <tipparajuv at hotmail.com> wrote:
>
>
> As I am preparing the FAQ Rajeev suggested, please send me any more
> questions you want included.
> Thanks,
> Vinod.
>
> ________________________________
> From: thakur at mcs.anl.gov
> To: mpi3-rma at lists.mpi-forum.org
> Date: Mon, 8 Dec 2008 12:04:14 -0600
> Subject: [Mpi3-rma] FW: draft of a proposal for RMA interfaces
>
> Just resending this as a reminder. It would be good to have an FAQ that
> answers commonly asked questions.
>
> Rajeev
> ________________________________
> From: Rajeev Thakur [mailto:thakur at mcs.anl.gov]
> Sent: Sunday, October 19, 2008 6:43 PM
> To: 'MPI 3.0 Remote Memory Access working group'
> Subject: RE: [Mpi3-rma] draft of a proposal for RMA interfaces
>
> I think it would be good to add an FAQ section at the end, containing
> answers to questions that will be asked of any RMA proposal: how it
> handles non-cache-coherent systems, whether it meets the needs of
> PGAS/Global Arrays, support for heterogeneous systems, how the target
> knows of completion, how it interplays with the existing RMA spec, etc.
> It will make sure that the proposal addresses those issues, that we
> ourselves are clear about the answers, and that they are not repeatedly
> raised at each meeting.
> Rajeev
>
>> Richard Graham wrote:
>>
>>> Just to get discussion going again: talking with several folks, I
>>> have heard a number of concerns expressed about the proposal. I think
>>> it would be good if these (and others) could be raised on the list,
>>> so we can start the discussion. We can continue next week in Chicago,
>>> but Vinod will not be able to make that meeting, so an e-mail
>>> discussion will help.
>>>
>>> Here are the issues I have heard of so far:
>>> - May not work well on current h/w that is not cache coherent, as it
>>> requires a remote thread in this case. I believe this applies to the
>>> SX series of machines, but Jesper, please correct me if I am wrong
>>> here. What would be an alternative approach that could provide the
>>> expected performance on platforms that may require work on the remote
>>> end for RMA correctness, and work well on platforms that do require
>>> very specific remote cache management (or other actions) for
>>> correctness?
>>> - Concern about future high-end platforms, under the assumption that
>>> these will not be cache coherent (and will actually have caches; if
>>> they don't, this is not a concern), and that this proposal is
>>> therefore aimed at a short-lived technical capability.
>>> - What is missing?
>
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
>
>



-- 
Jeff Hammond
The University of Chicago
http://home.uchicago.edu/~jhammond/



