[Mpi3-rma] [EXTERNAL] Re: MPI-3.1 consideration slides

Hubert Ritzdorf Hubert.Ritzdorf at EMEA.NEC.COM
Tue Dec 4 13:58:15 CST 2012


Hi Brian,

following the discussion on the read and write memory barriers,
I see corresponding analogies on the SX system. Where the
MPI functions have to guarantee load/store ordering, I can
clear the caches or ensure that outstanding write sequences
have completed.

The rules for the separate memory model on page 455 of MPI 3.0
are fine for me (including the currently discussed changes).
What is important is that accumulates are performed by
MPI RMA functions, so that MPI can guarantee that the
operations are performed on the actual values.
(I'm afraid that users will use non-MPI functions
to perform accumulate operations on shared memory
windows; the standard is very hard to read in this section.)

Hubert
________________________________________
From: mpi3-rma-bounces at lists.mpi-forum.org [mpi3-rma-bounces at lists.mpi-forum.org] on behalf of Barrett, Brian W [bwbarre at sandia.gov]
Sent: Tuesday, December 04, 2012 6:36 PM
To: MPI 3.0 Remote Memory Access working group
Subject: Re: [Mpi3-rma] [EXTERNAL] Re:  MPI-3.1 consideration slides

Hubert -

I think one of the problems is that we're not experts in non-cache-coherent
systems at this level.  Can you speak to the issues you would see
in extending the shared memory windows to the separate model?  I'm not
against the extension; I think we largely didn't do it in MPI-3 because we
didn't feel we had the expertise to do it right.

Brian

On 12/4/12 2:19 AM, "Hubert Ritzdorf" <Hubert.Ritzdorf at emea.nec.com> wrote:

>Hi Jim,
>
>SX vector nodes are large shared memory systems. Therefore, I assume that
>users are quite interested in shared memory windows.
>
>Hubert
>________________________________________
>From: mpi3-rma-bounces at lists.mpi-forum.org
>[mpi3-rma-bounces at lists.mpi-forum.org] on behalf of Jim Dinan
>[dinan at mcs.anl.gov]
>Sent: Monday, December 03, 2012 5:44 PM
>To: mpi3-rma at lists.mpi-forum.org
>Subject: Re: [Mpi3-rma] MPI-3.1 consideration slides
>
>Hubert,
>
>Thanks for your comment -- in general, the RMA group is thinking about
>the vector systems.  Is there a desire from users to have shared memory
>windows defined for this platform, keeping in mind that the required
>synchronization will be harder to reason about than get/put?
>
>  ~Jim.
>
>On 12/3/12 10:24 AM, Hubert Ritzdorf wrote:
>> NEC SX (including the Earth Simulator) memory is not cache coherent, and
>> MPI-2 RMA has been working on these systems for many years.
>>
>> Hubert
>> ________________________________________
>> From: mpi3-rma-bounces at lists.mpi-forum.org
>>[mpi3-rma-bounces at lists.mpi-forum.org] on behalf of Jim Dinan
>>[dinan at mcs.anl.gov]
>> Sent: Monday, December 03, 2012 4:39 PM
>> To: mpi3-rma at lists.mpi-forum.org
>> Subject: Re: [Mpi3-rma] MPI-3.1 consideration slides
>>
>> On 12/2/12 7:44 AM, Jeff Hammond wrote:
>>>>> (1) Define semantics of shared memory windows in separate model.
>>>>>
>>>>> In Section 11.2.3 it says "MPI does not define semantics for
>>>>>accessing
>>>>> shared memory windows in the separate memory model."
>>>>>
>>>>> I would like to try to define these semantics in MPI 3.1.  Shouldn't
>>>>> it at least be possible to use MPI_WIN_(UN)LOCK with
>>>>> MPI_LOCK_EXCLUSIVE here?
>>>>>
>>>>> I believe that supporting RMA for weakly- or non- coherent memory
>>>>> architectures is vital for future systems.
>>>> I agree. We had this discussion and decided to postpone it because we
>>>> had other more pressing issues. Now may be the time. A discussion
>>>>would
>>>> be good.
>>>
>>> Yes, there was no urgency.  I'm glad the standard left this issue wide
>>> open instead of defining something restrictive.
>>
>> IMHO, defining shared memory windows in the separate model is not a
>> worthy use of the Forum's time.  This will be hard to get right, and
>> AFAIK no (reasonable) system requires it.  As we've currently left it,
>> if this functionality is needed, one can define the semantics as an MPI
>> extension.  I think this is a better option than defining a new model in
>> the absence of a platform that uses it.
>>
>>    ~Jim.
>> _______________________________________________
>> mpi3-rma mailing list
>> mpi3-rma at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
>>
>


--
  Brian W. Barrett
  Scalable System Software Group
  Sandia National Laboratories




