[mpiwg-rma] MPI RMA status summary
Jeff Hammond
jeff.science at gmail.com
Mon Sep 29 15:49:51 CDT 2014
On Mon, Sep 29, 2014 at 9:16 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> Only regarding the issues on #456 (shared memory synchronization):
>
>> For the ones requiring discussion, assign someone to organize a
>> position and discussion. We can schedule telecons to go over those
>> issues. The first item in the list is certainly in this class.
>
> Who can organize telecons on #456?
> Would it be possible to organize an RMA meeting at SC?
I will be there Monday through part of Thursday but am usually
triple-booked from 8 AM to midnight.
> The position expressed in the #456 solution is based on the idea
> that the MPI RMA synchronization routines should have the same
> outcome when RMA PUT and GET calls are substituted by stores and loads.
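To make the substitution concrete, here is how I read that position
(a sketch only: 'win' is a shared-memory window, 'val' is an int,
and 'remote' is the pointer to rank 1's segment obtained from
MPI_Win_shared_query):

    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
    #ifdef USE_RMA
      MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win); /* RMA put */
    #else
      remote[0] = val;                             /* direct store */
    #endif
    MPI_Win_unlock(1, win); /* #456: identical outcome either way  */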
>
> The outcome for the flush routines is still not defined.
This is interesting, because the standard actually contradicts
itself on whether Flush affects load-store. I find this incredibly
frustrating.
Page 450:
"Locally completes at the origin all outstanding RMA operations
initiated by the calling process to the target process specified by
rank on the specified window. For example, after this routine
completes, the user may reuse any buffers provided to put, get, or
accumulate operations."
I do not think "RMA operations" includes load-store.
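That reading covers the usual buffer-reuse idiom and nothing more.
For example (a sketch, assuming a passive-target epoch is already
open via MPI_Win_lock_all, and 'target' and 'win' are placeholders):

    int buf = 42;
    MPI_Put(&buf, 1, MPI_INT, target, 0, 1, MPI_INT, win);
    MPI_Win_flush(target, win); /* the put is locally complete */
    buf = 43;                   /* so the buffer may be reused */

No load or store to the window itself is involved anywhere in that
pattern.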
Page 410:
"The consistency of load/store accesses from/to the shared memory as
observed by the user program depends on the architecture. A consistent
view can be created in the unified memory model (see Section 11.4) by
utilizing the window synchronization functions (see Section 11.5) or
explicitly completing outstanding store accesses (e.g., by calling
MPI_WIN_FLUSH)."
Here it is unambiguously implied that MPI_WIN_FLUSH affects load-stores.
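Under the page-410 reading, the following would be legal (a sketch;
'shm_ptr' points into a shared-memory window and 'peer' is a
placeholder rank):

    shm_ptr[0] = 42;          /* plain store into the window      */
    MPI_Win_flush(peer, win); /* page 410 implies this completes
                                 the store; page 450 implies it is
                                 a no-op, since no RMA operation
                                 is outstanding                    */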
My preference is to fix the statement on page 410, since it is less
canonical than the one on page 450, and because I do not want a
memory barrier in every call to WIN_FLUSH.
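Users who want a memory barrier already have one in WIN_SYNC, so the
page-410 text could simply point there instead. The usual
unified-model handoff looks like this (a sketch; 'shm_ptr' and
'shm_comm' are assumed to come from MPI_Win_allocate_shared and the
communicator it was created over):

    /* writer process */
    shm_ptr[0] = 42;       /* plain store                   */
    MPI_Win_sync(win);     /* memory barrier for the window */
    MPI_Barrier(shm_comm); /* hand off to the reader        */

    /* reader process */
    MPI_Barrier(shm_comm);
    MPI_Win_sync(win);     /* memory barrier before loading */
    int v = shm_ptr[0];    /* guaranteed to observe 42      */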
Jeff
> I would prefer that someone from inside the RMA subgroup that
> proposed the changes for MPI-3.1 organize the discussion,
> rather than me being the organizer.
> I tried to bring all the input together, and I hope that #456 is
> now in a state where it is consistent with itself and with the
> expectations expressed by the group that published the EuroMPI
> paper on first usage of this shared memory interface.
>
> The ticket is (with the help of the recent C11 standardization)
> well on its way to also being consistent with compiler
> optimizations; in other words, the C standardization body has
> learned from the pthreads problems. Fortran is still an open
> question to me, i.e., I do not know the status; see
> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/456#comment:13
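For reference, the C11 mechanism in question is release/acquire
ordering. A sketch of the pattern (names are mine; 'flag' is an
_Atomic int and 'data' a plain int, both living in the shared
segment):

    #include <stdatomic.h>

    /* writer */
    data = 42;                                  /* payload store */
    atomic_store_explicit(&flag, 1, memory_order_release);

    /* reader */
    while (!atomic_load_explicit(&flag, memory_order_acquire))
        ;                              /* spin until flag is set */
    assert(data == 42); /* acquire pairs with release, so the
                           payload store is visible              */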
>
> Best regards
> Rolf
>
>
>
> ----- Original Message -----
>> From: "William Gropp" <wgropp at illinois.edu>
>> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> Sent: Thursday, September 25, 2014 4:19:14 PM
>> Subject: [mpiwg-rma] MPI RMA status summary
>>
>> I looked through all of the tickets and wrote a summary of the open
>> issues, which I’ve attached. I propose the following:
>>
>> Determine which of these issues can be resolved by email. A
>> significant number can probably be closed with no further action.
>>
>> For those requiring rework, determine if there is still interest in
>> them, and if not, close them as well.
>>
>> For the ones requiring discussion, assign someone to organize a
>> position and discussion. We can schedule telecons to go over those
>> issues. The first item in the list is certainly in this class.
>>
>> Comments?
>>
>> Bill
>>
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/