[mpiwg-rma] Bugs in RMA in current MPI standard - Re: Summary of MPI RMA WG meeting on Sept 21, 2018

Jeff Hammond jeff.science at gmail.com
Tue Sep 25 21:38:56 CDT 2018


On Sun, Sep 23, 2018 at 3:25 AM Rolf Rabenseifner via mpiwg-rma <mpiwg-rma at lists.mpi-forum.org> wrote:

> Dear Bill,
>
> 1. MPI-3.1, after MPI_WIN_UNLOCK_ALL, page 448 lines 1-4 read
>
>     Implementors may restrict the use of RMA communication that
>     is synchronized by lock calls to windows in memory allocated
>     by MPI_ALLOC_MEM (Section 8.2), MPI_WIN_ALLOCATE (Section 11.2.2),
>     or attached with MPI_WIN_ATTACH (Section 11.2.4).
>     Locks can be used portably only in such memory.
>
>    but should read
>
>     Implementors may restrict the use of RMA communication that
>     is synchronized by lock calls to windows in memory allocated
>     by MPI_ALLOC_MEM (Section 8.2), MPI_WIN_ALLOCATE (Section 11.2.2),
>       MPI_WIN_ALLOCATE_SHARED (Section 11.2.3),
>     or attached with MPI_WIN_ATTACH (Section 11.2.4).
>     Locks can be used portably only in such memory.
>
>
>
I am not certain that this change is consistent with the original intent,
but we should simply remove this text, because it no longer makes sense.
If MPI_Win_attach is sufficient to allow locks, then MPI_Win_create should
be there too, because one can implement MPI_Win_create in terms of
MPI_Win_create_dynamic + MPI_Win_attach, as sketched below.
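
A minimal sketch of that equivalence (the helper name is made up; callers
would additionally need to exchange absolute addresses, since dynamic
windows take addresses rather than offsets as target displacements):

    #include <mpi.h>

    /* Hypothetical helper: expose an existing buffer the way
       MPI_Win_create would, but via a dynamic window. */
    MPI_Win emulate_win_create(void *base, MPI_Aint size, MPI_Comm comm)
    {
        MPI_Win win;
        MPI_Win_create_dynamic(MPI_INFO_NULL, comm, &win);
        MPI_Win_attach(win, base, size);  /* expose the user buffer */
        /* Peers must still obtain the absolute address of base
           (via MPI_Get_address) before they can target this memory. */
        return win;
    }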


> 2. In MPI-3.1 Section 11.5.5 Assertions, the wording of the data
>    transfer assertions does not fit shared memory windows.
>    I expect the reason is clear: the window portion of any
>    process rank can be written or read by all processes
>    using direct language assignments or expressions, respectively,
>    instead of only RMA calls plus direct access to a process's
>    own window portion.
>
>    Therefore, I would add at the end of the following items
>     - MPI_WIN_POST: MPI_MODE_NOSTORE and MPI_MODE_NOPUT
>     - MPI_WIN_FENCE: all four assertions
>    the following sentence:
>
>      This assertion does not apply and is therefore ignored
>      in the case of a window created with MPI_WIN_ALLOCATE_SHARED.
>
>
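
For context, a minimal runnable sketch of the direct load/store access Rolf
describes, assuming a node-local communicator obtained with
MPI_Comm_split_type:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* restrict to ranks that can actually share memory */
        MPI_Comm node;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node);
        int rank;
        MPI_Comm_rank(node, &rank);

        double *mine;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                MPI_INFO_NULL, node, &mine, &win);

        /* obtain a direct pointer to rank 0's window portion */
        MPI_Aint size; int disp_unit; double *rank0;
        MPI_Win_shared_query(win, 0, &size, &disp_unit, &rank0);

        MPI_Win_fence(0, win);
        if (rank == 1)
            rank0[0] = 42.0;  /* plain language store, not MPI_Put, so an
                                 assertion such as MPI_MODE_NOPUT cannot
                                 describe it */
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Comm_free(&node);
        MPI_Finalize();
        return 0;
    }
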
I thought I created a ticket to make the window assertions more
consistent.  I don't know if it was migrated to GitHub or not.

Jeff


> Best regards
> Rolf
>
>
> ----- Original Message -----
> > From: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
> > To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
> > Cc: "wgropp" <wgropp at illinois.edu>
> > Sent: Friday, September 21, 2018 3:52:07 PM
> > Subject: [mpiwg-rma] Summary of MPI RMA WG meeting on Sept 21, 2018
>
> > MPI RMA Working Group Meeting
> > September 21, 2018
> > Barcelona, Spain
> >
> > (I couldn’t find the RMA wiki - did this never get migrated?)
> >
> > Here are my notes. For those present, please let me know if I missed
> > anything.
> >
> > - Interoperability of MPI shared memory with C11, C++11 language
> > semantics. Lead: Bill Gropp
> >
> > No document.
> >
> > Discussion. WG agreed that MPI should remain as consistent as possible
> > with the language standards as they go forward, noting that there are
> > still limitations in their descriptions of shared memory.
> >
> > - MPI Generalized atomics. Lead: Pavan Balaji
> >
> > PDF attached.
> >
> > Generally positive response to the text. A few suggestions were made:
> > - For accumulate_op: allow compare-and-swap (CAS) with another op (as an
> >   option).
> > - For which_accumulate_op: consider a comma-separated list of operators.
> > - For accumulate_max_bytes: there was a discussion about whether this
> >   should be a count or bytes, and how exactly to describe what the max
> >   means in terms of parameters to the accumulate operations.
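
For illustration only, a sketch of how those hints might be expressed as
info keys at window creation; the key names and value syntax are assumptions
drawn from the proposal vocabulary above, not standardized MPI:

    #include <mpi.h>

    /* Hypothetical info keys; not part of any MPI standard. */
    MPI_Win create_atomics_win(void *base, MPI_Aint size, MPI_Comm comm)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        /* comma-separated list of the ops the application will use */
        MPI_Info_set(info, "which_accumulate_op", "sum,cas");
        /* upper bound on accumulate size (bytes vs. count was left open) */
        MPI_Info_set(info, "accumulate_max_bytes", "8");
        MPI_Win win;
        MPI_Win_create(base, size, /*disp_unit=*/1, info, comm, &win);
        MPI_Info_free(&info);
        return win;
    }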
> >
> > - Neighborhood communication in RMA. Lead: Nathan Hjelm
> >
> > No document.
> >
> > Nathan presented the basic idea: use the topology attached to a
> > communicator to limit the permitted communication partners, and thus
> > simplify the implementation and reduce its memory requirements.
> >
> > The WG was interested, and on a straw vote of 6-0-2, asked for a more
> > detailed presentation (e.g., a PowerPoint, not yet a written standard
> > proposal).
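
As a sketch of the idea using only existing MPI calls (the helper name is
made up; today the topology carries no RMA meaning, but under the proposal
an implementation could restrict and optimize RMA on such a window to the
declared neighbors):

    #include <mpi.h>

    MPI_Win make_neighbor_win(MPI_Comm comm,
                              int indegree, const int sources[],
                              int outdegree, const int destinations[],
                              MPI_Aint size, void **base)
    {
        /* attach a static communication graph to the communicator */
        MPI_Comm gcomm;
        MPI_Dist_graph_create_adjacent(comm,
                                       indegree, sources, MPI_UNWEIGHTED,
                                       outdegree, destinations, MPI_UNWEIGHTED,
                                       MPI_INFO_NULL, /*reorder=*/0, &gcomm);
        /* create the window on the topology communicator */
        MPI_Win win;
        MPI_Win_allocate(size, /*disp_unit=*/1, MPI_INFO_NULL, gcomm,
                         base, &win);
        return win;
    }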
> >
> > - Nonblocking RMA synchronization. Lead: Pavan Balaji
> >
> > No document, but this might be absorbed by TonyS's proposal.
> >
> > The WG would like a top-level conceptual discussion of this in the
> > context of MPI.
> >
> > - RMA Notify. Leads: Jim and Torsten
> >
> > https://github.com/mpi-forum/mpi-issues/issues/59
> >
> > Some discussion. It was noted that the state of the art has advanced
> > since the working group last discussed this in detail. The WG, on a 7-0-1
> > straw vote, would like an update on the possible options here. It was
> > also noted that Bull is experimenting with different options on different
> > systems, and may be able to share the results in a few months.
> >
> > One question to consider is the pros and cons of using requests as the
> > notification mechanism; it was noted that one pragmatic, if not elegant,
> > solution might be to allow both a heavy (e.g., MPI_Request) and a
> > lightweight (e.g., memory variable) notification approach.
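
For reference, the "memory variable" flavor can already be hand-rolled with
MPI-3 passive-target calls. A minimal sketch, assuming the unified memory
model, an enclosing MPI_Win_lock_all epoch, and a window layout with a
payload area plus an integer flag at flag_disp (all names illustrative):

    #include <mpi.h>

    /* Origin side: deliver the payload, then atomically set a flag that
       the target polls.  The first flush orders payload before flag. */
    void put_with_notify(const double *payload, int count,
                         MPI_Aint payload_disp, MPI_Aint flag_disp,
                         int target, MPI_Win win)
    {
        const int one = 1;
        MPI_Put(payload, count, MPI_DOUBLE,
                target, payload_disp, count, MPI_DOUBLE, win);
        MPI_Win_flush(target, win);
        MPI_Accumulate(&one, 1, MPI_INT,
                       target, flag_disp, 1, MPI_INT, MPI_REPLACE, win);
        MPI_Win_flush(target, win);
    }

    /* Target side: poll its own flag, e.g. with MPI_Fetch_and_op using
       MPI_NO_OP, to stay within the RMA atomicity rules. */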
> >
> > - MPI_IN_PLACE semantics for collectives on shared memory. Lead: Pavan Balaji
> >
> > PDF attached.
> >
> > This was reviewed by the WG. When asked whether there was interest in a
> > document with a usage example and discussion of benefits, the WG voted no
> > in a straw poll with 0-4-2 (and 2 not voting).
> >
> > - Relax constraints on MPI_WIN_SHARED_QUERY. Lead: Jeff Hammond
> >
> > https://github.com/mpi-forum/mpi-issues/issues/23
> >
> > The WG found support for this and voted 5-1-2 for a full written proposal.
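
As a sketch of what the relaxation might enable (an assumption based on the
issue discussion, not agreed text): calling MPI_Win_shared_query on a window
not created with MPI_Win_allocate_shared, with a NULL base pointer
signalling that no direct access is possible:

    #include <mpi.h>
    #include <stddef.h>

    /* Hypothetical usage under relaxed rules; in MPI-3.1 this call is
       valid only for MPI_Win_allocate_shared windows. */
    void try_direct_access(MPI_Win win, int peer)
    {
        MPI_Aint size;
        int disp_unit;
        double *base = NULL;
        MPI_Win_shared_query(win, peer, &size, &disp_unit, &base);
        if (base != NULL) {
            /* peer's memory is load/store accessible */
        } else {
            /* fall back to MPI_Put / MPI_Get on this window */
        }
    }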
> >
> > - Add flush_thread synchronization calls. Lead: Nathan Hjelm
> >
> > No document.
> >
> > Nathan presented this, and the WG voted 7-0-2 for a more detailed (e.g.,
> > a PowerPoint) description.
> >
> > The WG would like to see the written proposals for generalized atomics
> > and shared_query in time for reading at the December MPI Forum meeting.
> >
> >
> > The WG adjourned after discussing these items.
> >
> > William Gropp
> > Director and Chief Scientist, NCSA
> > Thomas M. Siebel Chair in Computer Science
> > University of Illinois Urbana-Champaign
> >
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
>


-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/

