[mpiwg-rma] Bugs in RMA in current MPI standard - Re: Summary of MPI RMA WG meeting on Sept 21, 2018

William Gropp wgropp at illinois.edu
Tue Sep 25 07:10:18 CDT 2018


Ok, I looked at the instructions for creating an MPI issue, which seems to be the way to track this, but they say specifically:

> When an issue is ready to be presented to the full Forum, a representative of the working group should begin the process of presenting the issue according to the guidelines below:

(see https://github.com/mpi-forum/mpi-issues/wiki/How-to-file-an-MPI-Forum-issue-%28%22ticket%22%29 )

The issues Rolf raised below look close to this criterion, but are not yet there. So the next step appears to be a discussion in the working group to reach the point where (a) we have *all* of the fields for the issue filled in (see below) and (b) the working group agrees on the issue.

If we still had the wiki, I’d set up a wiki page for this. :(

Everyone, it will be easier to use email to discuss topics if we follow the rule of one item per email.

Bill

Here are the fields that need to be filled in:

# Problem

Describe the motivation of your proposal here.

# Proposal

Describe the ideas of the proposal.

# Changes to the Text

Describe the text changes here.

# Impact on Implementations

Describe changes that implementations will be required to make here.

# Impact on Users

Describe the changes that will impact users here.

# References

Insert any internal (other issues) or external (websites, papers, etc.) references here.
William Gropp
Director and Chief Scientist, NCSA
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign






> On Sep 23, 2018, at 5:25 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
> 
> Dear Bill,
> 
> 1. MPI-3.1, after MPI_WIN_UNLOCK_ALL, page 448 lines 1-4 read
> 
>    Implementors may restrict the use of RMA communication that 
>    is synchronized by lock calls to windows in memory allocated 
>    by MPI_ALLOC_MEM (Section 8.2), MPI_WIN_ALLOCATE (Section 11.2.2), 
>    or attached with MPI_WIN_ATTACH (Section 11.2.4). 
>    Locks can be used portably only in such memory.
> 
>   but should read
> 
>    Implementors may restrict the use of RMA communication that 
>    is synchronized by lock calls to windows in memory allocated 
>    by MPI_ALLOC_MEM (Section 8.2), MPI_WIN_ALLOCATE (Section 11.2.2), 
>      MPI_WIN_ALLOCATE_SHARED (Section 11.2.3),
>    or attached with MPI_WIN_ATTACH (Section 11.2.4). 
>    Locks can be used portably only in such memory.
> 
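>   As a minimal illustration (only a sketch of mine, not proposed standard
>   text): the following lock-synchronized code uses exactly the memory that
>   the current sentence omits, i.e. a window created with
>   MPI_WIN_ALLOCATE_SHARED. Under the current wording an implementation could
>   legally refuse lock synchronization here; under the corrected wording it
>   is portable.
>
>     #include <mpi.h>
>
>     int main(int argc, char **argv)
>     {
>         MPI_Win win;
>         int *base;
>         int rank, size, right;
>
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>         /* Window memory allocated by MPI itself; assumes all ranks run on
>            one shared-memory node (otherwise split the communicator with
>            MPI_COMM_TYPE_SHARED first). */
>         MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
>                                 MPI_COMM_WORLD, &base, &win);
>
>         /* Initialize the own window portion under an exclusive self-lock. */
>         MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win);
>         base[0] = -1;
>         MPI_Win_unlock(rank, win);
>         MPI_Barrier(MPI_COMM_WORLD);
>
>         /* Lock-synchronized (passive-target) RMA to the right neighbor. */
>         right = (rank + 1) % size;
>         MPI_Win_lock(MPI_LOCK_SHARED, right, 0, win);
>         MPI_Put(&rank, 1, MPI_INT, right, 0, 1, MPI_INT, win);
>         MPI_Win_unlock(right, win);
>
>         MPI_Win_free(&win);
>         MPI_Finalize();
>         return 0;
>     }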
> 
> 2. In MPI-3.1 Section 11.5.5 Assertions, the wording of the data 
>   transfer assertions does not fit shared memory windows.
>   I expect the reason is clear: the window portion of a process
>   can be written or read by all processes using direct language
>   assignments or expressions, respectively, instead of only via
>   RMA calls plus direct accesses restricted to a process's
>   own window portion.
> 
>   Therefore, I would add at the end of the following items
>    - MPI_WIN_POST: MPI_MODE_NOSTORE and MPI_MODE_NOPUT
>    - MPI_WIN_FENCE: at all four assertions
>   the following sentence:
> 
>     This assertion does not apply and is therefore ignored
>     in the case of a window created with MPI_WIN_ALLOCATE_SHARED.
> 
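>   Again only as a sketch of mine (not proposed standard text): with a
>   shared memory window, a process can store directly into another
>   process's window portion, obtained via MPI_WIN_SHARED_QUERY, without
>   any RMA call, which is exactly the kind of access the current
>   assertion wording does not describe:
>
>     #include <mpi.h>
>
>     int main(int argc, char **argv)
>     {
>         MPI_Win win;
>         MPI_Aint qsize;
>         int *mybase, *peerbase;
>         int rank, size, right, qdisp;
>
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>         /* Assumes all ranks run on one shared-memory node. */
>         MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
>                                 MPI_COMM_WORLD, &mybase, &win);
>         right = (rank + 1) % size;
>         MPI_Win_shared_query(win, right, &qsize, &qdisp, &peerbase);
>
>         MPI_Win_fence(0, win);
>         /* Direct store into the right neighbor's window portion: neither a
>            "local store" by the owner (the MPI_MODE_NOSTORE wording) nor an
>            RMA put (the MPI_MODE_NOPUT wording). */
>         peerbase[0] = rank;
>         MPI_Win_fence(0, win);
>
>         MPI_Win_free(&win);
>         MPI_Finalize();
>         return 0;
>     }
>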
> Best regards
> Rolf
> 
> 
> ----- Original Message -----
>> From: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> To: "MPI WG Remote Memory Access working group" <mpiwg-rma at lists.mpi-forum.org>
>> Cc: "wgropp" <wgropp at illinois.edu>
>> Sent: Friday, September 21, 2018 3:52:07 PM
>> Subject: [mpiwg-rma] Summary of MPI RMA WG meeting on Sept 21, 2018
> 
>> MPI RMA Working Group Meeting
>> September 21, 2018
>> Barcelona, Spain
>> 
>> (I couldn’t find the RMA wiki - did this never get migrated?)
>> 
>> Here are my notes. For those present, please let me know if I missed anything.
>> 
>> - Interoperability of MPI shared memory with C11, C++11 language semantics.
>> Lead: Bill Gropp
>> 
>> No document.
>> 
>> Discussion. WG agreed that MPI should remain as consistent as possible with the
>> language standards as they go forward, noting that there are still limitations
>> in their descriptions of shared memory.
>> 
>> - MPI Generalized atomics. Lead: Pavan Balaji
>> 
>> PDF attached.
>> 
>> Generally positive response to the text. A few suggestions were made:
>> - For accumulate_op: allow compare-and-swap (CAS) with another op (as an option).
>> - For which_accumulate_op: consider a comma-separated list of operators.
>> - For accumulate_max_bytes: there was a discussion about whether this should be
>>   a count or bytes, and how exactly to describe what the max means in terms of
>>   parameters to the accumulate operations.
>> 
>> - Neighborhood communication in RMA. Lead: Nathan Hjelm
>> 
>> No document.
>> 
>> Nathan presented the basic idea, which is to use the topology attached to a
>> communicator to limit the permitted communication partners, and thus simplify
>> the implementation and reduce its memory requirements.
>> 
>> The WG was interested, and on a straw vote of 6-0-2, asked for a more detailed
>> presentation (e.g., a Powerpoint, not yet a written standard proposal),
>> 
>> - Nonblocking RMA synchronization. Lead: Pavan Balaji
>> 
>> No document, but this might be absorbed by TonyS's proposal.
>> 
>> The WG would like a top-level conceptual discussion of this in the context of
>> MPI.
>> 
>> - RMA Notify. Leads: Jim and Torsten
>> 
>> https://github.com/mpi-forum/mpi-issues/issues/59
>> 
>> Some discussion. It was noted that the state of the art has advanced since the
>> working group last discussed this in detail. The WG, on a 7-0-1 straw vote, would
>> like an update on the possible options here. It was also noted that Bull is
>> experimenting with different options in different systems, and may be able to
>> share the results in a few months.
>> 
>> One question to consider is the pros and cons of using requests as the
>> notification mechanism; it was noted that one pragmatic if not elegant solution
>> might be to allow both a heavyweight (e.g., MPI_Request) and a lightweight
>> (e.g., memory variable) notification approach.
>> 
>> - MPI_IN_PLACE semantics for collectives on shared memory. Lead: Pavan Balaji
>> 
>> PDF attached.
>> 
>> This was reviewed by the WG. When asked whether there was interest in a document
>> with a usage example and discussion of benefits, the WG voted no in a straw
>> poll with 0-4-2 (and 2 not voting).
>> 
>> - Relax constraints on MPI_WIN_SHARED_QUERY. Lead: Jeff Hammond
>> 
>> https://github.com/mpi-forum/mpi-issues/issues/23
>> 
>> The WG found support for this and voted 5-1-2 for a full written proposal.
>> 
>> - Add flush_thread synchronization calls. Lead: Nathan Hjelm
>> 
>> No document.
>> 
>> Nathan presented this and the WG voted 7-0-2 for a more detailed (e.g., a
>> PowerPoint) description.
>> 
>> The WG would like to see the written proposals for generalized atomics and
>> shared_query in time for reading at the December MPI Forum meeting.
>> 
>> 
>> The WG adjourned after discussing these items.
>> 
>> William Gropp
>> Director and Chief Scientist, NCSA
>> Thomas M. Siebel Chair in Computer Science
>> University of Illinois Urbana-Champaign
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de .
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
