[mpiwg-rma] Summary of MPI RMA WG meeting on Sept 21, 2018
wgropp at illinois.edu
Fri Sep 21 08:52:07 CDT 2018
MPI RMA Working Group Meeting
September 21, 2018
(I couldn’t find the RMA wiki - did this never get migrated?)
Here are my notes. For those present, please let me know if I missed anything.
- Interoperability of MPI shared memory with C11, C++11 language semantics. Lead: Bill Gropp
Discussion. WG agreed that MPI should remain as consistent as possible with the language standards as they go forward, noting that there are still limitations in their descriptions of shared memory.
- MPI Generalized atomics. Lead: Pavan Balaji
Generally positive response to the text. A few suggestions were made:
For accumulate_op - Allow compare-and-swap (CAS) with another op (as an option)
For which_accumulate_op - Consider a comma-separated list of operators
For accumulate_max_bytes - There was a discussion about whether this should be a count or bytes, and how exactly to describe what the maximum means in terms of the parameters to the accumulate operations.
- Neighborhood communication in RMA. Lead: Nathan Hjelm
Nathan presented the basic idea, which is to use the topology attached to a communicator to limit the permitted communication partners, thereby simplifying the implementation and reducing its memory requirements.
The WG was interested and, on a straw vote of 6-0-2, asked for a more detailed presentation (e.g., PowerPoint slides, not yet a written standard proposal).
- Nonblocking RMA synchronization. Lead: Pavan Balaji
No document, but this might be absorbed by TonyS's proposal.
The WG would like a top-level conceptual discussion of this in the context of MPI.
- RMA Notify. Leads: Jim and Torsten
Some discussion. It was noted that the state of the art has advanced since the working group last discussed this in detail. On a 7-0-1 straw vote, the WG asked for an update on the possible options here. It was also noted that Bull is experimenting with different options on different systems and may be able to share the results in a few months.
One question to consider is the pros and cons of using requests as the notification mechanism; it was noted that one pragmatic, if not elegant, solution might be to allow both a heavyweight (e.g., MPI_Request) and a lightweight (e.g., a memory variable) notification approach.
- MPI_IN_PLACE semantics for collectives on shared memory. Lead: Pavan Balaji
This was reviewed by the WG. When asked whether there was interest in a document with a usage example and a discussion of benefits, the WG voted no in a 0-4-2 straw poll (with 2 not voting).
- Relax constraints on MPI_WIN_SHARED_QUERY. Lead: Jeff Hammond
The WG found support for this and voted 5-1-2 for a full written proposal.
- Add flush_thread synchronization calls. Lead: Nathan Hjelm
Nathan presented this, and the WG voted 7-0-2 for a more detailed (e.g., PowerPoint) description.
The WG would like to see the written proposals for generalized atomics and shared_query in time for reading at the December MPI Forum meeting.
The WG adjourned after discussing these items.
Director and Chief Scientist, NCSA
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign