Ok, I looked at the instructions for creating an MPI issue, which seems to be the way to track this, but that says specifically:

    When an issue is ready to be presented to the full Forum, a
    representative of the working group should begin the process of
    presenting the issue according to the guidelines below:

(see https://github.com/mpi-forum/mpi-issues/wiki/How-to-file-an-MPI-Forum-issue-%28%22ticket%22%29 )

The issues Rolf raised below look like they are close to meeting these criteria, but are not yet there. So the next step appears to be to have a discussion in the working group to reach the point where (a) we have *all* of the fields for the issue (see below) and (b) the working group agrees on the issue.

If we still had the wiki, I'd set up a wiki page for this. :(

Everyone, it will be easier to use email to discuss topics if we follow the rule of one item per email.

Bill

Here are the fields that need to be filled in:

# Problem

Describe the motivation of your proposal here.

# Proposal

Describe the ideas of the proposal.

# Changes to the Text

Describe the text changes here.

# Impact on Implementations

Describe the changes that implementations will be required to make here.

# Impact on Users

Describe the changes that will impact users here.

# References

Insert any internal (other issues) or external (websites, papers, etc.) references here.
<div style="color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div style="color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div style="color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div style="color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; font-style: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px;">William Gropp<br class="">Director and Chief Scientist, NCSA<br class="">Thomas M. Siebel Chair in Computer Science<br class="">University of Illinois Urbana-Champaign</div><br class="Apple-interchange-newline"></div></div><br class="Apple-interchange-newline"></div><br class="Apple-interchange-newline"></div><br class="Apple-interchange-newline"><br class="Apple-interchange-newline">
</div>
<br class=""><div><blockquote type="cite" class=""><div class="">On Sep 23, 2018, at 5:25 AM, Rolf Rabenseifner <<a href="mailto:rabenseifner@hlrs.de" class="">rabenseifner@hlrs.de</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">Dear Bill,<br class=""><br class="">1. MPI-3.1 after MPI_Win_UNLOCK_ALL, page 448 lines 1-4 read<br class=""><br class=""> Implementors may restrict the use of RMA communication that <br class=""> is synchronized by lock calls to windows in memory allocated <br class=""> by MPI_ALLOC_MEM (Section 8.2), MPI_WIN_ALLOCATE (Section 11.2.2), <br class=""> or attached with MPI_WIN_ATTACH (Section 11.2.4). <br class=""> Locks can be used portably only in such memory.<br class=""><br class=""> but should read<br class=""><br class=""> Implementors may restrict the use of RMA communication that <br class=""> is synchronized by lock calls to windows in memory allocated <br class=""> by MPI_ALLOC_MEM (Section 8.2), MPI_WIN_ALLOCATE (Section 11.2.2), <br class=""> MPI_WIN_ALLOCATE_SHARED (Section 11.2.3),<br class=""> or attached with MPI_WIN_ATTACH (Section 11.2.4). <br class=""> Locks can be used portably only in such memory.<br class=""><br class=""><br class="">2. In MPI-3.1 Section 11.5.5 Assertions, the wording of data <br class=""> transfer assertions does not fit to shared memory windows.<br class=""> I exxpect the fact is clear because the window portions <br class=""> of process rank a can be written or read by all process<br class=""> unsing direct language assignments or expressions, resprectively,<br class=""> instead of only using RMA calls and direct accesses only to<br class=""> the own window Portion.<br class=""><br class=""> Therefore, I would add at the end of the following items<br class=""> - MPI_WIN_POST: MPI_MODE_NOSTORE and MPI_MODE_NOPUT<br class=""> - MPI_WIN_FENCE: at all four assertions<br class=""> the following sentence:<br class=""><br class=""> This assertion does not apply and is therefore ignored<br class=""> in the case of a window created with MPI_WIN_ALLOCATE_SHARED.<br class=""><br class="">Best regards<br class="">Rolf<br class=""><br class=""><br class="">----- Original Message -----<br class=""><blockquote type="cite" class="">From: "MPI WG Remote Memory Access working group" <<a href="mailto:mpiwg-rma@lists.mpi-forum.org" class="">mpiwg-rma@lists.mpi-forum.org</a>><br class="">To: "MPI WG Remote Memory Access working group" <<a href="mailto:mpiwg-rma@lists.mpi-forum.org" class="">mpiwg-rma@lists.mpi-forum.org</a>><br class="">Cc: "wgropp" <<a href="mailto:wgropp@illinois.edu" class="">wgropp@illinois.edu</a>><br class="">Sent: Friday, September 21, 2018 3:52:07 PM<br class="">Subject: [mpiwg-rma] Summary of MPI RMA WG meeting on Sept 21, 2018<br class=""></blockquote><br class=""><blockquote type="cite" class="">MPI RMA Working Group Meeting<br class="">September 21, 2018<br class="">Barcelona, Spain<br class=""><br class="">(I couldn’t find the RMA wiki - did this never get migrated?)<br class=""><br class="">Here are my notes. For those present, please let me know if I missed anything.<br class=""><br class="">- Interoperability of MPI shared memory with C11, C++11 language semantics.<br class="">Lead: Bill Gropp<br class=""><br class="">No document.<br class=""><br class="">Discussion. 
2. In MPI-3.1, Section 11.5.5 (Assertions), the wording of the data
   transfer assertions does not fit shared memory windows. I expect
   the fact is clear, because the window portion of a process with
   rank a can be written or read by all processes using direct
   language assignments or expressions, respectively, instead of
   only RMA calls with direct accesses restricted to the process's
   own window portion.

   Therefore, I would add at the end of the following items
   - MPI_WIN_POST: MPI_MODE_NOSTORE and MPI_MODE_NOPUT
   - MPI_WIN_FENCE: all four assertions
   the following sentence:

     This assertion does not apply and is therefore ignored
     in the case of a window created with MPI_WIN_ALLOCATE_SHARED.
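   A minimal sketch of the kind of access meant here, a direct store
   into another process's window portion obtained with
   MPI_WIN_SHARED_QUERY, which assertions such as MPI_MODE_NOSTORE or
   MPI_MODE_NOPUT cannot describe (same shared-memory assumption as
   above; error handling omitted):

   #include <mpi.h>

   int main(int argc, char **argv)
   {
       MPI_Win win;
       double *mybase, *rank0;
       MPI_Aint size;
       int disp_unit, rank, nprocs;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

       MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                               MPI_INFO_NULL, MPI_COMM_WORLD,
                               &mybase, &win);

       /* Direct pointer into the window portion owned by rank 0. */
       MPI_Win_shared_query(win, 0, &size, &disp_unit, &rank0);

       MPI_Win_fence(0, win);
       if (rank == nprocs - 1)
           rank0[0] = 42.0;    /* plain language store, no RMA call */
       MPI_Win_fence(0, win);  /* no assertion passed: NOSTORE/NOPUT
                                  could not describe the store above */

       MPI_Win_free(&win);
       MPI_Finalize();
       return 0;
   }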
Best regards
Rolf

----- Original Message -----
From: "MPI WG Remote Memory Access working group" <mpiwg-rma@lists.mpi-forum.org>
To: "MPI WG Remote Memory Access working group" <mpiwg-rma@lists.mpi-forum.org>
Cc: "wgropp" <wgropp@illinois.edu>
Sent: Friday, September 21, 2018 3:52:07 PM
Subject: [mpiwg-rma] Summary of MPI RMA WG meeting on Sept 21, 2018

MPI RMA Working Group Meeting
September 21, 2018
Barcelona, Spain

(I couldn't find the RMA wiki - did this never get migrated?)

Here are my notes. For those present, please let me know if I missed anything.

- Interoperability of MPI shared memory with C11, C++11 language semantics. Lead: Bill Gropp

No document.

Discussion. The WG agreed that MPI should remain as consistent as possible with the language standards as they go forward, noting that there are still limitations in their descriptions of shared memory.

- MPI Generalized atomics. Lead: Pavan Balaji

PDF attached.

Generally positive response to the text. A few suggestions were made:
- For accumulate_op: allow compare-and-swap (CAS) with another op (as an option).
- For which_accumulate_op: consider a comma-separated list of operators.
- For accumulate_max_bytes: there was a discussion about whether this should be a count or bytes, and how exactly to describe what the maximum means in terms of the parameters to the accumulate operations.

- Neighborhood communication in RMA. Lead: Nathan Hjelm

No document.

Nathan presented the basic idea, which is to use the topology attached to a communicator to limit the permitted communication partners, and thus to simplify the implementation and reduce its memory requirements.

The WG was interested and, on a straw vote of 6-0-2, asked for a more detailed presentation (e.g., a PowerPoint, not yet a written standard proposal).

- Nonblocking RMA synchronization. Lead: Pavan Balaji

No document, but this might be absorbed by TonyS's proposal.

The WG would like a top-level conceptual discussion of this in the context of MPI.

- RMA Notify. Leads: Jim and Torsten

https://github.com/mpi-forum/mpi-issues/issues/59

Some discussion. It was noted that the state of the art has advanced since the working group last discussed this in detail. The WG, on a 7-0-1 straw vote, would like an update on the possible options here. It was also noted that Bull is experimenting with different options on different systems and may be able to share the results in a few months.

One question to consider is the pros and cons of using requests as the notification mechanism; it was noted that one pragmatic, if not elegant, solution might be to allow both a heavyweight (e.g., MPI_Request) and a lightweight (e.g., a memory variable) notification approach.

- MPI_IN_PLACE semantics for collectives on shared memory. Lead: Pavan Balaji

PDF attached.

This was reviewed by the WG. When asked whether there was interest in a document with a usage example and a discussion of benefits, the WG voted no in a straw poll, 0-4-2 (with 2 not voting).

- Relax constraints on MPI_WIN_SHARED_QUERY. Lead: Jeff Hammond

https://github.com/mpi-forum/mpi-issues/issues/23

The WG found support for this and voted 5-1-2 for a full written proposal.

- Add flush_thread synchronization calls. Lead: Nathan Hjelm

No document.

Nathan presented this, and the WG voted 7-0-2 for a more detailed (e.g., a PowerPoint) description.

The WG would like to see the written proposals for generalized atomics and shared_query in time for a reading at the December MPI Forum meeting.

The WG adjourned after discussing these items.

William Gropp
Director and Chief Scientist, NCSA
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign

_______________________________________________
mpiwg-rma mailing list
mpiwg-rma@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-rma

--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de .
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .