<font face="Default Sans Serif,Verdana,Arial,Helvetica,sans-serif" size="2">Touching on the GPU issue is the GPU Direct Async feature. Nothing in the MPI standard supports the Async concept on an accelerator. Any wait/flush or equivalent call is assumed to execute on the CPU, which defeats the purpose of Async on the GPU. Implementing the feature on top of any of the current MPI APIs would either introduce limitations that negate its benefit or break the standard. <div><br></div><div>I wonder if we could start a separate email chain to discuss/brainstorm possible ideas?</div><div><br></div><div>Thanks</div><div>Sameh</div><div><br><br><font color="#990099">-----"mpiwg-rma" <<a href="mailto:mpiwg-rma-bounces@lists.mpi-forum.org" target="_blank">mpiwg-rma-bounces@lists.mpi-forum.org</a>> wrote: -----</font><div class="iNotesHistory" style="padding-left:5px;"><div style="padding-right:0px;padding-left:5px;border-left:solid black 2px;">To: <a href="mailto:mark.hoemmen@gmail.com" target="_blank">mark.hoemmen@gmail.com</a>, MPI WG Remote Memory Access working group <<a href="mailto:mpiwg-rma@lists.mpi-forum.org" target="_blank">mpiwg-rma@lists.mpi-forum.org</a>><br>From: Jeff Hammond via mpiwg-rma &lt;mpiwg-rma@lists.mpi-forum.org&gt;<br>Sent by: "mpiwg-rma" &lt;mpiwg-rma-bounces@lists.mpi-forum.org&gt;<br>Date: 09/12/2018 12:38AM<br>Cc: Jeff Hammond <<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>>, Torsten Hoefler <<a href="mailto:htor@inf.ethz.ch" target="_blank">htor@inf.ethz.ch</a>>, James Dinan <<a href="mailto:james.dinan@intel.com" target="_blank">james.dinan@intel.com</a>>, Jeff Hammond <<a href="mailto:jeff.r.hammond@intel.com" target="_blank">jeff.r.hammond@intel.com</a>>, Nathan Hjelm <<a href="mailto:hjelmn@lanl.gov" target="_blank">hjelmn@lanl.gov</a>><br>Subject: Re: [mpiwg-rma] RMA WG Telecon<br><br>Noncoherent shared memory was discussed in the MPI-3 days, but we had no hardware to motivate it besides the Intel SCC, and that was not widely available. <div><br></div><div>Now that things like NVIDIA UM exist, there is a nontrivial hardware platform to think about if we try to define such features. However, I’m not aware of NVIDIA publishing a formal memory model for CUDA or any generation of hardware. Furthermore, one cannot - AFAIK - run a real MPI process on a GPU, so one would only be able to reason about noncoherent window memory living on GPU(s) but accessed only by CPUs. That’s much less interesting. </div><div><br></div><div>Anyway, Jeff, who works for Intel, defers to literally anybody else who lays claim to superior understanding of NVIDIA stuff. </div><div><br></div><div>Another angle would be to try to align MPI shared memory with the OpenCL 2.2 SVM variants. I know exactly who to talk to about that topic.</div><div><br></div><div>Jeff<br><br><div id="AppleMailSignature">Sent from my iPhone</div><div><br>On Sep 11, 2018, at 10:26 PM, Mark Hoemmen via mpiwg-rma <<a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a>> wrote:<br><br></div><blockquote type="cite"><div><div><div dir="auto">I’m sorry to have missed this — I’m very much interested in C++ binding and interoperability issues. I know Jeff at least is involved in our weekly C++ heterogeneity phone meetings — we’re trying to move the Standard to support multiple memory spaces in a sane way, and I think it helps to have RDMA represented there as well. We’ve also thought a bit about wrapping MPI RDMA in Kokkos, and about making the mdspan proposal flexible enough that we could perhaps support RDMA with it as well. 
The most interesting bit might actually be introducing semantics for noncoherent shared memory into the Standard — I think we want to go there eventually, but it is really hard, and I think y’all are well equipped to help. </div></div><div dir="auto"><br></div><div dir="auto">Best, </div><div dir="auto">Mark Hoemmen</div><div dir="auto"><br></div><div><div class="gmail_quote"><div dir="ltr">On Mon, Sep 10, 2018 at 1:21 PM Balaji, Pavan via mpiwg-rma <<a href="mailto:mpiwg-rma@lists.mpi-forum.org">mpiwg-rma@lists.mpi-forum.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">All,<br><br>We are restarting the RMA WG telecons. A subset of the following proposals will be discussed during this call.<br><br>- Interoperability of MPI shared memory with C11, C++11 language semantics. Lead: Bill Gropp<br>- MPI Generalized atomics. Lead: Pavan Balaji<br>- Neighborhood communication in RMA. Lead: Nathan Hjelm<br>- Nonblocking RMA synchronization. Lead: Pavan Balaji<br>- RMA Notify. Leads: Jim and Torsten<br>- MPI_IN_PLACE semantics for collectives on shared memory. Lead: Pavan Balaji<br>- Relax constraints on MPI_WIN_SHARED_QUERY. Lead: Jeff Hammond<br>- &lt;Unnamed proposal&gt;. 
Lead: Nathan Hjelm<br>_______________________________________________<br>mpiwg-rma mailing list<br><a href="mailto:mpiwg-rma@lists.mpi-forum.org" target="_blank">mpiwg-rma@lists.mpi-forum.org</a><br><a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-rma" rel="noreferrer" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-rma</a><br></blockquote></div></div></div></blockquote></div></div></div></div></font><BR>