[mpiwg-rma] RMA WG Telecon

khaled hamidouche khaledhamidouche at gmail.com
Wed Sep 12 12:58:57 CDT 2018


Yes, GPU Direct Async (GDS) breaks the semantics of MPI: it offloads the
task to the GPU and follows stream semantics rather than request
semantics, so all MPI calls are non-blocking from the CPU's perspective.
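For concreteness, a rough sketch of the two models. The enqueue-style call below is purely hypothetical (the name MPIX_Isend_enqueue and its stream argument are made up for illustration and are not part of any MPI standard or implementation):

```cuda
/* Request semantics (standard MPI): completion is observed on the
 * host -- the CPU must eventually call MPI_Wait, which blocks. */
MPI_Request req;
MPI_Isend(buf, n, MPI_FLOAT, dst, tag, comm, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);          /* CPU blocks here */

/* Stream semantics (GDS-style, HYPOTHETICAL API): the send is
 * enqueued on a CUDA stream, ordered with the kernels around it,
 * and the CPU never blocks -- there is no request to wait on. */
produce_data<<<grid, block, 0, stream>>>(buf);
MPIX_Isend_enqueue(buf, n, MPI_FLOAT, dst, tag, comm, stream);
consume_next<<<grid, block, 0, stream>>>(buf);
```

This is exactly the mismatch discussed below: any standard wait/flush call pulls completion back onto the CPU, while the stream model keeps the whole sequence on the device.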

Thanks

On Wed, Sep 12, 2018 at 11:53 AM Sameh S Sharkawi via mpiwg-rma <
mpiwg-rma at lists.mpi-forum.org> wrote:

> Touching on the GPU issue is the GPU Direct Async feature. Nothing in the
> MPI standard supports the async concept on an accelerator. Any wait/flush
> or equivalent call would always assume completion happens on the CPU,
> which defeats the purpose of async on the GPU. Implementing the feature on
> top of any of the current MPI APIs would either introduce limitations that
> negate its benefit or would break the standard.
>
> I wonder if we could start a separate email chain to discuss/brainstorm
> possible ideas?
>
> Thanks
> Sameh
>
>
> -----"mpiwg-rma" <mpiwg-rma-bounces at lists.mpi-forum.org> wrote: -----
> To: mark.hoemmen at gmail.com, MPI WG Remote Memory Access working group <
> mpiwg-rma at lists.mpi-forum.org>
> From: Jeff Hammond via mpiwg-rma
> Sent by: "mpiwg-rma"
> Date: 09/12/2018 12:38AM
> Cc: Jeff Hammond <jeff.science at gmail.com>, Torsten Hoefler <
> htor at inf.ethz.ch>, James Dinan <james.dinan at intel.com>, Jeff Hammond <
> jeff.r.hammond at intel.com>, Nathan Hjelm <hjelmn at lanl.gov>
> Subject: Re: [mpiwg-rma] RMA WG Telecon
>
> Noncoherent shared memory was discussed in the MPI-3 days, but we had no
> hardware to motivate it besides the Intel SCC, and that was not widely
> available.
>
> Now that things like NVIDIA UM exist, there is a nontrivial hardware
> platform to think about if we try to define such features. However, I’m
> not aware of NVIDIA publishing a formal memory model for CUDA or any
> generation of hardware. Furthermore, one cannot - AFAIK - run a real MPI
> process on a GPU, so one would only be able to reason about noncoherent
> window memory living on GPU(s) but accessed only by CPUs. That’s much less
> interesting.
>
> Anyway, Jeff, who works for Intel, defers to literally anybody else who
> lays claim to a superior understanding of NVIDIA stuff.
>
> Another angle would be to try to align MPI shared memory with OpenCL 2.2
> SVM variants. I know exactly who to talk to about that topic.
>
> Jeff
>
> Sent from my iPhone
>
> On Sep 11, 2018, at 10:26 PM, Mark Hoemmen via mpiwg-rma <
> mpiwg-rma at lists.mpi-forum.org> wrote:
>
> I’m sorry to have missed this — I’m very much interested in C++ binding
> and interoperability issues.  I know Jeff at least is involved in our
> weekly C++ heterogeneity phone meetings — we’re trying to move the Standard
> to support multiple memory spaces in a sane way, and I think it helps to
> have RDMA represented there as well.  We’ve also thought a bit about
> wrapping MPI RDMA in Kokkos, and about making the mdspan proposal flexible
> enough that we could perhaps support RDMA with it as well. The most
> interesting bit might be actually introducing semantics for noncoherent
> shared memory into the Standard — I think we want to go there eventually
> but it is really hard, and I think y’all are well equipped to help.
>
> Best,
> Mark Hoemmen
>
> On Mon, Sep 10, 2018 at 1:21 PM Balaji, Pavan via mpiwg-rma <
> mpiwg-rma at lists.mpi-forum.org> wrote:
>
>> All,
>>
>> We are restarting the RMA WG telecons.  A subset of the following
>> proposals will be discussed during this call.
>>
>> - Interoperability of MPI shared memory with C11, C++11 language
>> semantics.  Lead: Bill Gropp
>> - MPI Generalized atomics.  Lead: Pavan Balaji
>> - Neighborhood communication in RMA.  Lead: Nathan Hjelm
>> - Nonblocking RMA synchronization.  Lead: Pavan Balaji
>> - RMA Notify.  Leads: Jim and Torsten
>> - MPI_IN_PLACE semantics for collectives on shared memory.  Lead: Pavan
>> Balaji
>> - Relax constraints on MPI_WIN_SHARED_QUERY.  Lead: Jeff Hammond
>> - <Unnamed proposal>.  Lead: Nathan Hjelm
>> _______________________________________________
>> mpiwg-rma mailing list
>> mpiwg-rma at lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-rma
>>


-- 
 K.H