[mpiwg-hybridpm] Call for Topics
Choi, Jaemin
jchoi157 at illinois.edu
Tue Feb 9 21:23:52 CST 2021
I think an example is when you want to launch a GPU kernel that depends on data that is still arriving.
Currently the host code has to wait until MPI_Recv completes before it can launch the kernel; this could be avoided if MPI supported CUDA streams.
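To make the pattern concrete, here is a rough sketch (not from the original mail; the buffer and kernel names are only illustrative) of the host-side dependency I am describing, assuming a GPU-aware MPI so the receive targets device memory:

    /* Sketch only: the dependent kernel cannot be enqueued until the
     * blocking MPI_Recv has returned on the host. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    __global__ void process(double *buf, int n) {   /* illustrative consumer kernel */
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 2.0;                    /* placeholder computation */
    }

    void recv_then_launch(double *d_buf, int n, int src, cudaStream_t s) {
        /* Host blocks here until the message has fully arrived in d_buf. */
        MPI_Recv(d_buf, n, MPI_DOUBLE, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Only after this host-side wait can the kernel be enqueued.
         * If MPI could enqueue the receive on stream s, the wait would
         * disappear and the launch could be issued immediately. */
        process<<<(n + 255) / 256, 256, 0, s>>>(d_buf, n);
    }

If the receive itself could be placed on stream s, the kernel launch would simply be ordered after it by the stream and the host would not have to block at all.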
Jaemin Choi
PhD Candidate in Computer Science
Research Assistant at the Parallel Programming Laboratory
University of Illinois Urbana-Champaign
From: Zhang, Junchao <jczhang at mcs.anl.gov>
Sent: Tuesday, February 9, 2021 9:09 PM
To: Hybrid working group mailing list <mpiwg-hybridpm at lists.mpi-forum.org>
Cc: Jim Dinan <james.dinan at gmail.com>; Choi, Jaemin <jchoi157 at illinois.edu>
Subject: Re: [mpiwg-hybridpm] Call for Topics
Is the host-side synchronization after communication needed? I thought not.
Thanks
--Junchao Zhang
On Feb 9, 2021, at 8:19 PM, Choi, Jaemin via mpiwg-hybridpm <mpiwg-hybridpm at lists.mpi-forum.org> wrote:
Thanks, Jim, I must have missed the discussions from the last couple of meetings.
We’ve recently implemented GPU-aware communication in Charm++ and Adaptive MPI using UCX, and have just started to look into how we can avoid the host-side synchronizations before and after communication.
UCX support for CUDA streams could be one solution, but I also wanted to explore other options. I wasn’t aware of libmp; I will definitely have a look at it and at NCCL to see how streams are integrated there.
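For reference, as I understand NCCL's point-to-point API, the calls already take a cudaStream_t, so the receive and the kernel that consumes it can be enqueued back-to-back with no host-side wait in between. A rough sketch (names are illustrative, error handling omitted):

    #include <nccl.h>
    #include <cuda_runtime.h>

    __global__ void process(double *buf, int n) {   /* illustrative consumer kernel */
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 2.0;
    }

    void stream_recv_then_launch(double *d_buf, int n, int peer,
                                 ncclComm_t comm, cudaStream_t s) {
        ncclGroupStart();
        ncclRecv(d_buf, (size_t)n, ncclDouble, peer, comm, s);  /* enqueued on stream s */
        ncclGroupEnd();
        /* The stream orders the kernel after the receive; the host
         * never waits on the communication itself. */
        process<<<(n + 255) / 256, 256, 0, s>>>(d_buf, n);
    }

This is the kind of stream integration I want to compare against what UCX or libmp could offer.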
Best,
Jaemin Choi
PhD Candidate in Computer Science
Research Assistant at the Parallel Programming Laboratory
University of Illinois Urbana-Champaign
From: Jim Dinan <james.dinan at gmail.com>
Sent: Tuesday, February 9, 2021 6:08 PM
To: Choi, Jaemin <jchoi157 at illinois.edu>
Cc: Hybrid working group mailing list <mpiwg-hybridpm at lists.mpi-forum.org>
Subject: Re: [mpiwg-hybridpm] Call for Topics
We have been discussing this topic in the working group, and would be glad to have your input. If you're looking for something you can use today, NCCL is a good option. There's also a library called libmp that uses GPUDirect Async for stream-based communication [1].
~Jim.
[1] https://github.com/gpudirect/libmp
On Tue, Feb 9, 2021 at 3:09 PM Choi, Jaemin <jchoi157 at illinois.edu> wrote:
Hi Jim,
Do you happen to know if there has been progress on supporting CUDA streams in the MPI standard?
I’d be interested to hear about this or other frameworks (e.g. NCCL) that support asynchronous communication via streams.
Best,
Jaemin Choi
PhD Candidate in Computer Science
Research Assistant at the Parallel Programming Laboratory
University of Illinois Urbana-Champaign
From: mpiwg-hybridpm <mpiwg-hybridpm-bounces at lists.mpi-forum.org> On Behalf Of Jim Dinan via mpiwg-hybridpm
Sent: Tuesday, February 9, 2021 1:10 PM
To: Hybrid working group mailing list <mpiwg-hybridpm at lists.mpi-forum.org>
Cc: Jim Dinan <james.dinan at gmail.com>
Subject: [mpiwg-hybridpm] Call for Topics
Hi All,
The Hybrid & Accelerator WG will meet tomorrow. The agenda so far is empty. Please let me know if you have any topics that you would like to discuss.
Cheers,
~Jim.
_______________________________________________
mpiwg-hybridpm mailing list
mpiwg-hybridpm at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm