[mpiwg-hybridpm] Call for topics

Rohit Zambre rzambre at nvidia.com
Tue Oct 14 12:34:46 CDT 2025


When I looked at endpoints, I used this ticket from Jim as a reference for the endpoints proposal — https://github.com/mpi-forum/mpi-issues/issues/56

I think the endpoints solution seamlessly addresses a variety of use cases in hybrid MPI+X programming models.
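
For concreteness, here is a minimal sketch of one such use case — the GPU-ranks scenario Jeff describes below — using the MPI_Comm_create_endpoints call as written in the proposal above. Note this call is not part of the MPI standard, and the GPU count and OpenMP thread model are illustrative assumptions:

    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        /* Hypothetical setup: 4 GPUs per node, one thread driving each. */
        enum { NUM_GPUS = 4 };
        MPI_Comm ep_comm[NUM_GPUS];
        int provided;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* Proposed call (issue #56, not standardized): collectively
         * creates NUM_GPUS endpoints per calling process; ep_comm[i] is
         * a handle through which one thread acts as a distinct rank in
         * the resulting endpoints communicator. */
        MPI_Comm_create_endpoints(MPI_COMM_WORLD, NUM_GPUS,
                                  MPI_INFO_NULL, ep_comm);

        #pragma omp parallel num_threads(NUM_GPUS)
        {
            int tid = omp_get_thread_num(), ep_rank;

            /* Each thread is its own rank, so per-GPU communication
             * needs no extra GPU-management processes. */
            MPI_Comm_rank(ep_comm[tid], &ep_rank);
            MPI_Barrier(ep_comm[tid]);  /* collective over all endpoints */

            MPI_Comm_free(&ep_comm[tid]);
        }

        MPI_Finalize();
        return 0;
    }

The pre/post-processing threads stay in the same process, while each GPU still shows up as a rank, NCCL-style.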

Regards,
Rohit

From: mpiwg-hybridpm <mpiwg-hybridpm-bounces at lists.mpi-forum.org> on behalf of Joseph Schuchart via mpiwg-hybridpm <mpiwg-hybridpm at lists.mpi-forum.org>
Date: Monday, October 13, 2025 at 6:58 AM
To: Hybrid working group mailing list <mpiwg-hybridpm at lists.mpi-forum.org>
Cc: Joseph Schuchart <joseph.schuchart at stonybrook.edu>
Subject: Re: [mpiwg-hybridpm] Call for topics

Multi-device collectives are something we need to look into. Endpoints
seem a good starting point. Was there ever a full proposal for it? A
quick search only brought up Jim's 2014 paper [1] and some related papers.

I put it on the list of topics on the wiki [2]. We'll go over it this
Wednesday.

Cheers
Joseph

[1] https://dl.acm.org/doi/abs/10.1177/1094342014548772
[2] https://github.com/mpiwg-hybrid/hybrid-issues/wiki#active-topics

On 10/11/25 04:55, Jeff Hammond wrote:
> I am curious if people have any appetite to bring back endpoints, not
> for the original purpose, but in order to support GPU ranks.  This is
> one of the features that MPI lacks relative to NCCL that limit its use
> in some AI applications.  Some AI users do not want to create >1
> process per node just to manage a GPU, because this makes their
> pre/post-processing much harder.  They have multithreaded code for
> that already and don't want to rewrite it or play silly games with
> affinity to try to make the application efficient while idling GPU
> management processes.
>
> Jeff
>
> On Fri, Oct 10, 2025 at 5:34 PM Joseph Schuchart via mpiwg-hybridpm
> <mpiwg-hybridpm at lists.mpi-forum.org> wrote:
>>
>> Hi all,
>>
>> With the September Forum meeting behind us, we should pick up the
>> meetings again. Please let me know if you have any topics you want to
>> discuss.
>>
>> We plan to call for a joint meeting with the RMA WG to discuss device-side
>> operations but the RMA WG first needs to go through the feedback from
>> the Forum meeting. Given that people are busy with SC prep, it will
>> probably slip into December.
>>
>> Other topics are of course welcome :)
>>
>> Thanks
>> Joseph
>>
>> _______________________________________________
>> mpiwg-hybridpm mailing list
>> mpiwg-hybridpm at lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm
>
>

