[mpiwg-hybridpm] Meeting for June 12, 2024

Jim Dinan james.dinan at gmail.com
Wed Jun 12 18:52:59 CDT 2024


Hi Tony,

Apologies for not catching this sooner. There was no WG meeting today. We
moved to biweekly meetings because of reduced activity after the last spec
release. Our next meeting is supposed to be next Wednesday; however, it
lands on the Juneteenth holiday and I expect many folks from the US will be
off work. The next open meeting is July 17, but your topics might also fit
in well with Ryan's on July 3rd if you want to coordinate. Please let me
know when you'd like to cover these topics. The full meeting schedule is
posted here: https://github.com/mpiwg-hybrid/hybrid-issues/wiki

Best,
 ~Jim.

On Wed, Jun 5, 2024 at 10:32 AM Skjellum, Anthony via mpiwg-hybridpm <
mpiwg-hybridpm at lists.mpi-forum.org> wrote:

> Jim, and others, I would like to discuss these topics (frozen proposals
> from MPI-4 and 4.1) at the next meeting on June 12:
>
> Pbuf_prepare, Parrived_any
>
> I would like to see if we can get agreement on these and push into MPI-4.2
> or the next increment after that, rather than MPI-5.
> I am sure this will take more than one discussion, since we didn't get
> these through before.
>
> Note: They are both marked for MPI-5 currently, presumably because we
> don't have a broad opening for MPI-4.2, and we haven't agreed on an MPI-4.3.
>
>
> Here are the tickets/issues:
>
> https://github.com/mpi-forum/mpi-issues/issues/537
> (PR: https://github.com/mpi-forum/mpi-standard/pull/718) – Ryan prepared
> the PR long ago.
>
> [Link preview] MPI_Parrived_any API as an addition for Partitioned
> Communication · Issue #537 · mpi-forum/mpi-issues: "Problem: The ability
> to take the next available partition is not supported in the current
> MPI-4.0 API. Proposal: The API: MPI_Parrived_any(MPI_Request prequest,
> int *partition, int *flag); /* C inter..."
>
>
>
> https://github.com/mpi-forum/mpi-issues/issues/302
> (with outdated PR: https://github.com/mpi-forum/mpi-standard/pull/264)
> – Ryan prepared the PR long ago.
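>
> For context, the proposed API from issue #537 might be used as in the
> following sketch. This is illustrative only: MPI_Parrived_any is a
> proposal, not part of any released MPI standard, so this will not compile
> against current MPI implementations, and the process_partition helper is
> hypothetical.
>
> ```c
> #include <mpi.h>
>
> /* Hypothetical consumer loop for a partitioned receive, using the
>  * proposed MPI_Parrived_any(MPI_Request, int *partition, int *flag)
>  * to take whichever partition arrives next, instead of polling a
>  * specific index with MPI_Parrived. */
> void consume_partitions(MPI_Request prequest, int total_partitions)
> {
>     int completed = 0;
>     while (completed < total_partitions) {
>         int partition, flag;
>         MPI_Parrived_any(prequest, &partition, &flag);  /* proposed call */
>         if (flag) {
>             process_partition(partition);  /* hypothetical helper */
>             completed++;
>         }
>     }
> }
> ```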
>
>
> My apologies for the long delay in following up. We really need to resolve
> these issues, as they impact both point-to-point partitioned and potential
> collective partitioned communication.
>
> I would encourage additional commentary on the issues; Patrick, for
> example, has recently made new, relevant observations on ticket #302 about
> the dual roles of Pbuf_prepare.
>
> Thank you all,
> Tony
>
> PS: After that, I really do intend to hold a collective WG meeting to
> discuss partitioned collective ops 🙂 on June 19, July 3, or July 10,
> depending on what works in tandem with the Hybrid calendar.
>
>
> Anthony Skjellum, PhD
> Professor of Computer Science
> Tennessee Technological University
> email: askjellum at tntech.edu
> cell: +1-205-807-4968
>
>
> _______________________________________________
> mpiwg-hybridpm mailing list
> mpiwg-hybridpm at lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-hybridpm
>

