[mpiwg-sessions] [EXTERNAL] RE: MPI_Session_init semantics question/poll
Holmes, Daniel John
daniel.john.holmes at intel.com
Thu Jan 5 06:50:15 CST 2023
Hi Martin,
This seems like the best reason so far for differentiating “immediate” as the term that refers to the stronger semantic.
immediate = prohibited from delaying its return – neither until (remote) progress nor until a (remote) specific semantically-related MPI procedure call
local = permitted to delay its return until (remote) progress, but not until a (remote) specific semantically-related MPI procedure call
nonlocal = permitted to delay its return until (remote) progress and/or until a (remote) specific semantically-related MPI procedure call
Permitting progress in an API that was intended to be “lightweight” is bad. Progress could mean that MPI_Session_init delays its return while it pushes a few GB of buffered-mode send message data into a network – correctness is not affected, but performance expectations are. The canonical example is whether MPI_Comm_rank can delay its return while it does (remote) progress – currently “yes, because it is local; but why would anyone implement it that way? It should be immediate.” The recent debate about whether MPI_Put is nonlocal is relevant here. Correctness allows us to say “it doesn’t matter if MPI_Put is nonlocal because the user cannot create a deadlock”, but the performance expectation of separating synchronisation from data movement suggests that MPI_Put should be immediate – not merely local, and definitely not nonlocal.
I would be much happier with some kind of explicit “update the list of process set names” API that has whatever semantic is needed, rather than allowing (remote) progress in any of the existing immediate APIs: “initialise a session”, “get the number of process set names”, “get the nth process set name”, and “make a group from this process set name”.
The trouble is: what is the appropriate semantic for the “update the list of process set names” API? We already have MPI_Session_init, which will give an updated list of process set names if it differs from the lists given by previous sessions. We only need a “delay until something changes” semantic. What needs to change to satisfy this semantic? Is the first difference sufficient? Can/should the user specify what they are looking for? Is any difference actually necessary (should it have a timeout, in case nothing changes)? Is it collective (and over which group)? Those questions are currently answered by the mechanism(s) that could possibly cause a change to the list of process sets – spawn is an MPI operation, connect/accept are MPI operations, and so on. Initialising another session after one of these operations has completed will already get the up-to-date list of process sets (whether it changed because of the dynamic process model operation or not). Any future proposal for an API that extends/modifies/prunes the list of process set names needs to have a semantic that permits the appropriate consensus in its implementation. The consensus is needed by operations like “give me more resources”, “take these resources back”, etc. – not by query functions.
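[For illustration only: a purely hypothetical shape such an API could take. The name MPIX_Session_pset_refresh and its info keys are inventions of this sketch – nothing like it exists in any MPI standard – but it shows how the open questions above could be surfaced as explicit arguments rather than buried in the query functions:]

```c
/* HYPOTHETICAL sketch only -- MPIX_Session_pset_refresh does not exist in
 * any MPI standard. One conceivable shape for an explicit "update the list
 * of process set names" procedure, with the open semantic questions exposed
 * as info hints instead of being implied by the query functions:
 *
 *   "timeout"  -- return even if nothing changed, after this many seconds
 *   "match"    -- a pset-name pattern the caller is specifically waiting for
 *
 * Whether the call is collective, and over which group, would still have to
 * be pinned down by whatever proposal introduces it. */
int MPIX_Session_pset_refresh(MPI_Session session, MPI_Info hints);
```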
Best wishes,
Dan.
From: Martin Schulz <schulzm at in.tum.de>
Sent: 04 January 2023 22:43
To: Holmes, Daniel John <daniel.john.holmes at intel.com>; MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Cc: Pritchard Jr., Howard <howardp at lanl.gov>
Subject: Re: [mpiwg-sessions] [EXTERNAL] RE: MPI_Session_init semantics question/poll
Hi Dan,
I fully agree with you on the difference between MPI and PVM and the need to keep things constant from the user’s perspective. Users should not be impacted by this, and hence I also agree that none of this can be non-local (as that would impact the user).
However, at some point the underlying runtime will have to come to a consensus, and that consensus may arrive later than a user would expect from a local procedure call. This is unlikely – probably not even possible – with the current static scheme and implementation, but it will likely happen when we add more dynamic behavior.
An example could be newly spawned processes and the question of when they are ready and have established all their individual process set memberships. This could, of course, be pushed off to the user, who would need to ensure the right timing, but it may also be better to give the runtime some leeway – again, not in the sense of non-local, but in the sense of weak progress.
This was actually my understanding of weak progress anyway – that any MPI routine could delay its return for weak progress – but that was hardened only recently to apply to operations only. For the current set of MPI routines this makes no difference in execution, but here we could – by accident – add a limiter for future implementations, and that is what I would like to avoid by opening up the chance for these routines to participate in progress.
Martin
--
Prof. Dr. Martin Schulz, Chair of Computer Architecture and Parallel Systems
Department of Informatics, TU-Munich, Boltzmannstraße 3, D-85748 Garching
Member of the Board of Directors at the Leibniz Supercomputing Centre (LRZ)
Email: schulzm at in.tum.de
From: "Holmes, Daniel John" <daniel.john.holmes at intel.com>
Date: Wednesday, 4 January 2023 at 10:17
To: Martin Schulz <schulzm at in.tum.de>, MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Cc: "Pritchard Jr., Howard" <howardp at lanl.gov>
Subject: RE: [mpiwg-sessions] [EXTERNAL] RE: MPI_Session_init semantics question/poll
Hi Martin,
MPI is not PVM. We do not wait to see which/how many processes start and join the group/process set before deciding on the membership of the group/process set. The names and the membership of all (built-in/predefined) process sets are known a priori without coordination during the initialisation procedure call(s). Deviation from that membership (e.g. a process fails to start or fails to join up with the other processes) is a fault, which will cause a failure (e.g. a collective operation cannot complete), which will manifest as an error. The process set still exists and a group can still be formed from it; the communicator creation procedure that uses that group will raise an error.
For scenarios/implementations where additional process sets “appear” during the execution, a new process set might not become visible until all involved processes can see the same new set name (depending on what the implementation can support); that might mean every involved process has to have made some progress after the process set was created internally before any process exposes it to the user via MPI calls. That delay must never happen for the built-in/predefined process sets, so we have no conflict or difficulty.
Best wishes,
Dan.
From: Martin Schulz <schulzm at in.tum.de>
Sent: 04 January 2023 19:43
To: MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>; Holmes, Daniel John <daniel.john.holmes at intel.com>
Cc: Pritchard Jr., Howard <howardp at lanl.gov>
Subject: Re: [mpiwg-sessions] [EXTERNAL] RE: MPI_Session_init semantics question/poll
Hi all,
I agree with this interpretation – I always thought that was the original intent; the non-local work should be able to be pushed off to the first communicator creation.
The question about it being an operation and/or a local call is interesting, though – I tend to see it the same way as Dan, but is there a scenario in which implementations may require some kind of progress in other MPI processes (e.g., to internally synchronize on process sets)? If so, would we have to classify at least some calls (perhaps only the query of the process sets) as (local) operations so we can mandate progress? Or maybe “have to” is too harsh, but would it allow implementations to be more efficient?
Martin
--
Prof. Dr. Martin Schulz, Chair of Computer Architecture and Parallel Systems
Department of Informatics, TU-Munich, Boltzmannstraße 3, D-85748 Garching
Member of the Board of Directors at the Leibniz Supercomputing Centre (LRZ)
Email: schulzm at in.tum.de
From: mpiwg-sessions <mpiwg-sessions-bounces at lists.mpi-forum.org> on behalf of "Pritchard Jr., Howard via mpiwg-sessions" <mpiwg-sessions at lists.mpi-forum.org>
Reply to: MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Date: Wednesday, 4 January 2023 at 09:30
To: "Holmes, Daniel John" <daniel.john.holmes at intel.com>, MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Cc: "Pritchard Jr., Howard" <howardp at lanl.gov>
Subject: Re: [mpiwg-sessions] [EXTERNAL] RE: MPI_Session_init semantics question/poll
Hi Dan,
Yes, that was my interpretation as well.
We can discuss at our next meeting 1/9/23 if there’s time.
Howard
From: "Holmes, Daniel John" <daniel.john.holmes at intel.com>
Date: Wednesday, January 4, 2023 at 12:05 PM
To: MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Cc: "Pritchard Jr., Howard" <howardp at lanl.gov>
Subject: [EXTERNAL] RE: MPI_Session_init semantics question/poll
Hi Howard,
It was always intended that MPI_Session_init was a local procedure. In fact, “initialise a session” is not even an MPI operation, so it doesn’t make sense for it to be expressed via a nonlocal procedure.
Further, it was intended that the nonlocal portion of the work done by MPI_Init that is eventually needed in the pure sessions pattern would be done during the first nonlocal procedure call in that pattern, as follows:
MPI_Session_init // local – PMIx fence prohibited
MPI_Group_from_pset // local – PMIx fence prohibited
MPI_Comm_create_from_group // nonlocal – PMIx fence permitted, if needed
The nonlocal work should be unnecessary until the first nonlocal procedure call, so this should all work out fine (modulo some refactoring/debugging).
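[As a concrete sketch, the pure sessions pattern above looks roughly like this in C. The MPI 4.0 spelling of “make a group from this process set name” is MPI_Group_from_session_pset; “mpi://WORLD” is the predefined world process set, and the string tag and error handler choices here are illustrative, not prescribed:]

```c
#include <mpi.h>

int main(void)
{
    MPI_Session session;
    MPI_Group   group;
    MPI_Comm    comm;

    /* Local -- PMIx fence (or any other inter-process exchange) prohibited. */
    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

    /* Local -- query a built-in process set and form a group from it. */
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

    /* Nonlocal -- the deferred exchange (e.g. a PMIx fence) may happen here,
     * if the implementation needs it. */
    MPI_Comm_create_from_group(group, "example.org/session-demo",
                               MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

    MPI_Comm_free(&comm);
    MPI_Group_free(&group);
    MPI_Session_finalize(&session);
    return 0;
}
```

[Run under an MPI 4.0-capable implementation (e.g. `mpicc` then `mpirun -n 4 ./a.out`); the point of the pattern is that the first two calls return without waiting on any other process.]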
Best wishes,
Dan.
From: mpiwg-sessions <mpiwg-sessions-bounces at lists.mpi-forum.org> On Behalf Of Pritchard Jr., Howard via mpiwg-sessions
Sent: 04 January 2023 18:32
To: MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Cc: Pritchard Jr., Howard <howardp at lanl.gov>
Subject: [mpiwg-sessions] MPI_Session_init semantics question/poll
Hi All,
First, Happy New Year!
I’ve got a question about the semantics of MPI_Session_init. In particular, I’d be interested in knowing people’s opinion on whether this function is nonlocal or local.
We don’t have any text in the current version of the standard that states whether or not MPI_Session_init is a nonlocal operation.
I’m considering options for handling this issue: https://github.com/open-mpi/ompi/issues/11166 . It turns out that the way to properly resolve it depends on whether MPI_Session_init has local or nonlocal semantics.
I had been working under the assumption that we had intended session initialization to be a local function, but considering how to resolve issue 11166 made me begin to question this assumption.
Thanks for any ideas,
Howard
—
Howard Pritchard
Research Scientist
HPC-ENV
Los Alamos National Laboratory
howardp at lanl.gov