[mpiwg-sessions] [EXTERNAL] Re: more excitement - more nuanced response to issue 435

Pritchard Jr., Howard howardp at lanl.gov
Fri Feb 19 16:48:49 CST 2021


Hi Folks,

Revisiting MPI_Comm_free, I notice some rather hand-wavy advice to implementors:

Advice to implementors. Though collective, it is anticipated that this operation will
normally be implemented to be local, though a debugging version of an MPI library
might choose to synchronize. (End of advice to implementors.)

Maybe incorporate some wording like that into Rolf’s text, and add some verbiage about the usability, or lack thereof, of MPI objects upon return from MPI_Session_finalize?

Based on responses I’ll work something up tomorrow and open a PR – unless we think it’s too early to do that.

Howard

From: mpiwg-sessions <mpiwg-sessions-bounces at lists.mpi-forum.org> on behalf of MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Reply-To: MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Date: Friday, February 19, 2021 at 1:17 PM
To: Martin Schulz <schulzm at in.tum.de>
Cc: Daniel Holmes <danholmes at chi.scot>, MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Subject: [EXTERNAL] Re: [mpiwg-sessions] more excitement - more nuanced response to issue 435

Hi Martin,

Comm size example: local procedure reads local information, result is 2, no problems.

Pending window sync: erroneous, "user must call all procedures necessary to complete involvement in communication", no problems.

One-sided passive-target put from left to right: seriously bad idea. It might work if the target hardware is capable; it might fail if the target needs a software agent, although such an agent should still be active because the window has not been freed, and the failure would be discovered at win unlock. If it must fail, then it should probably be prohibited in MPI-4.1 somehow - the wording is complex.

What about a process ending normally/properly immediately after session finalize?

Note: local here means weakly local - it is permitted to depend on remote progress, so making sure data gets onto/off of the wire before returning is permitted.

Intrigue galore!

Cheers,
Dan.


19 Feb 2021 19:48:43 Martin Schulz <schulzm at in.tum.de>:
Hi Dan, all,


To the first part: I would also prefer Rolf’s text with the unspecified collectives (we probably need to add files and windows, but that is easy) – it leaves quite a bit of wiggle room. With Ialltoall, I think we would be overspecifying, as this would declare session_finalize not only collective, but also synchronizing.


However, here too we don’t say what really happens with the MPI objects and what can and cannot be done afterwards – I think it would be legal (as of now) to create a persistent communication request, call finalize (with no pending communication), and then call start. We obviously don’t want that.
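To make that concrete, here is a hedged C sketch of the pattern (the pset name and string tag are illustrative placeholders; the self-message is only there to keep the example complete):

#include <mpi.h>

/* Sketch of the questionable pattern: a persistent request is created on a
 * communicator derived from a session, and started only AFTER the session
 * has been finalized. As currently worded, this appears to be legal. */
int main(void)
{
    MPI_Session session;
    MPI_Group group;
    MPI_Comm comm;
    MPI_Request sreq, rreq;
    int sbuf = 42, rbuf = 0, rank;

    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
    MPI_Comm_create_from_group(group, "example.issue435", MPI_INFO_NULL,
                               MPI_ERRORS_RETURN, &comm);
    MPI_Group_free(&group);
    MPI_Comm_rank(comm, &rank);

    /* Persistent send to self; no communication is pending yet. */
    MPI_Send_init(&sbuf, 1, MPI_INT, rank, 0, comm, &sreq);

    MPI_Session_finalize(&session);     /* nothing pending at this point */

    /* Start the persistent request on an object derived from the
     * already-finalized session -- the case we obviously don't want. */
    MPI_Irecv(&rbuf, 1, MPI_INT, rank, 0, comm, &rreq);
    MPI_Start(&sreq);
    MPI_Wait(&sreq, MPI_STATUS_IGNORE);
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);

    MPI_Request_free(&sreq);
    MPI_Comm_disconnect(&comm);
    return 0;
}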




As for the second part:


Branch at Q1, freeing everything at the end – I am not sure that this is against the rule of least astonishment. It is different from Finalize, but that should not be astonishing. I would compare this to memory allocation: if I write a program that allocates a lot of memory, I normally don’t bother freeing it before exit; I simply don’t care. If I write a library with an “end” call, though, I had better clean everything up, or I have written a bad library. I think this would be second nature to programmers.


If we go on the premise, though, that this is too much for MPI programmers, then in principle I like the idea of keeping session_finalize local, as this is the only option to keep init and finalize parallel, i.e., to have matching semantics. However, I am not sure what this would mean in some areas:


Process 0:                        Process 1:
  Session Init A                    Session Init A
  C1 = from_g(from_s, A)            C1 = from_g(from_s, A)
                                    Session Finalize
  Barrier on COMM_WORLD             Barrier on COMM_WORLD
  Comm_get_size(C1)


What is the value here?
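In C, this could look roughly like the following hedged sketch (the World Model is initialized alongside the session so that the barrier on COMM_WORLD exists independently of it; pset/tag names are illustrative, and two processes are assumed). Per Dan’s answer above, MPI_Comm_size is a local procedure reading local information, so the value comes back as 2.

#include <mpi.h>

/* Hedged sketch of the two-process timeline above. Rank 1 finalizes its
 * session before the barrier; rank 0 then queries C1. */
int main(int argc, char **argv)
{
    MPI_Session session;
    MPI_Group group;
    MPI_Comm c1;
    int rank, size;

    MPI_Init(&argc, &argv);                 /* World Model, for the barrier */
    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
    MPI_Comm_create_from_group(group, "example.c1", MPI_INFO_NULL,
                               MPI_ERRORS_RETURN, &c1);
    MPI_Group_free(&group);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)
        MPI_Session_finalize(&session);     /* right-hand column */

    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0)
        MPI_Comm_size(c1, &size);           /* what is the value here? */

    /* Cleanup elided: whether rank 1 may still touch c1 is the question. */
    MPI_Finalize();
    return 0;
}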


If we create a window with memory from 3 processes and then one finalizes the session, is the remaining memory still available? What does this mean for pending synchronization operations and open communication windows between the other 2?
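A hedged C sketch of that corner case (assuming a 3-process pset; names are illustrative):

#include <mpi.h>

/* Three processes expose memory in a window; process 2 then finalizes its
 * session while the window still exists. May processes 0 and 1 still do
 * passive-target puts to each other? To process 2's memory? */
int main(void)
{
    MPI_Session session;
    MPI_Group group;
    MPI_Comm comm;
    MPI_Win win;
    int *base, rank, val = 7;

    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
    MPI_Comm_create_from_group(group, "example.win", MPI_INFO_NULL,
                               MPI_ERRORS_RETURN, &comm);
    MPI_Group_free(&group);
    MPI_Comm_rank(comm, &rank);

    MPI_Win_allocate((MPI_Aint)sizeof(int), sizeof(int), MPI_INFO_NULL,
                     comm, &base, &win);

    if (rank == 2) {
        MPI_Session_finalize(&session);  /* window never freed by rank 2 */
    } else {
        /* Passive-target put between the two remaining processes. Per
         * Dan's note above, a software agent at the target should still
         * be active, because the window has not been freed. */
        MPI_Win_lock(MPI_LOCK_SHARED, 1 - rank, 0, win);
        MPI_Put(&val, 1, MPI_INT, 1 - rank, 0, 1, MPI_INT, win);
        MPI_Win_unlock(1 - rank, win);
        MPI_Win_free(&win);   /* collective: can it complete without rank 2? */
    }
    return 0;
}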


We can probably construct more corner cases.


In general, I am not trying to shoot down the idea (I find it intriguing, as it preserves the symmetry of init and finalize), but I think it could have quite some consequences. Forcing a free, or describing the interleaved collective nature, would be a small step (text- and correctness-wise) and would allow most (all?) other solutions to be added in 4.1 with some time to discuss them.


Martin






--
Prof. Dr. Martin Schulz, Chair of Computer Architecture and Parallel Systems
Department of Informatics, TU-Munich, Boltzmannstraße 3, D-85748 Garching
Member of the Board of Directors at the Leibniz Supercomputing Centre (LRZ)
Email: schulzm at in.tum.de






From: mpiwg-sessions <mpiwg-sessions-bounces at lists.mpi-forum.org> on behalf of Dan Holmes via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org>
Reply-To: MPI Sessions working group <mpiwg-sessions at lists.mpi-forum.org>
Date: Friday, 19. February 2021 at 20:16
To: mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org>
Cc: Dan Holmes <danholmes at chi.scot>
Subject: Re: [mpiwg-sessions] more excitement - more nuanced response to issue 435


Hi Howard,


My initial impression (from reading your email but not yet looking at the PDF) is:
* I much prefer Rolf’s suggested reference to generic/unspecified “collective operations” rather than nailing it down to MPI_Ialltoall.
* I don’t like the restriction that the user must finalise sessions in a particular order to match the internal implementation of a single session finalise at some remote process (e.g. the scenario of Rolf’s case A on issue 435).


More fundamental: we need a decision tree to tease apart the design decisions we are making at pace and with no reference implementation.


First choice: does MPI_SESSION_FINALISE do anything non-local? If so, what?
If no, then next choice is:


Root Q1: Do we wish to mandate that the user must clean up prior to MPI_SESSION_FINALISE? If so, then eek! Breach of rule of least astonishment.
Branch Q2: If no, then do we wish to mandate that MPI_SESSION_FINALISE does whatever cleanup has not been done by the user? If so, eek! Significant change to accepted text.
Branch Q3: If no, then does MPI_SESSION_FINALISE do anything non-local? If so, eek! What does it do? Panic.
Branch Q4: If no, then does MPI_SESSION_FINALISE need to be defined as collective? If so, eek! Why? Why does it need that semantic? Panic.
Branch Q5: If no, then does MPI_SESSION_FINALISE need to be defined as non-local? If so, eek! Why? Why does it need that semantic? Panic.
Branch Q6: If no, then should we define MPI_SESSION_FINALISE as local (meaning weak-local, of course)? If so, strike all text about collective operation(s) of any kind, strike any restriction on ordering of calls, and strike any restriction on the permitted associations/derivations of communicators from sessions.

This is a linear decision tree that leads to:

"
MPI_SESSION_FINALIZE is a local procedure; it does not free MPI objects derived from the session. It is erroneous to use MPI objects derived from a session after calling MPI_SESSION_FINALIZE for that session.

If the user wishes to recover resources from MPI objects derived from a session, then appropriate calls to MPI procedures must be made by the user prior to calling MPI_SESSION_FINALIZE, such as MPI_COMM_DISCONNECT (for communicators), MPI_WIN_FREE (for windows), and MPI_FILE_CLOSE (for files).
"
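Under that text, a well-behaved shutdown would look something like this hedged sketch (object creation elided; the function name is illustrative):

#include <mpi.h>

/* User-side cleanup under the proposed text: recover all resources
 * derived from the session first, so that the final call is purely
 * local, with nothing left to free. */
static void shutdown_session(MPI_Session *session, MPI_Comm *comm,
                             MPI_Win *win, MPI_File *fh)
{
    MPI_File_close(fh);             /* files */
    MPI_Win_free(win);              /* windows */
    MPI_Comm_disconnect(comm);      /* communicators */
    MPI_Session_finalize(session);  /* local: nothing derived remains */
}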

Discuss.

Cheers,
Dan.
—
Dr Daniel Holmes PhD
Executive Director
Chief Technology Officer
CHI Ltd
danholmes at chi.scot<mailto:danholmes at chi.scot>





On 19 Feb 2021, at 18:13, Pritchard Jr., Howard via mpiwg-sessions <mpiwg-sessions at lists.mpi-forum.org<mailto:mpiwg-sessions at lists.mpi-forum.org>> wrote:


Hi All,

Ah, this is exciting.  So I spent some time baking up verbiage about MPI_Session_finalize’s non-local behavior.

See the attached cutout from the results pdf.

I’ve added verbiage describing the semantics of session finalize (copying some wording from MPI_Sendrecv, or at least its flavor) in the event that the user has not cleaned up MPI objects associated with the session(s).
It’s a simple, easy-to-understand (I think) model.  Basically, session finalize has the semantics of an MPI_Ialltoall for each communicator still associated with the session at finalize, followed by a waitall.  As long as all other processes finalizing their sessions generate, in aggregate, a message pattern which matches, there is no deadlock.  If not, potential deadlock.  One takeaway from this is that, if we want MPI_Session_finalize to be a local op when the app doesn’t do its own cleanup, we can’t support arbitrary associations of communicators to sessions in each MPI process.
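In other words, MPI_Session_finalize would behave as if it did something like the following (a hedged as-if sketch of the semantics, not an implementation; names are illustrative):

#include <mpi.h>
#include <stdlib.h>

/* As-if model: post an MPI_Ialltoall on every communicator still
 * associated with the session, then wait on all of them. Zero-byte
 * exchanges: only the matching/synchronization behavior matters.
 * Deadlock-freedom depends on all finalizing processes generating a
 * matching aggregate pattern. */
static void session_finalize_as_if(MPI_Comm *comms, int ncomms)
{
    MPI_Request *reqs = malloc(ncomms * sizeof(MPI_Request));
    char sdummy, rdummy;

    for (int i = 0; i < ncomms; i++)
        MPI_Ialltoall(&sdummy, 0, MPI_BYTE, &rdummy, 0, MPI_BYTE,
                      comms[i], &reqs[i]);

    MPI_Waitall(ncomms, reqs, MPI_STATUSES_IGNORE);
    free(reqs);
}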

I’ve added some examples, and we can add more as needed.  We may have to change the presentation mechanism, however.

I didn’t want to open this as a PR at this point, hence this notification mechanism.

Howard

--

Howard Pritchard
HPC-ENV
Los Alamos National Laboratory

<temp.pdf>

