[Mpi-forum] MPI_Abort - meaning

N.M. Maclaren nmm1 at cam.ac.uk
Tue Apr 6 16:42:08 CDT 2010

On Apr 6 2010, Jeff Squyres wrote:
>> I agree with Richard's reservations, incidentally.  About the best that
>> can be done is to specify an MPI_Quit that stops the calling process
>> cleanly (in the language sense) and has the effect of MPI_Abort on the
>> others.  But, as I said, even that is tricky to implement on some systems
>> and may not be reliable.
> You mentioned this in an earlier post -- can you explain some issues? For 
> example, you mention it might be difficult with systems that use sysv 
> shared memory. Why?

All a terminating process can do is detach the segment - if another
process that still has it attached goes belly-up, the segment remains
lurking.  The point is that, in ANY design, shared resources can be freed
only when the last client using them has stopped doing so, and (in
general) that requires all clients to clean up properly.

> I ask with the *assumption* that the MPI implementation can/will have 
> some kind of helper run-time system around that can do whatever cleanup 
> is required (MPI-2 dynamics more-or-less require this anyway).

I know :-(  It's one of the many reasons that I think they are a ghastly
mistake.

> Keep in 
> mind that I'm not saying that the run-time system has to be part of the 
> MPI implementation -- it may simply be provided on the system. For 
> example: if sysv memory segments are left lying around because MPI 
> processes are killed by the run-time system, a high-quality run-time 
> system should be able to clean up the shared memory as well (or at least 
> provide hooks such that MPI processes can tell the run-time system about 
> resources that would need to be cleaned up in the event of untimely MPI 
> process deaths, such as sysv shared memory segments).
> Is that a bad assumption?

Yes.  That can be done on zOS (previously MVS) and, I believe, VMS.
Just.  But you don't have an earthly under almost any Unix-derived system
(including, I believe, all of Microsoft's).  The point is that there is
no way for an unprivileged application to set up a 'super-process' that
can guarantee to get control, and have the power to clean up.

Consider running under a job scheduler that decides to kill the MPI job.
You can't control which processes it kills, or in which order.  In
particular, the clean-up process may not run until after the extra time
allowed for cleaning up has passed.  So you need to make the job
scheduler the clean-up process, or integrate it with the MPI.

But it gets worse.  Consider an MPI process that is blocked in a TCP/IP
operation.  In general, no unprivileged code can kill that process and
force it through termination so that its resources are released.  So you
now need to integrate the operating system with the job scheduler with
the MPI.  And many Unices have quite a lot of resources like that.

When you have NFS access problems (not exactly rare), even that may
not work ....

Nick Maclaren.
