[Mpi3-hybridpm] new text for MPI_INIT/MPI_FINALIZE

Marc Snir snir at mcs.anl.gov
Thu Feb 2 21:39:16 CST 2012


On Feb 2, 2012, at 1:30 PM, Darius Buntinas wrote:

> 
> Hi Marc,
> 
> Page 317, line 18:  Can we change that to:
> "Each MPI process must call an MPI initialization routine, MPI_INIT or MPI_INIT_THREAD, exactly once."
> The way it's currently written seems a little awkward to me.
OK
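
For illustration, a minimal C sketch of that rule -- exactly one initialization call per process, via either routine (the request for MPI_THREAD_MULTIPLE here is just an example):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Exactly one initialization call per process; plain MPI_Init
           would do as well when no thread support is needed. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        /* ... application code; 'provided' may be lower than requested ... */
        MPI_Finalize();
        return 0;
    }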
> 
> P318 L7:  I know you're not changing the current meaning, but that sentence seems to say that a process must call MPI_FINALIZE even if it calls MPI_ABORT.  I suspect the idea is that a program that calls MPI_ABORT does not "exit" but rather "aborts," but the way it's written appears contradictory to me.  Can we change that to:
> "Unless there has been a call to MPI_ABORT, before each process exits, the process must call MPI_FINALIZE.  Each process must ensure that all pending nonblocking communications are (locally) complete before calling MPI_FINALIZE."
OK
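
A minimal sketch of the local-completion requirement, assuming a two-process run:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int buf = 42, rank;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Isend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* The nonblocking send must be locally complete
               before this process may call MPI_Finalize. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        MPI_Finalize();   /* legal: no pending nonblocking operations */
        return 0;
    }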

> P320, L1:  Here you're adding the requirement that the MPI implementation must track any object it allocates.  MPICH currently keeps track of most objects (maybe all), but I don't think we should force that on an implementation.  I feel users should clean up after themselves; any good programmer already does.  Also, this requirement would mean that a buffer allocated with MPI_ALLOC_MEM is freed at MPI_FINALIZE.  Do we really want to do that?
> 
The example about MPI_ALLOC_MEM is not new; it has been in the standard for a while. The text is new, and I am looking for consistency. It is OK with me to say that the user should free all MPI objects (or else suffer memory leaks), i.e., MPI does not promise to free any resource explicitly allocated by MPI calls, only internally used resources not visible to the programmer. It is also OK to say that MPI should clean up. We just need to make a choice (and, if we make the first choice, make sure there is a "free" for each "alloc").
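
A minimal sketch of the first choice, pairing a "free" with each "alloc":

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        double *buf;
        MPI_Init(&argc, &argv);
        MPI_Alloc_mem(1024 * sizeof(double), MPI_INFO_NULL, &buf);
        /* ... use buf, e.g. as a communication or RMA buffer ... */
        MPI_Free_mem(buf);   /* user frees; no reliance on MPI_FINALIZE cleanup */
        MPI_Finalize();
        return 0;
    }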
> -d
> 
> On Feb 1, 2012, at 8:33 PM, Marc Snir wrote:
> 
>> <init-finalize.pdf>