[mpi-21] MPI_FINALIZE

mailman-bounces at [hidden]
Tue Jan 29 10:38:31 CST 2008


From: Richard Treumann <treumann_at_[hidden]>
Date: January 29, 2008 10:38:02 AM CST
To: "Mailing list for discussion of MPI 2.1" <mpi-21_at_[hidden]>
Subject: Re: [mpi-21] MPI_FINALIZE

I do not see a need for clarification.

As long as somewhere in the standard the following points are clear:
1) Only MPI_BARRIER promises barrier behavior.
2) Any collective may be implemented with barrier behavior as a side effect.

(Performance considerations make some collectives, e.g. MPI_Bcast, unlikely to
behave like a barrier, but the standard does not rule it out.)
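To make the two points concrete, here is a minimal C sketch (the scenario is
only illustrative, not taken from this discussion):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, token = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            token = 42;   /* hypothetical value produced by the root */

        /* Point 2: MPI_Bcast MAY synchronize as a side effect, but it is only
           required to deliver the value; the root may return before the other
           ranks have even entered the call.  Do not rely on it as a barrier. */
        MPI_Bcast(&token, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Point 1: only MPI_Barrier guarantees that no rank leaves the call
           before every rank has entered it. */
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }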

Dick

Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

 > Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas Nevin,
 >           Bill Gropp, Dick Treumann
 >
 > This is a follow up to:
 >   MPI_FINALIZE in MPI-2 (with spawn)
 >   in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
 > with mail discussion in
 >   http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/finalize/
 > _________________________________________
 >
 > If I understand correctly, a clarification is not needed
 > because the MPI standard already says everything that is needed, i.e.
 > - MPI_Finalize need not behave like a barrier,
 > - but it is allowed to contain a barrier.
 > - If the user wants one spawned process to exit while
 >   the others continue to work, he/she must
 >   disconnect this process before calling MPI_Finalize on it.
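
For illustration, a minimal sketch of the child side of this
disconnect-before-finalize pattern (it assumes the child was created with
MPI_Comm_spawn and that the parent makes the matching, collective
MPI_Comm_disconnect call on its side of the intercommunicator):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);  /* intercommunicator to the spawning job */

        /* ... this child's share of the work ... */

        if (parent != MPI_COMM_NULL) {
            /* Complete any pending communication with the parent, then sever
               the connection.  After this, MPI_Finalize in this process no
               longer involves the parent job, so this process can exit while
               the others keep working. */
            MPI_Comm_disconnect(&parent);
        }

        MPI_Finalize();
        return 0;
    }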
 >
 > If somebody wants a clarification to be included in the standard,
 > and therefore in Ballot 4, please send me your wording
 > with the page and line references included.
 >
 > If all agree that no clarification is needed, then I will close
 > this track.
 >
 > Best regards
 > Rolf
 >
 >
 > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
 > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
 > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
 > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
 > Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
 > _______________________________________________
 > mpi-21 mailing list
 > mpi-21_at_[hidden]
 > http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21






