[mpi-21] MPI_FINALIZE
Richard Treumann
treumann at [hidden]
Tue Jan 29 11:27:12 CST 2008
I do not see a need for clarification.
That is, as long as the following points are made clear somewhere in the standard:
1) Only MPI_BARRIER promises barrier behavior
2) Any collective may be implemented with barrier behavior as a side effect
(Performance considerations make some collectives, e.g. MPI_Bcast, unlikely to be
barrier-like, while an MPI_Allreduce will always be barrier-like. Either way,
the standard does not stipulate.)
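To make point 1) concrete, here is a minimal C sketch (my own illustration, not
from the standard): code that relies on MPI_Bcast to synchronize ranks is relying
on an implementation accident, because the root is permitted to return before any
other rank has entered the call. Only MPI_Barrier promises synchronization.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;

    /* MPI_Bcast delivers the value, but it is NOT a barrier: the
     * standard allows the root to leave this call before any other
     * rank has entered it.  An implementation MAY synchronize here
     * as a side effect, but portable code must not count on it. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* If synchronization is actually required, ask for it
     * explicitly -- MPI_Barrier is the only collective that
     * promises barrier semantics. */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```

Compile and run with an MPI implementation, e.g. `mpicc bcast.c && mpirun -np 4 ./a.out`.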
Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
mpi-21-bounces_at_[hidden] wrote on 01/28/2008 12:52:54 PM:
> Mainly to Jeff Squyres, Hubert Ritzdorf, Rolf Rabenseifner, Nicholas
Nevin,
> Bill Gropp, Dick Treumann
>
> This is a follow up to:
> MPI_FINALIZE in MPI-2 (with spawn)
> in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-
> errata/index.html
> with mail discussion in
> http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-
> errata/discuss/finalize/
> _________________________________________
>
> If I understand correctly, then a clarification is not needed
> because the MPI standard already expresses everything, i.e.
> - MPI_Finalize need not behave like a barrier,
> - but it is allowed to have a barrier inside.
> - If the user wants to exit one spawned process while
>   the others continue to work, he/she must
>   disconnect this process before calling MPI_Finalize on it.
>
> If somebody wants a clarification to be included into the standard
> and therefore in Ballot 4, please send me your wording
> with the page and line references included.
>
> If all agree that no clarification is needed, then I would close
> this track.
>
> Best regards
> Rolf
>
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
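For reference, the disconnect rule in the quoted mail can be sketched as follows
for a spawned child process (a minimal illustration of mine, with error handling
omitted):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    /* ... the child's actual work goes here ... */

    if (parent != MPI_COMM_NULL) {
        /* Sever the connection to the parent job so that this
         * process's MPI_Finalize need not wait for (nor be waited
         * on by) processes in the parent's world. */
        MPI_Comm_disconnect(&parent);
    }

    MPI_Finalize();   /* may now complete independently */
    return 0;
}
```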