<br><font size=2 face="sans-serif">If there is any question about whether
these calls are still valid after an error with an error handler that returns
(MPI_ERRORS_RETURN or a user-supplied handler):</font>
<br>
<br><font size=2 face="sans-serif">MPI_Abort</font>
<br><font size=2 face="sans-serif">MPI_Error_string</font>
<br><font size=2 face="sans-serif">MPI_Error_class</font>
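A sketch of the pattern that depends on these three calls remaining valid (this is my illustration, not text from the standard; it assumes an MPI implementation and would be compiled with mpicc, so the deliberately invalid communicator is just one way to force an error return):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    /* Ask for error codes back instead of aborting on error. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int buf = 0;
    /* MPI_COMM_NULL is deliberately invalid, forcing a failed call. */
    int rc = MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_NULL);
    if (rc != MPI_SUCCESS) {
        int eclass, len;
        char msg[MPI_MAX_ERROR_STRING];
        MPI_Error_class(rc, &eclass);      /* must still be valid here */
        MPI_Error_string(rc, msg, &len);   /* must still be valid here */
        fprintf(stderr, "error class %d: %s\n", eclass, msg);
        MPI_Abort(MPI_COMM_WORLD, eclass); /* must still be valid here */
    }
    MPI_Finalize();
    return 0;
}
```

If any of the three could become invalid after the error, even this minimal cleanup path would be undefined.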
<br>
<br><font size=2 face="sans-serif">I assume this should be corrected as a
trivial oversight in the original text.</font>
<br>
<br><font size=2 face="sans-serif">I would regard the real issue as the
difficulty of assuring the state of remote processes.</font>
<br>
<br><font size=2 face="sans-serif">It is enormously difficult to make any
promise about how an interaction will behave between a process that has
taken an error and one that has not.</font>
<br>
<br><font size=2 face="sans-serif">For example, if there were a loop of
100 MPI_Bcast calls and on iteration 5 rank 3 used a bad communicator,
what is the proper state? Either a sequence number is mandated, so
the other ranks hang quickly, or a sequence number is prohibited, so everybody
keeps going until the "end", when the missing MPI_Bcast becomes
critical. Of course, with no sequence number, some tasks are blindly
using the iteration n-1 data for their iteration n computation.</font>
<br>
<br>
<br><font size=2 face="sans-serif">Dick Treumann - MPI Team
<br>
IBM Systems & Technology Group<br>
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601<br>
Tele (845) 433-7846 Fax (845) 433-8363<br>
</font>