The failure happened in MPI_COMM_WORLD. I assume that, for this case, the fault-tolerance strategy of library A is based on collective repair. Do you propose to forbid collective repairs when the user relies on such a communication scheme?
How do you propose to deal with the following example?

MPI_COMM_WORLD is the communicator used in library A
MPI_COMM_2 is the communicator used in library B

rank 0: belongs to MPI_COMM_WORLD *and MPI_COMM_2*
 -> in library A: does some computation and communications that succeed
 -> in library A: computes a reversible checksum
 -> in library A: continues computation, calling other libraries often
 -> then, at some point in time:
 -> in library A: MPI_Send(MPI_COMM_WORLD, dst=1);
 -> crashes
 -> would have entered library B and done no communications at this step

rank 1: belongs to MPI_COMM_WORLD and MPI_COMM_2
 -> in library A: does some computation and communications that succeed
 -> in library A: computes a reversible checksum
 -> in library A: continues computation, calling other libraries often
 -> then, at some point in time:
 -> in library A: MPI_Recv(MPI_COMM_WORLD, src=0)
 -> detects the failure
 -> calls the error manager: collective repair
 -> would have entered library B and called: MPI_Send(MPI_COMM_2, dst=2);

rank 2: belongs to MPI_COMM_WORLD and MPI_COMM_2
 -> in library A: does some computation and communications that succeed
 -> in library A: computes a reversible checksum
 -> in library A: continues computation, calling other libraries often
 -> then, at some point in time:
 -> in library A: does not have to communicate at this step
 -> in library B: MPI_Recv(MPI_COMM_2, src=1);
 -> will never succeed
To invert the reversible checksum, rank 2 needs to enter the collective repair, in order to give the new rank 0 the data that the old rank 0 lost in the crash. Just because a library uses a collective approach to tolerate failures does not mean it must synchronize every time it calls another library: if library A depends heavily on lower-level libraries, it does not want to synchronize before each call, but it is fine to synchronize from time to time to compute the reversible checksum.
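For concreteness, here is a minimal C sketch of the three code paths above. The helpers collective_repair and compute_reversible_checksum are placeholders for library A's recovery machinery, not existing or proposed MPI routines; the sketch also assumes MPI_ERRORS_RETURN (so a failed receive reports an error instead of aborting) and, for brevity, that ranks are numbered identically in both communicators.

    #include <mpi.h>

    /* Hypothetical helpers -- placeholder names, not existing or proposed MPI API. */
    void collective_repair(MPI_Comm comm);     /* requires ALL surviving ranks of comm */
    void compute_reversible_checksum(void);

    void step(MPI_Comm comm_world, MPI_Comm comm_2, int rank)
    {
        int data = 0;
        compute_reversible_checksum();          /* the periodic, synchronizing step in library A */

        if (rank == 0) {
            /* library A */
            MPI_Send(&data, 1, MPI_INT, 1, 0, comm_world);
            /* crashes here; would then have entered library B without communicating */
        } else if (rank == 1) {
            /* library A */
            if (MPI_Recv(&data, 1, MPI_INT, 0, 0, comm_world,
                         MPI_STATUS_IGNORE) != MPI_SUCCESS)
                collective_repair(comm_world);  /* blocks forever: rank 2 never joins */
            /* library B (never reached) */
            MPI_Send(&data, 1, MPI_INT, 2, 0, comm_2);
        } else {                                /* rank 2 */
            /* library A: nothing to communicate at this step */
            /* library B */
            MPI_Recv(&data, 1, MPI_INT, 1, 0, comm_2, MPI_STATUS_IGNORE);
            /* never completes: rank 1 is stuck in collective_repair on comm_world */
        }
    }

The deadlock is the point of the example: rank 1 waits for rank 2 inside the collective repair on MPI_COMM_WORLD, while rank 2 waits for rank 1 inside library B on MPI_COMM_2.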
Thomas

2009/2/10 Erez Haba <erezh@microsoft.com>:
Don't do collective repair in rank 1. Do a non-collective repair (rank 1 does not require the participation of rank 2 to recover rank 0).
From: mpi3-ft-bounces@lists.mpi-forum.org [mailto:mpi3-ft-bounces@lists.mpi-forum.org] On Behalf Of Thomas Herault
Sent: Monday, February 09, 2009 4:47 PM
To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
Subject: [Mpi3-ft] Point2Point issue scenario with synchronous notification based on calling communicator only
Hi list,

With the help of others, here is an adaptation of the "counter-example" based on p2p communications only.

MPI_COMM_WORLD is the communicator used in library A
MPI_COMM_2 is the communicator used in library B
rank 0: belongs to MPI_COMM_WORLD only
 -> in library A: MPI_Send(MPI_COMM_WORLD, dst=1);
 -> crashes

rank 1: belongs to MPI_COMM_WORLD and MPI_COMM_2
 -> in library A: MPI_Recv(MPI_COMM_WORLD, src=0)
 -> detects the failure
 -> calls the error manager: collective repair
 -> would have entered library B and called: MPI_Send(MPI_COMM_2, dst=2);

rank 2: belongs to MPI_COMM_WORLD and MPI_COMM_2
 -> does nothing in library A except entering library B
 -> in library B: MPI_Recv(MPI_COMM_2, src=1);
 -> will never succeed
I understand from the discussion we had that a solution would be to validate MPI_COMM_WORLD on process 2 before entering library B. I agree with that, but I would like you to consider that it effectively means asking users to perform an $n^2$ communication operation before every call to every function of every library (and possibly on return from those calls) if they want to use collective repairs. I would advocate studying a less performance-killing approach, in which errors on any communicator would be notified in any MPI call.
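(For illustration only, here is a minimal sketch of what such a hand-rolled per-call validation might look like with standard point-to-point calls, assuming MPI_ERRORS_RETURN; validate_comm and collective_repair are hypothetical helper names, not existing or proposed MPI routines.)

    #include <mpi.h>

    /* Hypothetical liveness check: every rank exchanges a ping with every other
     * rank of comm, i.e. on the order of n^2 messages in total across the
     * communicator, before each library entry point. */
    static int validate_comm(MPI_Comm comm)
    {
        int rank, size, ok = 1, ping = 0, pong = 0;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        for (int peer = 0; peer < size; peer++) {
            if (peer == rank) continue;
            if (MPI_Sendrecv(&ping, 1, MPI_INT, peer, 99,
                             &pong, 1, MPI_INT, peer, 99,
                             comm, MPI_STATUS_IGNORE) != MPI_SUCCESS)
                ok = 0;                  /* a peer failed to answer */
        }
        return ok;
    }

    /* Usage before every call into library B (collective_repair is hypothetical): */
    /*   if (!validate_comm(MPI_COMM_WORLD)) collective_repair(MPI_COMM_WORLD);   */
    /*   library_B_entry(MPI_COMM_2);                                             */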
Bests,
Thomas
_______________________________________________
mpi3-ft mailing list
mpi3-ft@lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft