[Mpi3-ft] simplified FT proposal

George R Carr Jr george.carr at noaa.gov
Tue Jan 17 15:49:26 CST 2012


Hello,

I'm not sure that this really covers the use case
I presented at the July 2011 meeting. Mine is a
fault-tolerance case (not fault recovery) for a
semi-real-timeish processing situation. The
application must conclude within a maximum wall
clock limit. Think of this as having multiple
processing components, not all of which need to
complete for the result to be useful.

Testing at the beginning of an operation is not 
sufficient. The kind of process I have 
implemented (not in MPI) is:

for (... really long time ...) {
    communicate-nowait
    if (no ack/completion within the set time) {
        // declare failure of that other component
        test the affected communicator(s), checking for component failures
        if (continuation makes sense) {
            rebuild the affected communicator(s)
        } else {
            exit
        }
    }
}

The timeout feature is critical.
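
In MPI terms, the inner step might look roughly like the sketch below.
This is only an illustration, not proposed API: it assumes the pending
operation is a nonblocking receive for the ack, and wait_with_timeout()
and the timeout value are names of my own, not anything in the proposal.

#include <mpi.h>

/* Illustration only: poll a nonblocking operation until it completes
   or a wall-clock budget expires. */
static int wait_with_timeout(MPI_Request *req, double timeout_sec)
{
    double start = MPI_Wtime();
    int done = 0;
    while (!done) {
        MPI_Test(req, &done, MPI_STATUS_IGNORE);
        if (!done && (MPI_Wtime() - start) > timeout_sec) {
            /* No ack/completion within the budget: give up on this
               request and declare the other component failed. */
            MPI_Cancel(req);
            MPI_Wait(req, MPI_STATUS_IGNORE); /* complete the cancelled
                                                 receive request */
            return 0;   /* timed out */
        }
    }
    return 1;           /* completed normally */
}

On a timeout, the caller would then test the affected communicator(s)
and either rebuild them or exit, as in the pseudocode above.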

Regards,
George


>FT WG,
>
>I would like to thank Josh for enduring the 
>marathon plenary presentation! It was truly 
>commendable.
>
>Based on the Forum feedback and vote, it is
>apparent that there are some significant issues,
>primarily due to several new concepts and terms
>that the larger Forum does not believe to be
>required, or that present implementation
>challenges for the rest of the MPI library.
>
>I would like to argue for a simplified version
>of the proposal that covers a large percentage
>of use cases and resists adding new "features"
>for the full range of ABFT techniques. We should
>take a more pragmatic view and not sacrifice the
>entire FT proposal for the 1% fringe cases. Most
>apps just want to do something like this:
>
>for (... really long time ...) {
>    MPI_Comm_check(work_comm, &is_ok, &alive_group);
>    if (!is_ok) {
>        MPI_Comm_create_group(alive_group, ..., &new_comm);
>        // re-balance the workload and use new_comm in the rest of the computation
>        MPI_Comm_free(&work_comm); // get rid of the old comm
>        work_comm = new_comm;
>    } else {
>        // continue the computation using work_comm;
>        // if some proc failed in this iteration, roll back the work
>        // done in this iteration and go back to the top of the loop
>    }
>}
>
>Here are some modifications I would like to
>propose to the current chapter (in the order in
>which these concepts/terms appear in the text):
>
>1.       Remove the concept of "recognized failed"
>processes. As was pointed out in the meeting, we
>don't really care about the failed processes but
>rather about the alive ones. Accordingly, rename
>MPI_Comm(win/file)_validate() to
>MPI_Comm(win/file)_check(MPI_Comm comm, int *
>is_ok, MPI_Group * alive_group);
>2.       Remove the concept of "collectively
>inactive/active". This doesn't really bring
>anything to the table; rather, it conflicts with
>the existing definition of collectives. MPI
>defines a collective as being equivalent to a
>series of point-to-point calls. By that
>definition, if the point-to-point calls succeed
>(i.e., the corresponding processes are alive),
>then, as locally observed, the collective call
>has also succeeded. As far as the application is
>concerned, as long as the local part of the
>collective completes successfully, it is OK.
>If it wants to figure out the global status, it
>can always call MPI_Comm_check() or friends.
>3.       Eventually perfect failure
>detector/strongly complete/strongly
>accurate/etc.: We replace this discussion (and
>even remove much of 17.3) with a much more
>straightforward requirement: "Communication
>with a process completes with either success or
>error. In the case of communication with failed
>processes, communication calls and requests may
>complete with MPI_ERR_PROC_FAILSTOP." Note that
>the MPI standard requires all communication to
>complete before calling MPI_Finalize;
>therefore, the first part of this requirement is
>nothing new. The second part indicates that
>there is no guarantee that communication with a
>failed process *will* fail. Messages that were
>internally buffered before the real failure may
>still be delivered per existing MPI semantics.
>(A sketch of the application-side check appears
>after this list.)
>a.       This does raise a question from
>implementers: "When do I mark requests as
>MPI_ERR_PROC_FAILSTOP? How long do I wait?" The
>answer depends entirely on the implementation.
>Obviously, there is some requirement to deal
>with the process launcher runtime. Some
>implementations with connected mode may be able
>to leverage hardware or OS techniques to detect
>connections that have gone down; MPI
>implementations using connection-less transports
>may need additional work. However, *none* of
>this is new work or new concepts. As far as
>possible, we should talk minimally about what
>the MPI implementation might do to achieve this.
>4.       Remove process failure handlers -
>17.5.1, 17.5.2, 17.5.3, 17.5.4. The only way to
>find out whether something failed is to call
>MPI_Comm_check() and friends. This removes a
>whole lot of complexity associated with failure
>handlers. Failure handlers can be emulated over
>this interface as a library. We may consider
>them for MPI-3.1 (or 4).
>5.       Point-to-point communication: Remove
>the concept of MPI_ERR_ANY_SOURCE_DISABLED and
>the corresponding calls to re-enable any source.
>The concept of disabling ANY_SOURCE is
>counter-intuitive. When an app/lib posts a recv
>with ANY_SOURCE, it is specifically telling the
>MPI library that *any* source is OK, which
>implicitly means that if some senders are unable
>to send, the application/library does not care!
>Master/slave applications wishing to use FT
>features can periodically call MPI_Comm_check()
>(a sketch appears after this list).
>Additionally, if the master tries to send to a
>dead process, it may get an error. My guess is
>that master/slave apps are among the most
>resilient, and some even work with the current
>standard (MPI_ERRORS_RETURN). A benefit of
>removing this restriction is that we no longer
>have the threading complexities of re-enabling
>any source using reader/writer locks :-)
>Therefore, we can remove 17.6.3.
>6.       Retain MPI_Comm_drain() and 
>MPI_Comm_idrain() as they provide useful 
>functionality.
>7.       Collective communication: Rename
>comm_validate() to comm_check() as per the
>discussion above. We can keep
>comm_check_multiple() as it provides useful
>functionality for overlapping communicators by
>reducing the overhead of checking them. We can
>retain much of 17.7.2 while removing references
>to "collectively inactive". If the output of a
>collective depends on a contribution from a
>failed process, then obviously the collective
>fails. This is in keeping with point-to-point
>semantics: one cannot receive any data from a
>failed process. Keep in mind that the
>contribution from a failed process may have
>arrived before it failed, and that is OK (not
>flagged as a failure). Some collectives, such as
>MPI_Bcast, may succeed even if processes down
>the bcast tree have failed, since the sends may
>simply be buffered. The app/lib will only know
>whether a collective was a global success by
>either performing an Allreduce after the
>collective or calling comm_check() (see the
>sketch after this list). In any case, it is left
>to the app/lib, not MPI, to report failures of
>processes the library didn't try to communicate
>with during this op.
>8.       I am proposing that once a collective
>fails with MPI_ERR_PROC_FAIL_STOP, all
>subsequent collectives on that comm fail
>immediately with MPI_ERR_PROC_FAIL_STOP. The
>app/lib needs to use MPI_Comm_create_group() to
>fork off a new comm of live procs and continue
>with it. This is a deviation from the current
>proposal, which allows collectives on bad comms
>(after re-enabling collectives) and keeps 0s as
>contributions. I am aware that this might not
>fully satisfy all use cases (although at this
>point in time I cannot think of any), but in a
>broader view, we could think of this as a
>compromise to reduce complexity.
>9.       Example 17.4 changes only slightly: it
>calls comm_check() and then splits off the new
>communicator. Why keep failed procs in the
>communicator anyway?
>10.   Note that this change in semantics allows
>us to bypass the question that was raised: "Why
>does comm_size() on a communicator with failed
>procs still return the old value, alive_ranks +
>failed_ranks?" As I mentioned before, this is
>odd, and we should encourage the app/lib to deal
>only with known-alive ranks. The current
>proposal does the reverse - it forces the app to
>keep track of the "known failed". This causes
>confusion!
>11.   Process topologies (17.9) should change to
>say that we can only use communicators with live
>ranks, i.e., if you know your comm was bad,
>split off a new comm with live ranks. During the
>op, some ranks may fail, and that is OK since
>MPI_ERR_PROC_FAIL_STOP will be raised. This is
>mentioned in the current proposal.
>12.   Similar changes in semantics to windows and files.
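>
>(Re: point 3) For illustration, roughly what the completion check
>looks like from the application side. This is only a sketch; it
>assumes MPI_ERRORS_RETURN is set on work_comm, and partner, tag,
>and the recovery comments are placeholders:
>
>int buf, rc;
>MPI_Request req;
>MPI_Irecv(&buf, 1, MPI_INT, partner, tag, work_comm, &req);
>rc = MPI_Wait(&req, MPI_STATUS_IGNORE);
>if (rc == MPI_ERR_PROC_FAILSTOP) {
>    // the peer is known to have failed: call MPI_Comm_check(),
>    // rebuild the communicator, and re-balance
>} else if (rc != MPI_SUCCESS) {
>    // some other error
>}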
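>
>(Re: point 5) A rough sketch of the master side under this scheme;
>MPI_Comm_check() is the proposed call, while TAG_RESULT and
>time_for_periodic_check() are placeholders:
>
>int result, flag = 0, is_ok;
>MPI_Request req;
>MPI_Group alive_group;
>
>MPI_Irecv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT,
>          work_comm, &req);
>while (!flag) {
>    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
>    if (!flag && time_for_periodic_check()) {
>        MPI_Comm_check(work_comm, &is_ok, &alive_group);
>        if (!is_ok) {
>            // some slaves died: reassign their work to alive ranks;
>            // the pending any-source receive stays posted and usable
>        }
>        MPI_Group_free(&alive_group);
>    }
>}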
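>
>(Re: point 7) The "Allreduce after the collective" check is just an
>agreement on the local return codes. A minimal sketch, assuming
>MPI_ERRORS_RETURN; buf and count are placeholders:
>
>int rc, local_ok, global_ok;
>rc = MPI_Bcast(buf, count, MPI_DOUBLE, 0, work_comm);
>local_ok = (rc == MPI_SUCCESS);
>// agree on whether every local part succeeded; if the Allreduce
>// itself fails, treat the whole step as globally failed
>rc = MPI_Allreduce(&local_ok, &global_ok, 1, MPI_INT, MPI_LAND,
>                   work_comm);
>if (rc != MPI_SUCCESS || !global_ok) {
>    // globally failed: fall back to MPI_Comm_check() and rebuild
>}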
>
>Please let me know if I have overlooked some
>corner cases or misinterpreted the text of the
>current chapter. I gave it some thought, but the
>WG knows best!
>
>
>Thanks!
>
>===
>Sayantan Sur, Ph.D.
>Intel Corp.
>


-- 
George R Carr Jr
U.S. Department of Commerce
NOAA/OAR/GSD, DSRC R/GSD2, 2B142
Tel (best): 303-325-3334
Tel (NOAA): 303-497-4714
Email: George.Carr at noaa.gov