[Mpi3-ft] simplified FT proposal

Graham, Richard L. rlgraham at ornl.gov
Mon Jan 16 18:39:34 CST 2012


Before commenting in detail: what has been proposed is the result of roughly four years of work (with about a one-year hiatus) to figure out what apps want or need.  A great deal of effort went into gathering input from various groups, and the resulting proposal reflects this.  As a standard, MPI must satisfy the broad set of users, not select a subset to support.

There are two classes of error notification approaches:
  - notify only those directly affected by the failure
  - notify a broader, user-specified set of processes.  This is the failure handler concept added recently, which has been demonstrated to be useful in the context of FT linear algebra.

For the first case the principles are:
  - only those directly affected by the error are notified, via a return error code.  For collective communication, this implies that anyone using collective operations is notified after a failure occurs.  When wildcard receives are posted, the library cannot guess whether there is a problem, so some form of notification needs to occur even while no communication is in progress, to avoid deadlock.
  - Collective communications are expensive, so, under the assumption that failures are rare relative to the number of collective calls, validate is used to amortize the cost of collective integrity checks over several iterations, guided by the user (sketched below).  There has ALWAYS been the intent to let the user ask for more robust collectives, but this just did not make it into this version of the proposal.
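
As a rough sketch of the intent (mine, not text from the proposal - work_comm, the neighbor ranks, and the check interval are only illustrative):

/* Sketch of the "notify only those directly affected" model: failures
   surface as return codes on the ranks involved, and the collective
   integrity check is amortized over many iterations. */
enum { N = 1024, MAX_ITERS = 100000, CHECK_INTERVAL = 100 };
double sendbuf[N], recvbuf[N];
int left, right;   /* neighbor ranks, set up elsewhere */

MPI_Comm_set_errhandler(work_comm, MPI_ERRORS_RETURN);

for (int iter = 0; iter < MAX_ITERS; iter++) {
    int rc = MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, right, 0,
                          recvbuf, N, MPI_DOUBLE, left, 0,
                          work_comm, MPI_STATUS_IGNORE);
    if (rc != MPI_SUCCESS) {
        /* only the ranks talking to the failed process see an error;
           they can recover locally, without any global agreement */
    }
    if (iter % CHECK_INTERVAL == 0) {
        /* the amortized collective integrity check (the proposal's
           validate, or the check() call discussed below) goes here */
    }
}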

So there are actually a very small number of principles, but a large standard, so we need to deal with the implications throughout the entire standard ...

More specific comments in line.


On Jan 13, 2012, at 4:41 PM, Sur, Sayantan wrote:

FT WG,

I would like to thank Josh for enduring the marathon plenary presentation! It was truly commendable.

Based on the Forum feedback and vote, it is apparent that there are some significant issues, primarily due to several new concepts and terms that the larger Forum either does not believe to be required, or that present implementation challenges for the rest of the MPI library.

I would like to argue for a simplified version of the proposal that covers a large percentage of use cases and resists adding new “features” for the full range of ABFT techniques. It is better to take a more pragmatic view than to sacrifice the entire FT proposal for the 1% of fringe cases. Most apps just want to do something like this:

for(… really long time …) {
    MPI_Comm_check(work_comm, &is_ok, &alive_group);
    if (!is_ok) {
        MPI_Comm_create_group(alive_group, …, &new_comm);
        // re-balance workload and use new_comm in rest of computation
        MPI_Comm_free(&work_comm); // get rid of old comm
        work_comm = new_comm;
    } else {
        // continue computation using work_comm
        // if some proc failed in this iteration, roll back work done in this iteration, go back to loop
    }
}

[rich] Actually, this is only one use scenario, and one we deliberately did not want to require folks to use.  There is a specific goal to allow completely local recovery for those apps that do not need global consistency.  This was one of the very early requirements.  This is where I repeat that MPI needs to support a range of application scenarios, as there is not a single application style in use.



Here are some modifications I would like to propose to the current chapter (in order as these concepts/terms appear in the text):

1.       Remove concept of “recognized failed” processes. As was pointed out in the meeting, we don’t really care about the failed processes, rather the alive ones. Accordingly, rename MPI_Comm(win/file)_validate() to MPI_Comm(win/file)_check(MPI_Comm comm, int * is_ok, MPI_Group * alive_group);

[rich] It probably depends on the app whether or not it cares about the failed processes.  In a client/server scenario, one cares a lot whether the failure is in the server or in one of the clients.
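
For what it is worth, either view can be derived from the other with standard group operations.  A minimal sketch, assuming the MPI_Comm_check() signature proposed in point 1 and the work_comm from the example above:

/* Derive the failed set from the alive_group returned by the proposed
   MPI_Comm_check(); only standard MPI group operations are used. */
int is_ok, n_failed;
MPI_Group full_group, alive_group, failed_group;

MPI_Comm_check(work_comm, &is_ok, &alive_group);   /* proposed API */
MPI_Comm_group(work_comm, &full_group);            /* all original ranks */
MPI_Group_difference(full_group, alive_group, &failed_group);
MPI_Group_size(failed_group, &n_failed);
/* e.g. a server can now ask whether specific client ranks are in failed_group */
MPI_Group_free(&full_group);
MPI_Group_free(&failed_group);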

2.       Remove the concept of “collectively inactive/active”.  This doesn’t really bring anything to the table; rather, it conflicts with the existing definition of collectives. MPI defines collectives as being equivalent to a series of point-to-point calls. As per that definition, if the point-to-point calls succeed (i.e. the corresponding processes are alive), then, as locally observed, the collective call has also succeeded. As far as the application is concerned, as long as the local part of the collective completes successfully, it is OK. If it wants to figure out global status, it can always call MPI_Comm_check() or friends.

[rich] Your point is correct - MPI currently defines the collective over the entire set of ranks, so to proceed with a smaller set of ranks the apps need to be involved - only they know if this has any value.  The terms "collectively inactive/active" are a convenient shorthand to express whether or not collective communication can be used, rather than going through a longer explanation every time this concept is mentioned in the text.  I suppose your assertion is that collectives are useless if not all ranks participate - however, this is contrary to how some apps use collectives.  There is no one-size-fits-all here.

3.       Eventually perfect failure detector/strongly complete/strongly accurate/etc.: We replace this discussion (and even remove much of 17.3) with a much more straightforward requirement – “Communication with a process completes with either success or error.

[rich] So, Josh actually did quite a bit of work to ground what we are doing in the broader context of FT.  This actually conveys very specific requirements on the implementation - error notification must occur ...

In case of communication with failed processes, communication calls and requests may complete with MPI_ERR_PROC_FAILSTOP.” Note that the MPI standard requires all communication to complete before calling MPI_Finalize – therefore, the first part of this requirement is nothing new. The second part indicates that there is no guarantee that communication with a failed process *will* fail. Messages that were internally buffered before the actual failure may still be delivered per existing MPI semantics.

[rich] MPI completion semantics are not changed at all with this.

a.       This does raise a question from implementers: “When do I mark requests as MPI_ERR_PROC_FAILSTOP? How long do I wait?” The answer depends entirely on the implementation. Obviously, there is some requirement to deal with the process launcher/runtime. Implementations using a connected mode may be able to leverage hardware or OS techniques to detect connections that have gone down; MPI implementations using connection-less transports may need additional work. However, *none* of this is new work or new concepts. As far as possible, we should talk minimally about what the MPI implementation might do to achieve this.

[rich] I do not believe any implementation details are specified.  What is specified is that error notification must eventually occur.  The implication is that this happens within a finite period of time, but the proposal does not specify what this period is.
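
As an illustration of what "completes with either success or error" looks like from the application side - a sketch only, assuming the proposed MPI_ERR_PROC_FAILSTOP error class and an array of nreq outstanding requests; the MPI_ERR_IN_STATUS / MPI_ERR_PENDING machinery is existing MPI:

int rc = MPI_Waitall(nreq, reqs, statuses);
if (rc == MPI_ERR_IN_STATUS) {
    for (int i = 0; i < nreq; i++) {
        if (statuses[i].MPI_ERROR == MPI_ERR_PROC_FAILSTOP) {
            /* the peer of reqs[i] is known to have failed;
               the request is complete, in error */
        } else if (statuses[i].MPI_ERROR == MPI_ERR_PENDING) {
            /* not yet completed; it may still finish successfully,
               e.g. from data buffered before the failure */
        }
    }
}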

4.       Remove process failure handlers – 17.5.1, 17.5.2, 17.5.3, 17.5.4. The only way to find out if something failed is to call MPI_Comm_check() and friends. This removes a whole lot of complexity associated with failure handlers. Failure handlers can be emulated on top of this interface as a library. We may consider them for MPI-3.1 (or 4).

[rich] Why remove support for a class of applications that has already used this approach?  This was added very recently to support a broader range of application scenarios, rather than focusing only on the first type of error notification.

5.       Point-to-point communication: Remove the concept of MPI_ERR_ANY_SOURCE_DISABLED and the corresponding calls to re-enable any source. The concept of disabling ANY_SOURCE is counter-intuitive. When an app/lib posts a recv with ANY_SOURCE, it is specifically telling the MPI library that *any* source is OK, which implicitly means that if some senders are unable to send, the application/lib does not care!

[rich] This is actually wrong.  Posting an any-source request says that data from any source may be matched, but it says nothing at all about what to do if a failure occurs.  I have direct experience with apps written on the order of ten years ago, where all data was received as any-source, so a process failure with no means of notification would cause such apps to hang.  We did feel that if an error occurred after an any-source was posted, the user should have input as to whether or not to continue.

Master/slave type applications wishing to use FT features can periodically call MPI_Comm_check(). Additionally, if the master tries to send to a dead process, it may get an error. My guess is that master/slave type apps are among the most resilient, and some even work with the current standard (MPI_ERRORS_RETURN). A benefit of removing this restriction is that we no longer have the threading complexities of re-enabling any source using reader/writer locks :) Therefore, we can remove 17.6.3.
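
For example, a master could combine a wildcard receive with occasional liveness checks - a sketch, assuming the proposed MPI_Comm_check() from point 1 can be called whenever the master wants a liveness snapshot, and that work_comm is the communicator from the example above:

MPI_Request req;
MPI_Status status;
MPI_Group alive_group;
int flag = 0, is_ok, polls = 0;
double result;

MPI_Irecv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, work_comm, &req);
while (!flag) {
    MPI_Test(&req, &flag, &status);
    if (!flag && ++polls % 1000 == 0) {   /* check liveness only occasionally */
        MPI_Comm_check(work_comm, &is_ok, &alive_group);   /* proposed API */
        if (!is_ok) {
            /* a worker died: reassign its work, or cancel the receive and
               rebuild the communicator as in the loop at the top of this mail */
        }
        MPI_Group_free(&alive_group);
    }
}
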
6.       Retain MPI_Comm_drain() and MPI_Comm_idrain() as they provide useful functionality.
7.       Collective communication: Rename comm_validate() to comm_check() as per the discussion above. We can keep comm_check_multiple() as it provides useful functionality for overlapping communicators by reducing the overhead of checking them. We can retain much of 17.7.2 while removing references to “collectively inactive”. If the output of a collective depends on a contribution from a failed process, then obviously the collective fails. This is in keeping with point-to-point semantics – one cannot receive any data from a failed process. Keep in mind that the contribution from a failed process may have arrived before it failed – and that is OK (not flagged as failure). Some collectives, such as MPI_Bcast, may succeed even if processes down the bcast tree have failed, since the sends may simply be buffered. The app/lib will only know whether a collective was a global success by either performing an Allreduce after the collective OR calling comm_check(). In any case, it is left to the app/lib, and not MPI, to report failures of processes the library didn’t try to communicate with during this op.

[rich] The library generally has much better access to error information, so it makes sense to get help from the library on this.  Many apps expect collectives to complete over the entire communicator, so if this is not going to be the case, apps need to opt in to use these.
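
For concreteness, the "Allreduce after the collective" agreement mentioned in point 7 would look something like the following sketch (standard MPI only; buf and the message size are illustrative, and MPI_ERRORS_RETURN is assumed on work_comm).  Note that the agreement Allreduce can of course fail for the same reason as the original collective - which is part of the debate here:

double buf[1024];
int rc = MPI_Bcast(buf, 1024, MPI_DOUBLE, 0, work_comm);
int local_ok = (rc == MPI_SUCCESS);
int global_ok = 0;
MPI_Allreduce(&local_ok, &global_ok, 1, MPI_INT, MPI_LAND, work_comm);
if (!global_ok) {
    /* someone saw a failure: check the communicator and rebuild it
       as in the example at the top of this mail */
}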

8.       I am proposing that once a collective fails with MPI_ERR_PROC_FAIL_STOP, all subsequent collectives on that comm fail immediately with MPI_ERR_PROC_FAIL_STOP. The app/lib needs to use MPI_Comm_create_group() to fork off a new comm of live procs and continue with it. This is a deviation from the current proposal, which allows collectives on bad comms (after re-enabling collectives) and keeps 0s as contributions. I am aware that this might not fully satisfy all use cases (although at this point in time, I cannot think of any), but in a broader view, we could think of this as a compromise to reduce complexity.

[rich] Nothing prevents apps from doing this.  But there are apps that don't need this, so why add extra work for them?

9.       Example 17.4 changes only slightly to call comm_check() and then split off the new communicator. Why keep failed procs in the communicator anyway?
10.   Note that this change in semantics allows us to bypass the question raised: “Why does comm_size() on a communicator with failed procs still return the old value - alive_ranks + failed_ranks?” As I mentioned before, this is odd, and we should encourage the app/lib to only deal with known alive ranks. The current proposal does the reverse – it forces the app to keep track of the “known failed”. This causes confusion!

[rich] The choice to keep MPI_Comm_size/rank() returning the same value, regardless of the state of the communicator, is to avoid forcing ranks unaffected by the failure to have to do something when a failure happens.

Rich

11.   Process topologies (17.9) – should change to say that we can only use communicators with live ranks, i.e., if you know your comm was bad, split off a new comm with live ranks. During the op, some ranks may fail – and that is OK since MPI_ERR_PROC_FAIL_STOP will be raised. This is mentioned in the current proposal.
12.   Similar changes in semantics to windows and files.

Please let me know if I have overlooked some corner cases or misinterpreted the text of the current chapter. I gave it some thought, but the WG knows best!


Thanks!

===
Sayantan Sur, Ph.D.
Intel Corp.

_______________________________________________
mpi3-ft mailing list
mpi3-ft at lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft




