[Mpi3-ft] MPI Fault Tolerance scenarios
Erez Haba
erezh at MICROSOFT.com
Fri Mar 6 19:39:54 CST 2009
While putting together the next scenario I realized that it would be easy enough to modify the worker code in scenario #1 to enable master fault-tolerance.
Pseudo code for the hardened worker:
int main()
{
MPI_Init()
>>> MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
for(;;)
{
rc = MPI_Recv(src=0, &query, MPI_COMM_WORLD);
>>> if(rc != MPI_SUCCESS)
>>> {
>>> rc = MPI_Comm_Restart_rank(MPI_COMM_WORLD, 0);
>>> if(rc == MPI_SUCCESS)
>>> continue;
>>>
>>> exit(1);
>>> }
if(is_done_msg(query))
break;
process_query(&query, &answer);
MPI_Send(dst=0, &answer, MPI_COMM_WORLD);
}
MPI_Finalize()
}
This change seems easy enough... Note that we need to check the error code only after the receive and not after the send; the assumption is that if the send fails, so will the subsequent receive. (A concrete C sketch of this worker follows the list of assumptions below.)
Other assumptions:
- Eventually all ranks will detect that the master failed and will call restart_rank.
o However, they do not each start another copy; they only block until the process has been restarted.
- Any outstanding message sent from a worker to the master after it failed is flushed out by the MPI implementation (as most implementations do today).
- Use exit(1) rather than MPI_Abort if restart_rank fails, for the case where this rank could not restart the master but another rank could; in that case the master will restart this failing rank. Calling MPI_Abort would abort the entire job. (Hmmm... possibly calling MPI_Abort(MPI_COMM_SELF) might be okay.)
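To make this concrete, here is a compilable C rendering of the hardened worker, intended only as a sketch: MPI_Comm_Restart_rank is the call proposed in this thread (its prototype is assumed here, it does not exist in any MPI today), and the query/answer types, tag 0, and the helper functions are placeholders invented for illustration.

#include <mpi.h>
#include <stdlib.h>

/* Placeholder message types and helpers, invented for illustration only. */
typedef struct { int id; double data[64]; } query_t;
typedef struct { int id; double result;   } answer_t;

static int  is_done_msg(const query_t *q) { return q->id < 0; }
static void process_query(const query_t *q, answer_t *a) { a->id = q->id; a->result = 0.0; }

/* Proposed call from this thread; the prototype is assumed, not part of any MPI today. */
int MPI_Comm_Restart_rank(MPI_Comm comm, int rank);

int main(int argc, char **argv)
{
    query_t  query;
    answer_t answer;

    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    for (;;) {
        int rc = MPI_Recv(&query, sizeof(query), MPI_BYTE, 0, 0,
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (rc != MPI_SUCCESS) {
            /* Assume the master is down; block until some rank restarts it. */
            rc = MPI_Comm_Restart_rank(MPI_COMM_WORLD, 0);
            if (rc == MPI_SUCCESS)
                continue;
            exit(1);   /* not MPI_Abort: another rank may yet repair rank 0 */
        }

        if (is_done_msg(&query))
            break;

        process_query(&query, &answer);
        /* No error check after the send; if it fails, the next receive fails too. */
        MPI_Send(&answer, sizeof(answer), MPI_BYTE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}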
Now, what is the point of restarting the master if all messages get flushed and there isn't actually any state held by any rank?
Well, the first thing that comes to mind is the startup time of a large job: without master recovery you would need to restart all processes on all nodes. The other is resource allocation on a batch system. If the job gives up its resources just to restart immediately, the batch system might (a) not have the resources available immediately, as other jobs are running and expanding, or (b) queue the restart for a later time because there are other jobs already in the queue.
So in this case the reason to implement master recovery would be the latency to completion in case of a failure.
Another reason would be perception: not bothering the user with an indication that the job failed.
What do you think?
Thanks,
.Erez
From: Erez Haba
Sent: Wednesday, February 25, 2009 3:21 PM
To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
Subject: RE: MPI Fault Tolerance scenarios
Updated the wiki page code to simplify how the code reads (the error case after MPI_Waitany):
https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ft/scenarios_and_solutions
From: mpi3-ft-bounces at lists.mpi-forum.org [mailto:mpi3-ft-bounces at lists.mpi-forum.org] On Behalf Of Erez Haba
Sent: Wednesday, February 25, 2009 10:06 AM
To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
Subject: Re: [Mpi3-ft] MPI Fault Tolerance scenarios
Thanks, Greg, for catching this. I fixed the setting of 'repairing[i] = false' in the example below and on the wiki page.
Added the lines
>>> else
>>> {
>>> repairing[i] = false;
>>> }
From: mpi3-ft-bounces at lists.mpi-forum.org [mailto:mpi3-ft-bounces at lists.mpi-forum.org] On Behalf Of Erez Haba
Sent: Wednesday, February 18, 2009 12:04 PM
To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
Subject: Re: [Mpi3-ft] MPI Fault Tolerance scenarios
I've posted this scenario on the FT wiki pages
https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ft/scenarios_and_solutions
From: mpi3-ft-bounces at lists.mpi-forum.org [mailto:mpi3-ft-bounces at lists.mpi-forum.org] On Behalf Of Erez Haba
Sent: Tuesday, February 17, 2009 6:53 PM
To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
Subject: [Mpi3-ft] MPI Fault Tolerance scenarios
Hello all,
In our last meeting we decided to build a set of FT scenarios/programs to help us understand the details of the interface needed to support those scenarios. We also decided to start with very simple scenarios and add more complex ones as we understand the former better. I hope that starting with simple scenarios will help us build a solid foundation on which we can build the more complex solutions.
When we build an FT solution we will focus on the scenario as described, without complicating the solution just because something would be needed later for a more complex one. The time will come later to modify the solution as we acquire more knowledge and build the foundations. Hence, any proposal or change that we make needs to fit exactly the scenario at hand (and all those that we previously looked at), but no more.
For example, in the first scenario that we'll look at there is no need for saving communicator state or for an error callback, but they might be required later.
Note that these scenarios focus on process FT rather than checkpoint/restart or network degradation. I assume we'll do the latter later.
Scenario #1: Very Simple Master-Workers
Description
This is a very simple master-workers scenario. Simple as it is, customers have asked us many times to support FT in it.
In this case the MPI application runs with n processes, where rank 0 is used as the master and the remaining n-1 ranks are used as workers. The master generates work (either by getting it directly from user input or by reading a file) and sends it for processing to a free worker rank. The master sends requests and receives replies using MPI point-to-point communication. Each worker waits for an incoming message; upon arrival it computes the result and sends it back to the master. The master stores the result in a log file.
Hardening: The goal is to harden the workers; the master itself is not FT, thus if it fails the entire application fails. The workers are FT and are replaced to keep the computation power for this application. (A twist: if a worker cannot be recovered, the master can work with a smaller set of workers, down to a low watermark.)
Worker
The worker waits on a blocking receive; when a message arrives it processes it. If a 'done' message arrives, the worker finalizes MPI and exits normally.
Hardening: There is no special requirement for hardening here. If the worker encounters a communication problem with the master, it means that the master is down and it is okay to abort the entire job. Thus, it uses the default error handler (which aborts on errors). Note that we do not need to modify the worker at all to make the application FT (only the master).
Pseudo code for the worker (unchanged):
int main()
{
MPI_Init()
for(;;)
{
MPI_Recv(src=0, &query, MPI_COMM_WORLD);
if(is_done_msg(query))
break;
process_query(&query, &answer);
MPI_Send(dst=0, &answer, MPI_COMM_WORLD);
}
MPI_Finalize()
}
Notice that for this FT code there is no requirement for the worker to rejoin the communicator, as the only communicator used is MPI_COMM_WORLD.
Master
The master code reads queries from a stream and passes them on to the workers to process. The master goes through several phases. In the initialization phase it sends the first request to each one of the ranks; in the second phase it shuts down any unnecessary ranks (if the job is too small to use all of them); in the third phase it enters its progress engine, where it handles replies (answers), process recovery, and termination (when the input ends).
Hardening: It is the responsibility of the master to restart any failing workers and to make sure that the request (query) does not get lost when a worker fails. Hence, every time an error is detected the master moves the worker into a repairing state and moves its workload to other workers.
The master runs with errors returned rather than aborting (MPI_ERRORS_RETURN).
One thing to note about the following code: it is not optimized. I did not try to overlap computation with communication (which is possible); I tried to keep it as simple as possible for the purpose of discussion.
Pseudo code for the hardened master; the code needed for repairing failed ranks is highlighted (marked with '>>>').
int main()
{
MPI_Init()
>>> MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
MPI_Comm_size(MPI_COMM_WORLD, &n);
MPI_Request r[n] = MPI_REQUEST_NULL;
QueryMessage q[n];
AnswerMessage a[n];
int active_workers = 0;
>>> bool repairing[n] = false;
//
// Phase 1: send initial requests
//
for(int i = 1; i < n; i++)
{
if(get_next_query(stream, &q[i]) == eof)
break;
active_workers++;
MPI_Send(dest=i, &q[i], MPI_COMM_WORLD);
rc = MPI_Irecv(src=i, buffer=&a[i], request=&r[i], MPI_COMM_WORLD)
>>> if(rc != MPI_SUCCESS)
>>> {
>>> start_repair(i, repairing, q, a, r, stream);
>>> }
}
//
// Phase 2: finalize any unnecessary ranks
//
for(int i = active_workers + 1; i < n; i++)
{
MPI_Send(dest=i, &done_msg, MPI_COMM_WORLD);
}
//
// The progress engine. Get answers; send new requests and handle
// process repairs
//
while(active_workers != 0)
{
rc = MPI_Waitany(n, r, &i, MPI_STATUS_IGNORE);
>>> if(!repairing[i])
>>> {
>>> if(rc != MPI_SUCCESS)
>>> {
>>> start_repair(i, repairing, q, a, r, stream)
>>> continue;
>>> }
process_answer(&a[i]);
>>> }
>>> else if(rc != MPI_SUCCESS)
>>> {
>>> // the repair failed; give up on this rank and do not send it more work
>>> active_workers--;
>>> continue;
>>> }
>>> else
>>> {
>>> repairing[i] = false;
>>> }
if(get_next_query(stream, &q[i]) == eof)
{
active_workers--;
MPI_Send(dest=i, &done_msg, MPI_COMM_WORLD)
}
else
{
MPI_Send(dest=i, &q[i], MPI_COMM_WORLD)
rc = MPI_Irecv(src=i, buffer=&a[i], request=&r[i], MPI_COMM_WORLD)
>>> if(rc != MPI_SUCCESS)
>>> {
>>> start_repair(i, repairing, q, a, r, stream);
>>> }
}
}
MPI_Finalize()
}
>>> void start_repair(int i, bool repairing[], Query q[], Answer a[], MPI_Request r[], Stream stream)
>>> {
>>> repairing[i] = true;
>>> push_query_back(stream, &q[i]);
>>> MPI_Comm_Irestart_rank(MPI_COMM_WORLD, i, &r[i]);
>>> }
Logic description (without FT)
The master code keeps track of the number of active workers through the active_workers variable, which is used solely for the purpose of shutdown. When the master is out of input, it shuts down the workers by sending them a 'done' message. It decreases the number of active workers and finalizes when this number reaches zero.
The master's progress engine waits on a vector of requests (note that entry 0 is not used, so as to simplify the code); once it gets an answer it processes it and sends the next query to that worker, until it is out of input.
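For reference, here is a self-contained C sketch of the master side without the FT branches, just to show the wait-any pattern concretely. The message types, tag 0, the simulated input stream, and the helper functions are placeholders, and the workers are assumed to run the worker loop shown earlier.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define TOTAL_QUERIES 100

/* Placeholder message types and helpers, matching the worker sketch above. */
typedef struct { int id; double data[64]; } query_t;
typedef struct { int id; double result;   } answer_t;

static int next_id = 0;
static int get_next_query(query_t *q)    /* 0 on success, EOF when the input ends */
{
    if (next_id >= TOTAL_QUERIES) return EOF;
    q->id = next_id++;
    return 0;
}
static void process_answer(const answer_t *a) { printf("answer %d: %g\n", a->id, a->result); }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int n;
    MPI_Comm_size(MPI_COMM_WORLD, &n);

    MPI_Request *r = malloc(n * sizeof(*r));
    query_t     *q = malloc(n * sizeof(*q));
    answer_t    *a = malloc(n * sizeof(*a));
    for (int i = 0; i < n; i++) r[i] = MPI_REQUEST_NULL;   /* entry 0 is never used */

    query_t done_msg = { .id = -1 };
    int active_workers = 0;

    /* Phase 1: send the initial query to each worker that has work. */
    for (int i = 1; i < n; i++) {
        if (get_next_query(&q[i]) == EOF) break;
        active_workers++;
        MPI_Send(&q[i], sizeof(q[i]), MPI_BYTE, i, 0, MPI_COMM_WORLD);
        MPI_Irecv(&a[i], sizeof(a[i]), MPI_BYTE, i, 0, MPI_COMM_WORLD, &r[i]);
    }

    /* Phase 2: shut down any ranks that got no work. */
    for (int i = active_workers + 1; i < n; i++)
        MPI_Send(&done_msg, sizeof(done_msg), MPI_BYTE, i, 0, MPI_COMM_WORLD);

    /* Phase 3: the progress engine.  MPI_Waitany skips MPI_REQUEST_NULL entries,
     * so the unused slot 0 and finished workers are never returned. */
    while (active_workers != 0) {
        int i;
        MPI_Waitany(n, r, &i, MPI_STATUS_IGNORE);   /* i = worker whose answer arrived */
        process_answer(&a[i]);

        if (get_next_query(&q[i]) == EOF) {
            active_workers--;
            MPI_Send(&done_msg, sizeof(done_msg), MPI_BYTE, i, 0, MPI_COMM_WORLD);
        } else {
            MPI_Send(&q[i], sizeof(q[i]), MPI_BYTE, i, 0, MPI_COMM_WORLD);
            MPI_Irecv(&a[i], sizeof(a[i]), MPI_BYTE, i, 0, MPI_COMM_WORLD, &r[i]);
        }
    }

    free(r); free(q); free(a);
    MPI_Finalize();
    return 0;
}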
Logic description (with FT)
The master detects a faulty worker either synchronously, when it tries to initiate an async receive (no need to check the send; the assumption is that if the send fails, so will the receive call), or asynchronously, when the async receive completes with an error. Once an error is detected (and identified as a faulty worker; more about this later), the master starts an async repair of that worker. If the repair succeeds, new work is sent to that worker. If it does not, the number of active workers is decreased and the master has to live with less processing power.
The code above assumes that if the returned code is an error, it should repair the worker; however, as we discussed, there could very well be many different reasons for an error here, not all of which are related to process failure. For that we might use something along the lines of
if(MPI_Error_event(rc) == MPI_EVENT_PROCESS_DOWN)...
It would be the responsibility of the MPI implementation to encode or store the event related to the returned error code.
(Note: in MPICH2 there is a mechanism that enables encoding extended error information in the error code, which can then be retrieved using MPI_Error_string.)
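Until something along the lines of the proposed MPI_Error_event exists, the standard error-query calls can at least report what went wrong; a minimal sketch of such a helper (classifying the error as a process failure would still be implementation-specific today):

#include <mpi.h>
#include <stdio.h>

/* Sketch: inspect a non-MPI_SUCCESS return code with the standard
 * error-query calls.  Whether a given error really means "process down"
 * is implementation-specific today; the proposed MPI_Error_event would
 * make that classification portable. */
static void report_error(int rc)
{
    int  error_class;
    int  len;
    char msg[MPI_MAX_ERROR_STRING];

    MPI_Error_class(rc, &error_class);   /* map the code to a standard error class */
    MPI_Error_string(rc, msg, &len);     /* implementation-specific detail string   */
    fprintf(stderr, "MPI error class %d: %s\n", error_class, msg);
}

In the master's progress engine such a helper could at least log why a repair was triggered; the portable classification itself is exactly the gap the proposed call would fill.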
Conclusions
I believe that the solution above describes what we discussed in the last meeting. The required APIs to support this FT are really minimal but already cover a good set of users.
Please, send your comments.
Thoughts?
Thanks,
.Erez
P.S. I will post this on the FT wiki pages (with the feedback).
P.P.S. There is one more scenario that we discussed, an extension of the master-workers model. I will try to write it up as soon as possible.