[Mpi3-ft] Choosing a BLANK or SHRINK model for the RTS proposal

Josh Hursey jjhursey at open-mpi.org
Wed Jan 25 15:20:00 CST 2012


After discussion on the call, it was decided that we will continue with the
BLANK semantics. Some notes on the rationale from that discussion:
 - Though it might be possible to build BLANK on top of SHRINK, doing so
would be difficult and would force applications to use a non-standard,
third-party library. Getting SHRINK from BLANK is trivial: a single
MPI_Comm_split() call (see the first sketch after this list).
 - There are a few examples that use SHRINK, but they can just as easily be
implemented with BLANK; manager/worker applications were cited. So no strong
use case could be identified in which SHRINK is the preferred model.
 - The use case for BLANK comes from applications where a process's rank in
the communicator is meaningful for referencing data blocks (e.g., matrix
operations). In some of these algorithms a checksum is maintained in a spare
column/row, and MPI_Reduce is used to generate it (see the second sketch
after this list). Though most of these algorithms require replacing the
missing process, that is not strictly necessary for all failure modes.
 - A BLANK mode makes it generally easier to reason about process recovery.
 - SHRINK requires fewer changes to the current MPI standard, but the
additional statements to support BLANK are generally non-controversial (we
will discuss collectives in another email thread).
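
For reference, here is a minimal sketch (not part of the proposal text) of
how a SHRINK-style communicator can be derived from a BLANK-style one using
only standard MPI_Comm_split(). It assumes, as in the RTS proposal, that the
failures have already been acknowledged and that the failed processes simply
do not participate in the call; the proposal's failure-acknowledgment
routines are not shown.

    /* Minimal sketch: building a SHRINK-style communicator from a
     * BLANK-style one with standard MPI_Comm_split().  Assumes failed
     * processes have been acknowledged and do not participate; surviving
     * ranks are renumbered densely while keeping their relative order. */
    #include <mpi.h>

    MPI_Comm shrink_blank_comm(MPI_Comm blank_comm)
    {
        int my_rank;
        MPI_Comm shrunk_comm;

        MPI_Comm_rank(blank_comm, &my_rank);

        /* All survivors use the same color; keying on the old rank
         * preserves the original ordering among the survivors. */
        MPI_Comm_split(blank_comm, 0, my_rank, &shrunk_comm);

        return shrunk_comm;
    }

In the absence of failures this simply produces a same-sized copy of the
communicator, so the pattern is cheap to keep in application code.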
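
And a second sketch of the ABFT checksum-row idea mentioned above. The names
(local_row, checksum_row, NCOLS, spare_rank) are illustrative only; the point
is simply that a standard MPI_Reduce over the data rows produces the checksum
row held by a designated spare process.

    /* Illustrative sketch: each rank owns one row block of the matrix, and
     * a designated spare rank accumulates the element-wise sum of all rows
     * as a checksum row.  If a data row is later lost, it can be recovered
     * by subtracting the surviving rows from the checksum row.  All
     * identifiers here are hypothetical, not part of the RTS proposal. */
    #include <mpi.h>

    #define NCOLS 1024

    void update_checksum_row(double local_row[NCOLS],
                             double checksum_row[NCOLS],
                             int spare_rank, MPI_Comm comm)
    {
        /* The element-wise sum of every rank's row lands on spare_rank;
         * checksum_row is only significant there. */
        MPI_Reduce(local_row, checksum_row, NCOLS, MPI_DOUBLE,
                   MPI_SUM, spare_rank, comm);
    }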

So, after a lengthy discussion, the group decided to keep the current model
based on a BLANK-like mode, given both the flexibility of that mode and the
assessment of the use cases.

Some requested references are at the bottom. There are others, but these
should get anyone interested started.

-- Josh

----------------------------------------------
Bosilca, G., Delmas, R., Dongarra, J., Langou, J.
"Algorithm-based fault tolerance applied to high performance computing"
2008
http://dx.doi.org/10.1016/j.jpdc.2008.12.002
----------------------------------------------
Engelmann, C., Geist, A.
"Super-Scalable Algorithms for Computing on 100,000 Processors"
2005
http://www.springerlink.com/index/10.1007/11428831_39
----------------------------------------------
Huang, K.H., Abraham, J.A.
"Algorithm-Based Fault Tolerance for Matrix Operations"
1984
http://dx.doi.org/10.1109/TC.1984.1676475
----------------------------------------------

On Wed, Jan 25, 2012 at 5:42 AM, TERRY DONTJE <terry.dontje at oracle.com> wrote:

>  One of the things I am not enamored with in the SHRINK mode is that you
> cannot implement BLANK with it, but the converse is true.  So to me the
> BLANK mode seems like a lower level of routines that you can build up from,
> which seems like a good idea. I admit it might be a little more complicated,
> but I am not sure it is that much more complicated than SHRINK.
>
> --td
>
>
> On 1/24/2012 2:41 PM, Graham, Richard L. wrote:
>
> I will reiterate what I said before.  While this may be one mode that apps may want to use, it is not the only mode.  In particular, this forces all ranks in a communicator to know about the change, even if they have implemented a "local" algorithm that does not need to know about all failures.
>
> Rich
>
> On Jan 24, 2012, at 2:09 PM, Sur, Sayantan wrote:
>
> Hi Josh,
>
> Thanks for the crisp characterization of the proposal I was making. It is correct. I was naturally thinking of the SHRINK mode, since it involves the fewest changes to the MPI standard itself. Folks at the Forum also had similar thoughts (e.g., why does MPI_Comm_size() still return a count that includes failed processes?).
>
> Cf. http://www.netlib.org/utk/people/JackDongarra/PAPERS/isc2004-FT-MPI.pdf
>
> “4.2 FTMPI_COMM_MODE_SHRINK
>
> In this communicator mode, the ranks of MPI processes before and after  recovery might change, as well as the size of MPI COMM WORLD does change. The appealing part of this communicator mode however is, that all functions specified in MPI-1 and MPI-2 are still valid without any further modification, since groups and communicators do not have wholes (sic) and blank processes.”
>
> We can discuss further tomorrow whether we could go with the SHRINK mode (simplifying the proposal). From what I read in the paper, they report being able to convert a fault-tolerant master/worker application to use both modes.
>
> Thanks!
>
> ===
> Sayantan Sur, Ph.D.
> Intel Corp.
>
> From: mpi3-ft-bounces at lists.mpi-forum.org [mailto:mpi3-ft-bounces at lists.mpi-forum.org] On Behalf Of Josh Hursey
> Sent: Tuesday, January 24, 2012 10:39 AM
> To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
> Subject: [Mpi3-ft] Choosing a BLANK or SHRINK model for the RTS proposal
>
> First let me say that I greatly appreciate the effort of Sayantan and others to push us towards considering alternative techniques, and stimulating discussion about design decisions. This is exactly the type of discussion that needs to occur, and the working group is the most appropriate place to have it.
>
>
> One of the core suggestions of Sayantan's proposal is the switch from (using FT-MPI's language) a model like BLANK to a model like SHRINK. I think many of the other semantics are derived from this core shift, so we should probably focus the discussion on this point first.
>
>
> The current RTS proposal allows a communicator to contain failed processes and to continue to be used for all operations, including collectives, after those failures are acknowledged. This closely matches FT-MPI's BLANK mode. The user can call MPI_Comm_split() to get the equivalent of SHRINK if they need it.
>
> The suggested modification allows for only/primarily a SHRINK-like mode in order to have full functionality in the communicator. As discussed on the previous call, one can get the BLANK mode by adding a library on top of MPI that virtualizes the communicators to create shadow communicators. The argument for the SHRINK mode is that it is -easier- to pass/explain.
>
> The reason we chose BLANK was derived from the literature reviewed, the code examples available, and feedback from application groups, from which there seemed to be a strong demand for the BLANK mode. In fact, I had a difficult time finding good use cases for the SHRINK mode (I'm still looking, though). Additionally, a BLANK mode also seems to make it easier to reason about process recovery. To reason about process recovery (something like FT-MPI's REBUILD mode), one needs to be able to reason about the missing processes without changing the identities of the existing processes, which can be difficult in a SHRINK mode. So from this review it seemed that there was an application demand for a BLANK-like mode in the RTS proposal.
>
> In light of this background, it is concerning to have to advise these application users that MPI will not provide the functionality they require, and that they must instead depend upon a non-standard, third-party library, because we shied away from doing the right thing by them. This background is informed by my review of the state of the art, but others may have alternative evidence or commentary to present that could sway the discussion. It just seems like a weak argument that we should do the easy thing at the expense of doing the right thing by the application community.
>
>
> I certainly meant this email to stimulate conversation for the teleconference tomorrow. In particular, I would like those on the list with experience building ABFT/Natural FT applications/libraries (UTK?) to express their perspective on this topic. Hopefully they can help guide us towards the right solution, which might just be a SHRINK-like mode.
>
>
> -- Josh
>
> --
> Joshua Hursey
> Postdoctoral Research Associate
> Oak Ridge National Laboratory
> http://users.nccs.gov/~jjhursey
> _______________________________________________
> mpi3-ft mailing list
> mpi3-ft at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft
>
>
> _______________________________________________
> mpi3-ft mailing list
> mpi3-ft at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft
>
>
> --
>   Terry D. Dontje | Principal Software Engineer
> Developer Tools Engineering | +1.781.442.2631
>  Oracle - Performance Technologies
>  95 Network Drive, Burlington, MA 01803
> Email terry.dontje at oracle.com
>
>
>
>
> _______________________________________________
> mpi3-ft mailing list
> mpi3-ft at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft
>



-- 
Joshua Hursey
Postdoctoral Research Associate
Oak Ridge National Laboratory
http://users.nccs.gov/~jjhursey