<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Dec 14, 2015 at 7:11 AM, Bland, Wesley <span dir="ltr"><<a href="mailto:wesley.bland@intel.com" target="_blank">wesley.bland@intel.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Jeff. Thanks for the comments.<br>
<br>
On Dec 14, 2015, at 8:19 AM, Jeff Hammond <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>> wrote:<br>
<span class=""><br>
<br>
<br>
On Fri, Dec 11, 2015 at 2:09 PM, Bland, Wesley <<a href="mailto:wesley.bland@intel.com">wesley.bland@intel.com</a>> wrote:<br>
Hi WG,<br>
<br>
I’ve put together some notes on the goings on around our working group at the forum. You can find them all on the wiki page:<br>
<br>
<a href="https://github.com/mpiwg-ft/ft-issues/wiki/2015-12-07" rel="noreferrer" target="_blank">https://github.com/mpiwg-ft/ft-issues/wiki/2015-12-07</a><br>
<br>
Since I know that the click-through is not always practical, I’ll copy them below.<br>
<br>
Thanks,<br>
Wesley<br>
<br>
====<br>
<br>
WG Meeting<br>
<br>
* Went over reading for plenary time<br>
* Aurelien and Keita presented some of the results of the ULFM BoF at SC<br>
* Attendance was great<br>
* There were a few questions and suggestions to improve the proposal.<br>
* Aurelien is creating issues for the suggestions that we will act on.<br>
* We discussed an overall view of what fault tolerance and error handling means in the context of MPI and how we cover each area as a standard<br>
* We divided applications into a few buckets:<br>
* Current applications - This describes the vast majority of applications, which require that all processes remain alive and where recovery tends to be global.<br>
* These apps tend to use/require recovery very similar to checkpoint/restart.<br>
* They probably don't derive a lot of benefit from ULFM-like recovery models, but could potentially benefit from improved error handlers.<br>
<br>
Just remember that there can be ULFM + { hot spares -or- MPI_Comm_spawn(_multiple) } + checkpoint-restart, which preserves the size of the job.<br>
<br>
Lots of those fault-intolerant apps can do something like this with minimal changes. The apps that cannot use ULFM are the ones that require expensive message logging to be able to roll-back to a coherent state.<br>
<br>
</span>Agreed. We had some discussion of this in the room. The gist of the conversation was that between ULFM and existing C/R, most of these cases are covered in a reasonable way.<br>
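<br>
[For concreteness, the ULFM + spawn pattern described above might look roughly like the sketch below. This is written against the MPIX_ prototype interface from the ULFM implementation; the program name, the merge ordering, and the omitted error checking are illustrative, not prescribed.]<br>

```c
/* Sketch: restore the job to its pre-failure size after ULFM detects
 * a failure. MPIX_Comm_shrink is from the ULFM prototype interface;
 * "./a.out" and the lack of error checking are purely illustrative. */
#include <mpi.h>

MPI_Comm repair_world(MPI_Comm broken, int original_size)
{
    MPI_Comm shrunk, intercomm, repaired;
    int alive, nfailed;

    /* Build a new communicator containing only the survivors. */
    MPIX_Comm_shrink(broken, &shrunk);
    MPI_Comm_size(shrunk, &alive);
    nfailed = original_size - alive;

    /* Replace the dead processes to preserve the job size ... */
    MPI_Comm_spawn("./a.out", MPI_ARGV_NULL, nfailed, MPI_INFO_NULL,
                   0, shrunk, &intercomm, MPI_ERRCODES_IGNORE);

    /* ... and flatten survivors + replacements into one communicator.
     * Ranks may be renumbered, so each process then reloads state from
     * a checkpoint before computation resumes. */
    MPI_Intercomm_merge(intercomm, /* high = */ 0, &repaired);

    MPI_Comm_free(&shrunk);
    MPI_Comm_free(&intercomm);
    return repaired;
}
```

[The hot-spare variant skips the spawn and instead promotes pre-allocated idle ranks, but the shrink/merge bookkeeping is much the same.]<br>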
<span class=""><br></span></blockquote><div><br></div><div>Sorry I missed this discussion f2f. I had a reason, but I cannot remember what it was.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
* In-memory Checkpoint/Restart - These apps can use in-memory checkpointing to improve both checkpoint and recovery times. They usually need to replace failed processes, but don't require that all remain alive.<br>
* ULFM is a possibility here, but can result in poor locality without a library that automatically moves processes around after a failure.<br>
<br>
Maybe. I'm not sure good locality exists on fat-tree and dragonfly topologies. And users can always renumber their ranks and move data around if they know topological placement matters.<br>
<br>
</span>That’s true. What we were talking about was having MPI be able to do that for you. It’s always possible for applications to do this themselves (as Aurelien pointed out in the room), but sometimes it’s such a pain that if MPI already has that information, it might be better to just provide it.<br>
<span class=""><br>
<br>
* Reinit / multi-init/finalize with improved PMPI would also work. There are some ongoing or completed proposals that could also provide the needed functionality. In these proposals, most of the locality problems would probably be pushed into the MPI library when it is initialized again.<br>
* New applications - These apps tend to be able to run with fewer processes. They cover apps like tasking models, master/worker apps, and traditionally non-MPI apps that might be interested in the future (Hadoop, etc.).<br>
<br>
</span>Lots of current applications are master-worker…<br>
<br>
Sure. New here could just mean less than 30 years old. :)<br>
<span class=""><br></span></blockquote><div><br></div><div>Are you calling me old? :-)</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
* ULFM generally would apply well to these applications as locality is less important if processes are not being replaced.<br>
* There are also errors that do not include process failures:<br>
* Memory errors<br>
<br>
</span>It is useful to distinguish causes and effects here. Memory errors are a cause. Process failure is one effect. Another effect is non-fatal data corruption, which may or may not be silent. Currently, we see that memory errors that are detected manifest as process failures, but hopefully someday the OS/RT people will figure out better things to do than just call abort(). Ok, to be fair, it's probably the firmware/driver people throwing the self-destruct lever…<br>
<br>
I should have been more precise. The errors we were talking about here are the ones that don’t result in process failure. We’re talking more about SDC types of errors.<br></blockquote><div><br></div><div>Ah ok. Thanks.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<span class=""><br>
<br>
* These could be detected by anything, but ULFM revoke could help with notification.<br>
* Lots of SDC research is out there that sits on top of MPI.<br>
* Network errors<br>
<br>
I am curious how often this actually happens anymore. Aren't all modern networks capable of routing around permanently failed links? A switch failure should be fatal but that happens how often?<br>
<br>
</span>Agreed. That’s why we didn’t focus on it too much. This is something that we generally push down to the implementation (or lower).<br>
<span class=""><br>
<br>
* These tend to be masked by the implementation or promoted to process failures<br>
* Resource exhaustion<br>
* These sorts of errors cover out of memory, out of context IDs, etc.<br>
* They can be improved with better error handlers/codes/classes<br>
<br>
Yes, and this will be wildly useful. Lots of users lose jobs because they do dumb stuff with communicators that could be mitigated with slower fallback implementations over p2p (please ignore the apparent false-dichotomy here).<br>
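<br>
[As a concrete illustration of the kind of mitigation described above — the function name and the fallback strategy are mine, not part of any proposal; only existing MPI-3 error handling is used:]<br>

```c
/* Sketch: treat communicator-creation failure as recoverable instead
 * of fatal. The fallback of reusing the parent communicator with
 * tag-based separation is illustrative only. */
#include <mpi.h>
#include <stdio.h>

int dup_or_fallback(MPI_Comm parent, MPI_Comm *out)
{
    char msg[MPI_MAX_ERROR_STRING];
    int rc, eclass, len;

    /* Return errors to the caller rather than aborting the job. */
    MPI_Comm_set_errhandler(parent, MPI_ERRORS_RETURN);

    rc = MPI_Comm_dup(parent, out);
    if (rc != MPI_SUCCESS) {
        MPI_Error_class(rc, &eclass);
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Comm_dup failed (class %d): %s\n",
                eclass, msg);
        /* Out of context IDs or memory: fall back to sharing the
         * parent communicator, separating traffic by tag over p2p. */
        *out = parent;
    }
    return rc;
}
```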
<br>
* Discussed some new topics related to error handling and error codes/classes<br>
* Pavan expressed interest in error codes saying whether they were catastrophic or not.<br>
* This resulted in <a href="https://github.com/mpi-forum/mpi-issues/issues/28" rel="noreferrer" target="_blank">mpi-forum/mpi-issues#28</a>, where we add a new call MPI_ERROR_IS_CATASTROPHIC.<br>
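<br>
[A hypothetical usage sketch: the call is only proposed at this point, so the signature and semantics below are guesses for illustration, as is the helper recover_locally().]<br>

```c
/* Hypothetical: decide whether the library is still usable after an
 * error. MPI_Error_is_catastrophic and recover_locally() do not exist
 * today; they stand in for whatever the issue eventually defines. */
int rc = MPI_Send(buf, count, MPI_DOUBLE, dest, tag, comm);
if (rc != MPI_SUCCESS) {
    int fatal;
    MPI_Error_is_catastrophic(rc, &fatal);  /* guessed signature */
    if (fatal)
        MPI_Abort(comm, rc);   /* MPI state undefined: give up */
    else
        recover_locally(rc);   /* MPI still defined: keep going */
}
```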
<br>
<a href="https://github.com/mpiwg-ft/ft-issues/wiki/2015-12-07#plenary-time" rel="noreferrer" target="_blank">Plenary Time</a><br>
<br>
* Read the error handler cleanup tickets <a href="https://github.com/mpi-forum/mpi-issues/issues/1" rel="noreferrer" target="_blank">mpi-forum/mpi-issues#1</a> and <a href="https://github.com/mpi-forum/mpi-issues/issues/3" rel="noreferrer" target="_blank">mpi-forum/mpi-issues#3</a>.<br>
* The forum didn't like that we removed all of the text about general errors. They considered some of it still valuable and said it should be updated instead. In particular, the example about MPI_RSEND could still be applicable if the implementation decides that it wants to return an error to the user because the MPI_RECV was not posted.<br>
* We need to add text for MPI_INTERCOMM_CREATE.<br>
* A few other minor things were added directly to the pull request.<br>
* Read the MPI_COMM_FREE advice ticket.<br>
* No concerns, will vote at next meeting.<br>
* Presented the plenary about catastrophic errors.<br>
* A few concerns were raised during the plenary. The main one was from Bill, who said we should look at how other standards describe non-fatal errors when writing the text here.<br>
<br>
I'm skeptical that this is going to help us, but here are some references:<br>
- Fortran 2008 section 14.6 Halting (not going to be useful to us, although its utility for users is demonstrated in NOTE 14.16); section 2.3.5 describes how any error propagates to all images (in a coarray program). There is no notion of fault-tolerance here.<br>
- UPC and OpenSHMEM say nothing about fault-tolerance. I suspect that UPC never will and OpenSHMEM will try to learn from the MPI Forum.<br>
- C++14 draft (N3936) chapter 19 is all about exceptions and errors. 30.2.2 talks about thread failure.<br>
- IB 1.0 7.12.2 and 7.12.3, among other places.<br>
<br>
</span>Most specifications that I've read don't have all of the baggage we do, but only because most of their objects are not so stateful or long-lived. Now, if MPI was based upon connections rather than communicators…<br>
<br>
Thanks for the pointers. Maybe after the new year, we can do some homework and see what lessons we can take from here. Note that we weren’t talking about (non)catastrophic in terms of fault tolerance. It was mostly just for correct error handling and to let MPI remain defined in a few more error states than it already does (none).<br>
<br></blockquote><div><br></div><div>>0 is never a bad thing :-)<br><br>Jeff</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thanks,<br>
Wesley<br>
<span class=""><br>
<br>
* Ryan asked about the general usefulness of this proposal in terms of how an application would be able to respond to information about whether an error is fatal or not.<br>
* He asserts that error classes should generally be descriptive enough without it and if they aren't, the error class itself should be improved.<br>
<br>
Best,<br>
<br>
Jeff<br>
<br>
_______________________________________________<br>
mpiwg-ft mailing list<br>
</span><a href="mailto:mpiwg-ft@lists.mpi-forum.org">mpiwg-ft@lists.mpi-forum.org</a><mailto:<a href="mailto:mpiwg-ft@lists.mpi-forum.org">mpiwg-ft@lists.mpi-forum.org</a>><br>
<span class=""><a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-ft" rel="noreferrer" target="_blank">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-ft</a><br>
<br>
<br>
<br>
--<br>
Jeff Hammond<br>
</span><a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a><br>
<a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a><br>
<div class="HOEnZb"><div class="h5"></div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>
</div></div>