[Mpi3-ft] Summary of today's call

Edgar Gabriel gabriel at cs.uh.edu
Mon Mar 31 11:51:49 CDT 2008


The section on fault handling on the wiki does not mention error 
classes. Do we want to address that? That is, does the concept make 
sense and should we suggest keeping it, do we modify the concept since 
the original idea and its benefits are not entirely clear, or do we 
want to get rid of error classes altogether?

Second, if I remember correctly, MPI_FILE_NULL has a different default 
error handler than MPI_COMM_WORLD. Do we want to mention somewhere in 
the document that different objects might have different default error 
handlers attached? You have a distinction between COMM_WORLD and 
COMM_SELF -- which is good, but maybe we should explicitly mention 
other objects as well...


Supalov, Alexander wrote:
> Dear Rich,
> Thanks. We had a scheduled site maintenance and my phone connection was
> terminated for some reason. Sorry for that.
> Sure, I can take on fleshing out the rest of the fault handling
> proposal. I invite everybody to contribute to the respective Wiki page
> in the meantime (see
> http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Fault%20Handling),
> and/or send comments to me or this list.
> Best regards.
> Alexander
> -----Original Message-----
> From: mpi3-ft-bounces at lists.mpi-forum.org
> [mailto:mpi3-ft-bounces at lists.mpi-forum.org] On Behalf Of Richard Graham
> Sent: Saturday, March 29, 2008 12:29 AM
> To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
> Subject: [Mpi3-ft] Summary of today's call
> Here is a summary of the action items from today's call:
>  - The consumer/producer scenario is ready for discussion before the
> full forum, once we have nailed down the approach to error handling
> within MPI.  The plan is for Edgar to bring this to the full forum for
> discussion at the April meeting.
>  - The approach to error handling is almost ready to bring to the full
> forum.  Several items came up in today's call (async notification and
> handler stacking) that still need to be fleshed out and fully defined.
> Alexander, since you were kicked off the call - would you be willing to
> coordinate getting this ready for discussion in the full forum setting
> at the April meeting?  We should discuss this again on the call in 2
> weeks (4/11/2008) and go over the remaining items.
>  - Josh will work with ? to get the proposal for non-blocking dynamic
> process creation ready to go.  We should aim to continue discussions
> on this next time, but this may not be ready by the April meeting.  If
> the forum is amenable to this proposal, it should be integrated with
> the rest of the dynamic process control chapter.
> Also, time permitting, we will discuss the data piggybacking proposal
> on the next call.
> What did I forget?
> Rich
> _______________________________________________
> mpi3-ft mailing list
> mpi3-ft at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft

Edgar Gabriel
Assistant Professor
Parallel Software Technologies Lab      http://pstl.cs.uh.edu
Department of Computer Science          University of Houston
Philip G. Hoffman Hall, Room 524        Houston, TX-77204, USA
Tel: +1 (713) 743-3857                  Fax: +1 (713) 743-3335
