[Mpi3-ft] Transactional Messages

Richard Graham rlgraham at ornl.gov
Fri Feb 22 21:55:22 CST 2008


Just to follow up, I think that the "right" thing to do with respect to some sort of transactional model is to have a standard way to request that such communications take place - probably at init time. We have had such an MPI implementation running in production for several years on a multi-thousand-process cluster, and the only thing that needs to be exposed to the users is the ability to turn the functionality on and off - all the rest is taken care of just fine within the context of the MPI 2.0 standard, and is 100% standard compliant.
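
For concreteness, here is a minimal sketch of what that kind of init-time on/off switch could look like from the application's side. The environment variable name below (MPI_TRANSACTIONAL) is purely hypothetical, since the implementation Rich describes is not spelled out here; the point is only that the toggle lives entirely outside the application code, and the library does all the recovery work underneath the standard interfaces.

/* A minimal sketch of "expose only an on/off switch at init time".
 * The environment variable name is hypothetical: the idea is that the
 * implementation itself consumes the switch inside MPI_Init and
 * silently enables acknowledged/replayed delivery for the whole job,
 * so applications stay unchanged and fully MPI 2.0 compliant. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* A hypothetical implementation would read the switch here, during
     * MPI_Init, and turn transactional delivery on for the whole job. */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The application (or a tool) can still see whether the switch was
     * thrown, but it never has to participate in the recovery itself. */
    const char *txn = getenv("MPI_TRANSACTIONAL");
    if (rank == 0)
        printf("transactional messaging requested: %s\n",
               (txn && txn[0] == '1') ? "yes" : "no");

    MPI_Finalize();
    return 0;
}

A job would then be launched with or without the switch set in the environment (exactly how the launcher propagates it is implementation specific); nothing in the source changes either way.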

This does not deal with hints on the network "state".

Rich


On 2/22/08 10:22 PM, "Greg Bronevetsky" <bronevetsky1 at llnl.gov> wrote:

> 
> 
>> >I've read the Transactional Messages proposal and I am a little confused
>> >here.  Is there a reason why we believe that message faults themselves
>> >should be handled by the application layer instead of the MPI library?
>> >Using the latter model allows one to reduce the error conditions
>> >percolated up to the user to revolve around loss of the actual
>> >connection to a process (or the actual process itself).
> 
> Actually, one aspect of the proposal is that I made sure not to
> define message faults at a low level. They may be any low-level
> problems that the implementation cannot efficiently deal with on its
> own and that are best represented to the application as message
> drops. One example of this may be process failures. Although we will
> probably want to define a special notification mechanism to expose
> those failures to the application, we will also need a way to expose
> the failures of any communication that involves the process. Another
> example may be simplified MPI implementations that do not have
> facilities for resending messages because the probability of an error
> is rather low and performance is too important. In fact, applications
> that can tolerate message drops may explicitly choose those MPI
> implementations for the performance gains.
> 
> Greg Bronevetsky
> Post-Doctoral Researcher
> 1028 Building 451
> Lawrence Livermore National Lab
> (925) 424-5756
> bronevetsky1 at llnl.gov
> _______________________________________________
> Mpi3-ft mailing list
> Mpi3-ft at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft
> 
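
Greg's last point above - that applications able to tolerate message drops might deliberately pick a non-resending MPI - is easiest to picture with a small sketch. Everything below (the tags, the timeout, the ack/retry protocol itself) is purely illustrative and is not part of the proposal; it only shows the kind of recovery an application could do for itself once drops are surfaced to it rather than hidden by the library.

/* Illustrative only: an application-level ack/retry protocol built from
 * standard MPI point-to-point calls.  The sender keeps resending until
 * it sees an application-level ack; a real protocol would also carry
 * sequence numbers so the receiver can discard duplicates. */
#include <mpi.h>
#include <stdio.h>

#define DATA_TAG 100
#define ACK_TAG  101

/* Send `value` to `dest`, resending every `timeout` seconds until an
 * ack arrives.  Returns the number of attempts used. */
static int send_with_retry(int value, int dest, double timeout, MPI_Comm comm)
{
    int attempts = 0, acked = 0, ack;

    while (!acked) {
        MPI_Request rreq;
        MPI_Send(&value, 1, MPI_INT, dest, DATA_TAG, comm);
        attempts++;

        /* Wait up to `timeout` seconds for the ack before resending. */
        MPI_Irecv(&ack, 1, MPI_INT, dest, ACK_TAG, comm, &rreq);
        double start = MPI_Wtime();
        while (!acked && MPI_Wtime() - start < timeout)
            MPI_Test(&rreq, &acked, MPI_STATUS_IGNORE);

        if (!acked) {
            /* The ack did not arrive in time.  Cancel the receive; if
             * the ack raced in anyway, count the message as delivered. */
            MPI_Status st;
            int cancelled;
            MPI_Cancel(&rreq);
            MPI_Wait(&rreq, &st);
            MPI_Test_cancelled(&st, &cancelled);
            if (!cancelled)
                acked = 1;
        }
    }
    return attempts;
}

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int tries = send_with_retry(42, 1, 1.0, MPI_COMM_WORLD);
        printf("delivered after %d attempt(s)\n", tries);
    } else if (rank == 1) {
        int value, ack = 1;
        MPI_Recv(&value, 1, MPI_INT, 0, DATA_TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&ack, 1, MPI_INT, 0, ACK_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

On today's reliable MPI implementations the ack arrives on the first attempt; the value of the pattern only shows up on a hypothetical non-resending implementation of the kind Greg describes, where a dropped message simply never matches and the sender tries again.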

