[Mpi3-ft] Piggybacking API

Supalov, Alexander alexander.supalov at intel.com
Tue Apr 22 05:57:55 CDT 2008


Possibly less so if this stuff is put into an optional subset, I hope.
In that case only those who care will, er, care. 

-----Original Message-----
From: mpi3-ft-bounces at lists.mpi-forum.org
[mailto:mpi3-ft-bounces at lists.mpi-forum.org] On Behalf Of Terry Dontje
Sent: Tuesday, April 22, 2008 12:55 PM
To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
Subject: Re: [Mpi3-ft] Piggybacking API

Martin Schulz wrote:
> At 06:36 AM 4/21/2008, Terry Dontje wrote:
>   
>> So I reread the piggybacking document on the wiki.  I am not thrilled
>> with the amount of new APIs this would be adding to the standard, but I
>> can also see the point of the paper.  I am curious how the new API is
>> expected to be used.  The proposal says this API is needed for
>> user-level fault tolerance solutions.  So do we expect a user to change
>> all application calls to the MPI library to use the PB calls?  I wonder
>> if a more general solution that doesn't require a direct change to the
>> API would work.
>>     
>
> Layers like the one for fault tolerance will have to intercept
> all MPI communication calls anyway (otherwise it will not be
> possible to capture the state of the MPI layer) and hence the
> most likely mechanism to implement them is the PMPI layer. At
> this point it will be very easy and fully transparent to also
> replace communication calls with the piggyback counterparts.
> The application would not see the difference (if we get the API
> right :) ).
>
>
>   
>> I wonder if there might be a way one could register piggybacking with
>> a communicator and somehow have the actual piggybacking occur as a
>> callback from an implementation's messaging layer.
>>     
>
> Registering piggyback data beforehand (whether per communicator
> or globally) has the problem that in a fully multithreaded use
> of MPI piggyback data can no longer be associated with a single
> MPI call/message (unless we make the piggyback data thread local,
> but this would require a concept of threads in MPI and I don't
> think we should go there).
>
>   
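To make that interception path concrete, here is a minimal sketch of the
PMPI-based layering described above.  It is only an illustration: the
reserved tag, the ft_pb_t payload, and the trick of shipping the piggyback
data as a separate message are assumptions made for the sketch, not part of
the proposal; a real layer would call the proposed piggyback counterpart of
MPI_Send at the marked spot instead.

#include <mpi.h>

#define FT_PB_TAG 0x7ead        /* reserved tag for piggyback data (illustrative) */

typedef struct { int epoch; } ft_pb_t;  /* whatever state the FT layer tracks */

/* The FT layer interposes on MPI_Send via the profiling interface: the
 * application keeps calling MPI_Send and never sees the difference. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    ft_pb_t pb = { 0 };         /* filled in from the FT layer's bookkeeping */

    /* Ship the piggyback data alongside the user message.  With the
     * proposed API, this pair of calls would be replaced by a single
     * call to the piggyback-aware send counterpart. */
    int rc = PMPI_Send(&pb, (int)sizeof pb, MPI_BYTE, dest, FT_PB_TAG, comm);
    if (rc != MPI_SUCCESS)
        return rc;

    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

The matching MPI_Recv wrapper would pull the piggyback message off first;
with a genuine piggyback API, both sides collapse back into single calls.
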
Fair enough; however, with the numerous variations of send calls it is
already confusing which ones to use.  IMO, adding a PB API
for each of the send APIs is just asking for more user confusion.
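
For contrast, a rough sketch of the registration-style alternative mentioned
above.  Every MPIX_* name here is invented purely for illustration (nothing
like it exists in MPI or in the proposal), and the stub is there only so the
sketch compiles; a real version would live inside the MPI implementation and
be invoked from its messaging layer.

#include <mpi.h>

/* Invented callback type: fill 'buf' with at most 'max_bytes' of piggyback
 * data for a message about to leave on 'comm'; return the bytes used. */
typedef int (*MPIX_Pb_pack_fn)(MPI_Comm comm, int dest,
                               void *buf, int max_bytes, void *state);

/* Invented registration call plus a user-space stub.  A real version would
 * attach the callback to the communicator inside the library. */
static MPIX_Pb_pack_fn pb_cb;
static void *pb_state;

int MPIX_Comm_register_pb(MPI_Comm comm, MPIX_Pb_pack_fn cb, void *state)
{
    (void)comm;
    pb_cb = cb;
    pb_state = state;
    return MPI_SUCCESS;
}

/* FT layer's callback, e.g. attaching the current checkpoint epoch. */
static int ft_pack(MPI_Comm comm, int dest, void *buf, int max_bytes, void *state)
{
    (void)comm; (void)dest; (void)max_bytes;
    *(int *)buf = *(int *)state;
    return (int)sizeof(int);
}

/* Usage: register once, then keep calling plain MPI_Send/MPI_Recv:
 *     static int epoch;
 *     MPIX_Comm_register_pb(MPI_COMM_WORLD, ft_pack, &epoch);
 */

This keeps the number of user-visible calls down, but the thread-safety
objection above still applies: with a per-communicator registration, the
callback cannot tell which thread's call/message it is packing data for.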

--td
> Martin
>
>
>
>   
>> Just a thought,
>>
>> --td
>>     
>
>
> _______________________________________________________________________
> Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulz6
> CASC @ Lawrence Livermore National Laboratory, Livermore, USA  
>
