[Mpi3-tools] lightweight event log

Martin Schulz schulzm at llnl.gov
Tue Jul 7 21:24:41 CDT 2009


Hi Marc-Andre,

On Jul 3, 2009, at 12:30 AM, Marc-Andre Hermanns wrote:

> Hi Martin,
>
>> Note, though, that you (as a tool writer) can always combine this
>> logging interface with a PMPI logger and can grab any cleanly defined
>> logging information from there as well.
>
> Understood. What is still unclear to me, however, is how to incorporate
> the log information at runtime. Usually, I enter the wrapper, create a
> timestamped enter-record, do whatever I want/need to do, call the PMPI
> call, and create a timestamped exit-record. Now, in between I get another
> timestamped buffer that can only be interpreted later on? Then I could
> also just save the 'blob' with my trace and interpret the contents later on.

Yes, that is an option. In the pre-call wrapper, you can enable the
fine-grained logging, and in the post-call wrapper you can then access
the log buffer with all events that happened in between. Having an
option at that point to save the buffer is a good idea, but it would
require the post-processing tool to link against the same MPI on the
same machine, since only then will you get the accessor functions that
can interpret the buffer.
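
Just to make that wrapper flow concrete, here is a rough sketch. All
MPIX_Eventlog_* names and the tool_* helpers are purely hypothetical
placeholders for whatever interface we end up with, not anything that
exists today:

  /* Sketch of a PMPI wrapper combining the tool's own enter/exit
   * records with the proposed fine-grained event log.
   * MPIX_Eventlog_* and tool_* are hypothetical names.              */
  #include <mpi.h>

  int MPI_Recv(void *buf, int count, MPI_Datatype type, int src,
               int tag, MPI_Comm comm, MPI_Status *status)
  {
    MPIX_Eventlog log;                   /* hypothetical log handle    */
    tool_record_enter("MPI_Recv");       /* tool's timestamped enter   */
    MPIX_Eventlog_start(comm, &log);     /* enable fine-grained logging */

    int err = PMPI_Recv(buf, count, type, src, tag, comm, status);

    MPIX_Eventlog_stop(log);             /* freeze the buffer          */
    void *blob; size_t len;
    MPIX_Eventlog_raw(log, &blob, &len); /* raw, MPI-internal format   */
    tool_save_blob(blob, len);           /* save with the trace, or    */
                                         /* decode now via accessors   */
    MPIX_Eventlog_free(&log);
    tool_record_exit("MPI_Recv");        /* tool's timestamped exit    */
    return err;
  }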

>
> I can envision recording timespans with this to identify where the
> communication stalled or was making progress. In relation to the same
> data on other processes, this might become handy in reasoning about the
> communication pattern used.

Yes, that's a good use case.

>
>>>> 2) it is lightweight for two reasons:
>>>> (1) the data logged may be as small and simple as the
>>>> implementation chooses and (2) the data may be logged in an
>>>> internal, time- and space-efficient format.  A separate set of
>>>> routines is provided to interpret the logged data.  For example,
>>>> the timestamps may be stored in an internal format (such as the
>>>> bits from a clock register) and converted to seconds only when
>>>> the user evaluates the logged data.
>
>>> Combining your answers to points 1 and 2, I envision pre-constructing
>>> memory layouts (like, or even with, MPI datatypes) for the relevant
>>> information, so that a tool can basically first tell the
>>> implementation what it wants to be logged, then define handles
>>> for the data layout, and
>>
>> I am not sure if it's worth having the user pick what should be
>> recorded and what not, since this will be very limited anyway.
>
> The user in this case is not necessarily the end-user but a tool. As you
> said above, a tool might want to combine these logs with the information
> gathered in a wrapper. The tool then might have an interest in defining
> a priori what it wants to gather, to minimize overhead. However, you
> might be right that this flexibility could incur more overhead than a
> fixed, predefined set of logging variables.

This is a question for the MPI implementors about what they prefer or
think is faster. It also depends on how many things/points an
implementation will log. Basically, it is a tradeoff between recording
all information and testing which information should be recorded. I
think some MPIs like MPICH already do something like this internally in
debug builds, so it would be good to get some feedback from them on
this issue.
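
To make the pre-selection alternative concrete (again with purely
hypothetical MPIX_Eventlog_* names, here extending the start call from
the sketch above with a selection argument), it could look roughly like
this; the alternative is that the implementation unconditionally
records its full fixed set:

  /* Hypothetical sketch: the tool pre-selects event classes before
   * logging starts, so the implementation can filter once up front
   * instead of testing per recorded event. All names are made up.   */
  MPIX_Eventlog_config cfg;
  MPIX_Eventlog_config_create(&cfg);
  MPIX_Eventlog_config_add(cfg, MPIX_EVENT_PROGRESS); /* progress-engine activity  */
  MPIX_Eventlog_config_add(cfg, MPIX_EVENT_QUEUE);    /* unexpected/posted queues  */
  MPIX_Eventlog_start(comm, cfg, &log);               /* record only these classes */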

>
> Do I understand it correctly that the internal data should be converted
> on reading the buffer, or is a post-mortem interface possibly the better
> choice? That way, any conversion can be delayed until we are no longer
> measuring.

I think either way is fine. As mentioned above, we can add a mechanism
to simply store the whole log in raw form into a buffer provided by the
tool and then allow later inspection of it. However, as also noted
above, the tool will have to link against that particular MPI version
on that particular platform to allow a correct interpretation of the
data. This should be doable, though, at least for some post-processing
converter.
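
Such a converter might look roughly like the sketch below; it has to be
linked against the same MPI library that produced the blob. The
MPIX_Eventlog_* cursor/accessor names are again hypothetical, only
meant to illustrate that timestamps are converted from the internal
clock format to seconds only at this late stage:

  /* Sketch of a post-mortem converter: reads the raw blob that the
   * tool saved with its trace and decodes it through hypothetical
   * accessor functions of the producing MPI library.                */
  #include <mpi.h>
  #include <stddef.h>
  #include <stdio.h>

  void convert_blob(const void *blob, size_t len)
  {
    MPIX_Eventlog_cursor cur;             /* hypothetical cursor      */
    MPIX_Eventlog_open_raw(blob, len, &cur);

    int kind;
    double time_sec;
    while (MPIX_Eventlog_next(cur, &kind, &time_sec) == MPI_SUCCESS) {
      /* Internal timestamps become seconds only here, well after
       * the measurement itself has finished.                        */
      printf("event %d at %.9f s\n", kind, time_sec);
    }
    MPIX_Eventlog_close(&cur);
  }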

Martin


>
> Best regards,
> Marc-Andre
> -- 
> Marc-Andre Hermanns
> Juelich Supercomputing Centre
> Institute for Advanced Simulation
> Forschungszentrum Juelich GmbH
> D-52425 Juelich
> Germany
>
> Phone : +49-2461-61-2054
> Fax   : +49-2461-61-6656
> eMail : m.a.hermanns at fz-juelich.de
> WWW   : http://www.fz-juelich.de/jsc/
>
> JSC is the coordinator of the
> John von Neumann Institute for Computing
> and member of the
> Gauss Centre for Supercomputing
>

_______________________________________________________________________
Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulz6
CASC @ Lawrence Livermore National Laboratory, Livermore, USA







