[Mpi3-tools] lightweight event log

Marc-Andre Hermanns m.a.hermanns at fz-juelich.de
Wed Jul 8 02:15:04 CDT 2009


Hi Martin,

>>> Note, though, that you (as a tool writer) can always combine this
>>> logging interface with a PMPI logger and can grab any cleanly defined
>>> logging information from there as well.

>> Understood. What is still unclear to me, however, is how to 
>> incorporate the log information at runtime. Usually, I enter the
>> wrapper, create a timestamped enter-record, do whatever I want/need
>> to do, call the PMPI routine, and create a timestamped exit-record.
>> Now, in between I get another timestamped buffer that can only be
>> interpreted later on? Then I could also just save the 'blob' with
>> my trace and interpret the contents later on.

> Yes, that is an option. During the pre-call wrapper, you can enable
> the fine grain logging and in the post-call wrapper you can then
> access the log buffer with all events that happened in between.
> Having an option at that time to save that buffer is a good idea, but
> would require the post-processing tool to link against the same MPI
> on the same machine, since only then will you get the accessor
> functions that can interpret the buffer.
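
To make the flow we are both describing concrete, I picture something
like the sketch below. All MPIX_Log_* names and the tool_* helpers are
just placeholders I made up for illustration, not part of any actual
proposal; only the PMPI call itself is standard MPI:

    /* Hypothetical wrapper flow: enable fine-grain logging before the
     * PMPI call, grab the raw log buffer afterwards, and save it as an
     * opaque blob next to the enter/exit records of the trace. */
    #include <mpi.h>
    #include <stddef.h>

    /* placeholders for the proposed fine-grain logging interface */
    int MPIX_Log_enable(void);
    int MPIX_Log_disable(void);
    int MPIX_Log_get_buffer(void **blob, size_t *size);

    /* placeholders for tool-internal helpers */
    void tool_record_enter(const char *name);
    void tool_record_exit(const char *name);
    void tool_store_blob(const void *blob, size_t size);

    int MPI_Send(void *buf, int count, MPI_Datatype type, int dest,
                 int tag, MPI_Comm comm)
    {
        void  *blob;
        size_t size;
        int    ret;

        tool_record_enter("MPI_Send");      /* timestamped enter-record */
        MPIX_Log_enable();

        ret = PMPI_Send(buf, count, type, dest, tag, comm);

        MPIX_Log_get_buffer(&blob, &size);  /* events logged in between */
        tool_store_blob(blob, size);        /* keep raw, interpret later */
        MPIX_Log_disable();

        tool_record_exit("MPI_Send");       /* timestamped exit-record */
        return ret;
    }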

For a tool like ours this would not be a problem, as we distribute in
source form anyway; I understand, however, that this is not an option
for every tool vendor. I think it should be doable, though, to have
'adapters' for a tool distributed in binary form: an adapter is
compiled at installation time and contains no 'business logic', just
the interface to the MPI library.
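
To sketch what I mean by such an adapter (all names below, including
the MPIX_Log_* accessors and the tool_event.h header, are made up; the
only point is the separation of concerns):

    /* adapter.c -- compiled at installation time against the local MPI.
     * It holds no tool logic, only the translation from the MPI-specific
     * log blob into a neutral event record defined by the tool's public
     * header.  tool_event_t is assumed to carry .time and .kind fields. */
    #include <stddef.h>
    #include "tool_event.h"   /* neutral record shipped with the binary tool */

    /* hypothetical accessors provided by the installed MPI */
    int    MPIX_Log_next(const void *blob, size_t size, size_t *pos);
    double MPIX_Log_timestamp(const void *blob, size_t pos);
    int    MPIX_Log_kind(const void *blob, size_t pos);

    /* the single entry point the binary tool loads from the adapter */
    int adapter_decode(const void *blob, size_t size,
                       void (*emit)(const tool_event_t *ev, void *arg),
                       void *arg)
    {
        size_t pos = 0;

        while (pos < size) {
            tool_event_t ev;
            ev.time = MPIX_Log_timestamp(blob, pos);
            ev.kind = MPIX_Log_kind(blob, pos);
            emit(&ev, arg);                 /* hand the event to the tool */
            if (MPIX_Log_next(blob, size, &pos) != 0)
                break;
        }
        return 0;
    }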

>>> I am not sure if it's worth having the user pick what should be
>>> recorded and what not, since this will be very limited anyway.

>> The user in this case is not necessarily the end-user but a tool.
>> As you said above, a tool might want to combine these logs with the
>>  information gathered in a wrapper. The tool then might have an
>> interest in defining a-priori what it wants to gather, to minimize
>> overhead. However, you might be right that this flexibility might
>> incur more overhead than a few predefined sets of logging
>> variables.
> 
> This is a question for the MPI implementors about what they prefer or
> think is faster. It also depends on how many things/points an
> implementation will log. Basically it is a tradeoff between recording
> all information and testing which information should be recorded.

I think anything that can be done to reduce trace size is worth
discussing.

> I think some MPIs like MPICH already do something like this internally
> in debug builds, so it would be good to get some feedback from them
> on this issue.
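
Just to make concrete what I meant above by defining a priori what to
gather (again, all names are invented): the tool would pick the
categories it can actually correlate with its wrapper records once,
before the timed region, and the implementation would either test a
mask at every log point or select among a few predefined sets up front.

    /* Hypothetical a-priori selection of what gets logged. */
    #define LOGCAT_PT2PT_QUEUE  (1u << 0)  /* unexpected/posted queue events */
    #define LOGCAT_COLL_PHASE   (1u << 1)  /* internal collective phases     */
    #define LOGCAT_RMA_SYNC     (1u << 2)  /* RMA synchronization events     */

    int MPIX_Log_select(unsigned categories);   /* hypothetical */
    int MPIX_Log_enable(void);                  /* hypothetical */

    void tool_setup_logging(void)
    {
        /* record only what the tool will correlate with its own records,
           to keep buffer size and run-time overhead down */
        MPIX_Log_select(LOGCAT_PT2PT_QUEUE | LOGCAT_COLL_PHASE);
        MPIX_Log_enable();
    }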

Let's see what the discussion brings next time. I am still not sure
whether I can make it to the next call. I have a workshop outside of
Juelich on Monday till 5:30 PM CEST. I will try to get internet access
and get out a little early to join in time, but I cannot promise
anything at this moment.

>> Do I understand it correctly that the internal data should be 
>> converted on reading the buffer, or is a post-mortem interface
>> possibly the better choice? This way any conversion can be delayed
>> until we are no longer in measurement.
>
> I think either way is fine. As mentioned above, we can add a
> mechanism to simply store the whole buffer in raw form into a buffer
> provided by the tool and then allow later inspection of it. However,
> as also said above, the tool will have to link against that
> particular MPI version on that particular platform to allow a correct
> interpretation of the data. This should be doable, though, at least
> for some post-processing converter.

What I had not considered is that not all tools are necessarily built
against the MPI library. So you are right that we might need both
interfaces: one for in-measurement conversion and one for post-mortem
conversion.

I think any post-processing conversion needs to be done on-the-fly by
the tools to avoid re-writing the trace data.
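
For the post-mortem path I imagine something along these lines: the
measurement writes the raw blob unchanged next to the wrapper
timestamps, and an analysis-time reader linked against the matching MPI
decodes it while the trace is read, so the trace itself is never
rewritten. All non-standard names are again placeholders:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct raw_log_record {
        double   t_enter, t_exit;   /* wrapper timestamps around the call */
        uint64_t blob_size;         /* size of the opaque log data        */
        /* blob_size bytes of raw, MPI-specific log data follow           */
    };

    /* hypothetical accessor provided by the matching MPI installation */
    int MPIX_Log_print_event(const void *blob, uint64_t size,
                             uint64_t *pos, FILE *out);

    /* decode one stored record on-the-fly while reading the trace */
    void convert_record(FILE *trace, FILE *out)
    {
        struct raw_log_record hdr;
        uint64_t pos = 0;
        void *blob;

        if (fread(&hdr, sizeof hdr, 1, trace) != 1)
            return;
        blob = malloc(hdr.blob_size);
        if (!blob || fread(blob, 1, hdr.blob_size, trace) != hdr.blob_size) {
            free(blob);
            return;
        }

        fprintf(out, "call window: %.9f .. %.9f s\n", hdr.t_enter, hdr.t_exit);
        while (pos < hdr.blob_size)
            if (MPIX_Log_print_event(blob, hdr.blob_size, &pos, out) != 0)
                break;

        free(blob);
    }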

Best regards,
Marc-Andre
-- 
Marc-Andre Hermanns
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
D-52425 Juelich
Germany

Phone : +49-2461-61-2054
Fax   : +49-2461-61-6656
eMail : m.a.hermanns at fz-juelich.de
WWW   : http://www.fz-juelich.de/jsc/

JSC is the coordinator of the
John von Neumann Institute for Computing
and member of the
Gauss Centre for Supercomputing
