[mpiwg-tools] MPI_T usage question: other sensors

Schulz, Martin schulzm at llnl.gov
Thu Dec 19 11:36:32 CST 2013


On Dec 19, 2013, at 1:43 AM, Michael Knobloch <m.knobloch at fz-juelich.de> wrote:

> Hi all,
> 
> 
> On 19.12.2013 02:19, Schulz, Martin wrote:
>> Hi all,
>> 
>> I agree that a single interface to extract any kind of information from a system (MPI and beyond) would be very helpful and would make our tools simpler. Like Michael, though, I always thought of PAPI as being that unifying interface - it should be very easy to write an MPI_T component for PAPI (in fact, I talked with Dan Terpstra from the PAPI team a few years ago, when MPI_T was still under design, and he thought that should work) and with that offer all MPI_T counters through PAPI (probably not the control variables, though). This should also allow us to provide "standardized" names for common performance variables (similar to what PAPI does for HW counters).
>> 
> 
> I'm not so sure that writing a PAPI component for MPI_T will be that
> simple. It's been a while since I wrote my last PAPI component, but if I
> remember correctly such a component requires the names and types of
> counters at compile time, which is something the MPI_T interface
> cannot provide. I have no problem building a PAPI component for
> each MPI-3 library out there (not that there are too many), or even a
> generalized one that queries such information at compile time (although
> this might be difficult on frontend-backend architectures), but the
> standard states that I cannot assume the same set of counters even
> between two runs or on different processes (which is horrible, btw).

Hmm, that would not be good - when I talked with Dan a few years back, we already had this very dynamic query scheme and he didn't flag it as a problem, so I thought we were good. However, he may also just have overlooked it.
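
For reference, the query on the MPI_T side looks roughly like this - a minimal sketch using only the MPI 3.0 tool information interface routines. The point is that the set of performance variables only becomes known at run time, after MPI_T_init_thread, so a PAPI component cannot bake the names in at compile time:

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided, num_pvars, i;

        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        /* Number and names of performance variables are only known at
           run time and may differ between runs and even processes. */
        MPI_T_pvar_get_num(&num_pvars);

        for (i = 0; i < num_pvars; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, var_class, bind, readonly, continuous, atomic;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                                &datatype, &enumtype, desc, &desc_len,
                                &bind, &readonly, &continuous, &atomic);
            printf("pvar %d: %s - %s\n", i, name, desc);
        }

        MPI_T_finalize();
        return 0;
    }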

>> The situation changes a bit, though, if MPI implementations query other information, like temperature or power, for themselves (Jeff, was this why you were asking?). In this case, the MPI implementation should use a standardized interface like PAPI itself (to avoid conflicts with tools), which could lead us to some strange circular SW dependencies.
>> 
> 
> I don't even see how that could work in a meaningful way. Querying MPI_T
> variables is process-local, while several counters (especially ones such
> as power and temperature) are shared between several processes, some even
> with restricted access (i.e., only one process may query the counter).

This is probably similar to the support for UNCORE events that is already needed for hardware counters - there isn't a clean solution for that either. It may get even more complicated if we want data from components that are not nodes (e.g., switches in the network); it's not clear how such measurements get mapped back to processes.
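
Just to make concrete what kind of access we are talking about, here is a rough sketch of reading such a value through PAPI's named-event interface (the RAPL event name is only an example - availability, and whether a given process is even allowed to read it, depends on the installed components and the node configuration):

    #include <papi.h>
    #include <stdio.h>

    int main(void)
    {
        int eventset = PAPI_NULL;
        long long value;

        PAPI_library_init(PAPI_VER_CURRENT);
        PAPI_create_eventset(&eventset);

        /* Example event name only - availability and access rights are
           node- and configuration-dependent. */
        if (PAPI_add_named_event(eventset,
                                 "rapl:::PACKAGE_ENERGY:PACKAGE0") != PAPI_OK) {
            fprintf(stderr, "event not available on this node\n");
            return 1;
        }

        PAPI_start(eventset);
        /* ... region of interest ... */
        PAPI_stop(eventset, &value);
        printf("package energy counter: %lld\n", value);

        PAPI_shutdown();
        return 0;
    }

If an MPI library started using PAPI internally for such data, a tool that uses PAPI on top of that same MPI library would have to coexist with it on the same components - which is exactly the circular dependency we are worried about.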

> So supporting this means either a huge amount of implementation work or
> using an existing interface, i.e., PAPI. And I completely agree with
> Martin that we should try to avoid circular dependencies, especially if
> there's no real benefit for the user.
> 
> -Michael

Martin



> --
> Michael Knobloch
> Institute for Advanced Simulation (IAS)
> Jülich Supercomputing Centre (JSC)
> Telefon: +49 2461 61-3546
> Telefax: +49 2461 61-6656
> E-Mail: m.knobloch at fz-juelich.de
> Internet: http://www.fz-juelich.de/jsc
> 

________________________________________________________________________
Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
CASC @ Lawrence Livermore National Laboratory, Livermore, USA





