[mpiwg-tools] MPI_T usage question: other sensors

Schulz, Martin schulzm at llnl.gov
Wed Dec 18 19:19:44 CST 2013

Hi all,

I agree that a single interface to extract any kind of information from a system (MPI and beyond) would be very helpful and would make our tools simpler. Like Michael, though, I have always thought of PAPI as that unifying interface: it should be very easy to write an MPI_T component for PAPI (in fact, I talked with Dan Terpstra from the PAPI team a few years ago, when MPI_T was still under design, and he thought it should work), and with that offer all MPI_T counters through PAPI (probably not the control variables, though). This would also allow us to provide "standardized" names for common performance variables, similar to what PAPI does for HW counters.
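To make the idea concrete: such a PAPI component would essentially walk the MPI_T performance-variable table at component-init time to build its native event list. A minimal sketch of that enumeration, using only the standard MPI_T calls (the variables actually exported, and their names, are entirely implementation-specific):

```c
#include <stdio.h>
#include <mpi.h>

/* Sketch: enumerate the performance variables an MPI implementation
 * exports via MPI_T -- roughly what a PAPI component for MPI_T would
 * do to discover its native events. Requires an MPI 3.0+ library. */
int main(void)
{
    int provided, num_pvar;

    /* MPI_T has its own initialization, independent of MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_pvar_get_num(&num_pvar);
    printf("%d performance variables exported\n", num_pvar);

    for (int i = 0; i < num_pvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, var_class, binding, readonly, continuous, atomic;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &datatype, &enumtype, desc, &desc_len,
                            &binding, &readonly, &continuous, &atomic);
        printf("pvar %d: %s -- %s\n", i, name, desc);
    }

    MPI_T_finalize();
    return 0;
}
```

Since the names and semantics are not standardized, the component would still need a mapping layer on top of this to present "standardized" event names to tools.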

The situation changes a bit, though, if MPI implementations query other information, like temperature or power, for themselves (Jeff, was this why you were asking?). In that case, the MPI implementation should itself use a standardized interface like PAPI (to avoid conflicts with tools), which could lead us to some strange circular software dependencies.


On Dec 18, 2013, at 11:05 AM, Michael Knobloch <m.knobloch at fz-juelich.de> wrote:

> Hi all,
> On 12/18/2013 05:09 PM, Marc-Andre Hermanns wrote:
>> Hi Jeff,
>> as a tools provider, I would also like MPI_T to expose other sensors, as
>> this means I don't have to support yet another metric interface. Not
>> stacking a multitude of different metric interfaces on top of each other
>> might also cut down on the overhead, as you said.
> Coming from the same tools group, I unfortunately have to disagree with
> Marc-Andre on that. Most of the tools out there (if not all that are
> relevant for HPC) already have good support for PAPI, which does a great
> job of providing a portable API to counters of all kinds. That is not an
> easy job, so I'm not sure every MPI implementation would want to
> duplicate that work -- which would be necessary if we wanted to use MPI_T
> as the single counter source. Otherwise we'd still have to maintain both
> (in fact, we'd have to for sure, as there are other parallel programming
> paradigms out there as well which tools have to support).
> So what I'd prefer is a PAPI component exposing the MPI relevant
> counters via PAPI, that way the tools could benefit from that with very
> little additional work.
>> But could an implementation not expose such info right now? I thought
>> the whole idea of not standardizing anything about counters, their names
>> and semantics, is that implementations can expose whatever they care about.
> Here I agree: from what I read in the current standard, the interface
> doesn't guarantee me anything. So at least some kind of
> high-level/low-level interface, as in PAPI, would certainly be desirable.
> -Michael
> --
> Michael Knobloch
> Institute for Advanced Simulation (IAS)
> Jülich Supercomputing Centre (JSC)
> Telefon: +49 2461 61-3546
> Telefax: +49 2461 61-6656
> E-Mail: m.knobloch at fz-juelich.de
> Internet: http://www.fz-juelich.de/jsc

Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
CASC @ Lawrence Livermore National Laboratory, Livermore, USA
