[mpiwg-tools] Vector MPI_T reads

Schulz, Martin schulzm at llnl.gov
Wed Dec 4 09:53:15 CST 2013


I agree - interesting, looking forward to hearing more. I also agree with Kathryn that some of these don't sound like MPI_T data, but that could also argue for an integration through PAPI, so that tools can grab all data from one source.

We are getting quite a list to talk about (I'll also have some items ready for PMPI2, and I think we should take a look at Craig's wrapper generator). I know Jeff is also busy in the hybrid WG, but we could hold some of the discussions on Tuesday in parallel with RMA if we need/want the time; Fab has two rooms for us all day Tuesday as well.

Martin


On Dec 4, 2013, at 7:36 AM, Kathryn Mohror <kathryn at llnl.gov> wrote:

> 
> 
>> FWIW, we're getting some interesting asks from users for MPI_T pvars:
>> 
>> - underlying network data
>>  - But then how do you correlate it to MPI activity?  Not as easy as you would think.
>> - other hardware / environment data
>>  - CPU temperature, fan speed, etc. (think: IPMI-like data)
>>  - Should these be reported once per server, or in every MPI process?
>> - stats that have traditionally been exposed via PMPI
>>  - E.g., number of times MPI_Send was invoked
>> - other MPI layer data
>>  - E.g., average depth of unexpected queue
>> 
>> Of this list, I think "other MPI layer data" is the only type of data I expected to expose via MPI_T (i.e., it's kinda what we designed MPI_T for).  The others are all use cases that I was not previously expecting.  Interesting.
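>> 
>> FWIW, a minimal sketch of how a tool can enumerate whatever pvars an implementation chooses to expose, using the standard MPI_T introspection calls (error handling elided):
>> 
>>   #include <mpi.h>
>>   #include <stdio.h>
>> 
>>   int main(int argc, char **argv)
>>   {
>>       int provided, num, i;
>> 
>>       /* The MPI_T interface can be initialized independently of MPI_Init. */
>>       MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
>> 
>>       MPI_T_pvar_get_num(&num);
>>       for (i = 0; i < num; i++) {
>>           char name[256], desc[256];
>>           int name_len = sizeof(name), desc_len = sizeof(desc);
>>           int verbosity, var_class, bind, readonly, continuous, atomic;
>>           MPI_Datatype dtype;
>>           MPI_T_enum enumtype;
>> 
>>           /* Query each variable's metadata, including the per-variable
>>              "atomic" capability flag. */
>>           MPI_T_pvar_get_info(i, name, &name_len, &verbosity,
>>                               &var_class, &dtype, &enumtype,
>>                               desc, &desc_len, &bind, &readonly,
>>                               &continuous, &atomic);
>>           printf("pvar %d: %s: %s\n", i, name, desc);
>>       }
>> 
>>       MPI_T_finalize();
>>       return 0;
>>   }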
> 
> Some of these are outside the scope of what I would expect MPI_T to provide and could be supplied by other libraries. I agree; I had imagined it would be information specific to the MPI library, or whatever the library was already collecting anyway. But you are right, interesting!
> 
> Kathryn
> 
>> 
>> 
>> 
>> On Dec 4, 2013, at 10:20 AM, Kathryn Mohror <kathryn at llnl.gov> wrote:
>> 
>>> Yes. I'll put it on the agenda.
>>> 
>>> Kathryn
>>> 
>>> On Dec 4, 2013, at 3:27 AM, Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote:
>>> 
>>>> Can we add this to the agenda to discuss next week?
>>>> 
>>>> 
>>>> On Dec 4, 2013, at 1:05 AM, "Schulz, Martin" <schulzm at llnl.gov> wrote:
>>>> 
>>>>> I think this could be interesting and helpful, but wouldn't this be expensive in the implementation, at least for some variables? Would we need some way to say which variables can be read atomically?
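>>>>> 
>>>>> For the record, MPI_T_pvar_get_info already returns a per-variable "atomic" flag, but that only covers atomic read-and-reset of a single variable. A vector read would presumably need a similar capability query. As a purely hypothetical strawman (not anything that exists in the standard):
>>>>> 
>>>>>   /* Hypothetical strawman, not in the current standard: read n
>>>>>      pvars from one session as a single consistent snapshot.  An
>>>>>      output flag could report whether the implementation was able
>>>>>      to take the snapshot atomically, so that cheap variables do
>>>>>      not have to pay for expensive ones. */
>>>>>   int MPI_T_pvar_readv(MPI_T_pvar_session session, int n,
>>>>>                        const MPI_T_pvar_handle handles[],
>>>>>                        void *bufs[], int *was_atomic);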
>>>>> 
>>>>> Martin 
>>>>> 
>>>>> 
>>>>> On Dec 3, 2013, at 2:27 PM, "Jeff Squyres (jsquyres)" <jsquyres at cisco.com> wrote:
>>>>> 
>>>>>> I'm trawling through SC-accumulated emails and saw one that prompted me to ask here (I think I asked about this before, but I don't remember): is there any interest in atomic vector reads of MPI_T variables?
>>>>>> 
>>>>>> I ask because, especially for pvars, if you loop over reading a bunch of them one at a time, the reads are not atomic, and you might not get a consistent set of values.  But if you can issue a single MPI_T read for N values all at once, you have a much better chance of getting an atomic/consistent snapshot.
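>>>>>> 
>>>>>> Concretely, the status quo forces something like this (a minimal sketch using the standard MPI_T calls; error handling elided, and the handles are assumed to have been allocated earlier with MPI_T_pvar_handle_alloc):
>>>>>> 
>>>>>>   #include <mpi.h>
>>>>>> 
>>>>>>   /* Read n pvars one at a time.  Each MPI_T_pvar_read is an
>>>>>>      independent call, so the MPI library may update variable i+1
>>>>>>      while we are still reading variable i, and the result is not
>>>>>>      a consistent snapshot. */
>>>>>>   void read_pvars(MPI_T_pvar_session session,
>>>>>>                   MPI_T_pvar_handle handles[], void *bufs[], int n)
>>>>>>   {
>>>>>>       int i;
>>>>>>       for (i = 0; i < n; i++)
>>>>>>           MPI_T_pvar_read(session, handles[i], bufs[i]);
>>>>>>   }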
>>>>>> 
>>>>>> -- 
>>>>>> Jeff Squyres
>>>>>> jsquyres at cisco.com
>>>>>> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
> 
> ______________________________________________________________
> Kathryn Mohror, kathryn at llnl.gov, http://people.llnl.gov/mohror1
> CASC @ Lawrence Livermore National Laboratory, Livermore, CA, USA
> 

________________________________________________________________________
Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
CASC @ Lawrence Livermore National Laboratory, Livermore, USA





