[mpiwg-tools] Vector MPI_T reads
Jeff Squyres (jsquyres)
jsquyres at cisco.com
Wed Dec 4 09:26:46 CST 2013
FWIW, we're getting some interesting requests from users for MPI_T pvars:
- underlying network data
  - But then how do you correlate it to MPI activity? Not as easy as you would think.
- other hardware / environment data
  - CPU temperature, fan speed, etc. (think: IPMI-like data)
  - Should these be reported once per server, or in every MPI process?
- stats that have traditionally been exposed via PMPI
  - E.g., the number of times MPI_Send was invoked
- other MPI layer data
  - E.g., the average depth of the unexpected message queue
Of this list, I think "other MPI layer data" is the only type of data I expected to expose via MPI_T (i.e., it's kinda what we designed MPI_T for). The others are all use cases that I was not previously expecting. Interesting.
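For anyone new to the interface: regardless of which of these we end up exposing, the tool-side read pattern is the same -- enumerate the pvars and read the interesting ones. A minimal sketch (error checking omitted; the set of pvars and their names are entirely implementation-specific):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_pvars;

    /* MPI_T may be initialized before (and independently of) MPI. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_pvar_get_num(&num_pvars);
    for (int i = 0; i < num_pvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, var_class, bind, readonly, continuous, atomic;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &datatype, &enumtype, desc, &desc_len,
                            &bind, &readonly, &continuous, &atomic);
        printf("pvar %d: %s -- %s\n", i, name, desc);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}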
On Dec 4, 2013, at 10:20 AM, Kathryn Mohror <kathryn at llnl.gov> wrote:
> Yes. I'll put it on the agenda.
>
> Kathryn
>
> On Dec 4, 2013, at 3:27 AM, Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote:
>
>> Can we add this to the agenda to discuss next week?
>>
>>
>> On Dec 4, 2013, at 1:05 AM, "Schulz, Martin" <schulzm at llnl.gov> wrote:
>>
>>> I think this could be interesting and helpful, but wouldn't this be expensive in the implementation, at least for some variables? Would we need some way to say which variables can be read atomically?
>>>
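>>> (For reference: MPI_T_pvar_get_info already gives us a per-variable
>>> "atomic" flag, but as far as I can tell it only says whether
>>> MPI_T_pvar_readreset is legal on that one variable -- it says nothing
>>> about reading a *set* of variables consistently. Sketch:)
>>>
>>> #include <mpi.h>
>>>
>>> /* Return the MPI 3.0 "atomic" flag for pvar index i.  Note: this
>>>    flag only covers atomic read-and-reset of a single variable. */
>>> static int pvar_is_atomic(int i)
>>> {
>>>     char name[256], desc[256];
>>>     int name_len = sizeof(name), desc_len = sizeof(desc);
>>>     int verbosity, var_class, bind, readonly, continuous, atomic;
>>>     MPI_Datatype datatype;
>>>     MPI_T_enum enumtype;
>>>
>>>     MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
>>>                         &datatype, &enumtype, desc, &desc_len,
>>>                         &bind, &readonly, &continuous, &atomic);
>>>     return atomic;
>>> }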
>>> Martin
>>>
>>>
>>> On Dec 3, 2013, at 2:27 PM, "Jeff Squyres (jsquyres)" <jsquyres at cisco.com> wrote:
>>>
>>>> I'm trolling through SC-accumulated emails and saw one that prompted me to ask here (I think I asked about this before, but don't remember): is there any interest in atomic vector reads of MPI_T variables?
>>>>
>>>> I ask because, especially for pvars, if you loop over reading a bunch of them one at a time, the reads aren't atomic with respect to each other, so you might not get a mutually consistent set of values. But if you could issue a single MPI_T read for N values all at once, the implementation would have a much better chance of returning an atomic/consistent set.
>>>>
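>>>> To make it concrete: today every tool has to do something like the
>>>> loop below, and the implementation is free to update pvar j while we
>>>> are reading pvar i.  The vectored call at the bottom is a strawman
>>>> (not a real function!) just to show the shape of what I mean.  This
>>>> sketch assumes the pvars are single unsigned long long counters:
>>>>
>>>> #include <mpi.h>
>>>>
>>>> /* Today: N separate reads; no consistency guarantee across them. */
>>>> static void read_pvars(MPI_T_pvar_session session,
>>>>                        MPI_T_pvar_handle handles[],
>>>>                        unsigned long long values[], int n)
>>>> {
>>>>     for (int i = 0; i < n; i++) {
>>>>         /* Another thread / async progress can change values[j]
>>>>            (j > i) between iterations. */
>>>>         MPI_T_pvar_read(session, handles[i], &values[i]);
>>>>     }
>>>> }
>>>>
>>>> /* Strawman only -- NOT in the standard: one call, so the
>>>>    implementation can lock/snapshot once and return a consistent set.
>>>> int MPI_T_pvar_read_multi(MPI_T_pvar_session session, int n,
>>>>                           MPI_T_pvar_handle handles[], void *buf);
>>>> */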
>>>
>>> ________________________________________________________________________
>>> Martin Schulz, schulzm at llnl.gov, http://people.llnl.gov/schulzm
>>> CASC @ Lawrence Livermore National Laboratory, Livermore, USA
>
> ______________________________________________________________
> Kathryn Mohror, kathryn at llnl.gov, http://people.llnl.gov/mohror1
> CASC @ Lawrence Livermore National Laboratory, Livermore, CA, USA
--
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/