[mpiwg-tools] Using MPI_T

Anh Vo Anh.Vo at microsoft.com
Thu Oct 24 15:42:42 CDT 2013


I am also not aware of any applications/tools doing such things yet, but that is an example of how MPI_T might benefit their developers. Right now I don't know of any commercial users of MPI_T.

--Anh

From: mpiwg-tools [mailto:mpiwg-tools-bounces at lists.mpi-forum.org] On Behalf Of Junchao Zhang
Sent: Thursday, October 24, 2013 1:41 PM
To: <mpiwg-tools at lists.mpi-forum.org>
Subject: Re: [mpiwg-tools] Using MPI_T

OK. I believe it is an advanced topic. I'm not aware of applications doing such cool things.
If you happen to know of an application that would benefit from MPI_T, I would like to implement it.

--Junchao Zhang

On Thu, Oct 24, 2013 at 3:30 PM, Anh Vo <Anh.Vo at microsoft.com> wrote:
I would say it depends on the situation. In most cases I would imagine the applications/tools would do the aggregation. And yes, in my example the processes need to communicate to know each other's message pressure.

--Anh

From: mpiwg-tools [mailto:mpiwg-tools-bounces at lists.mpi-forum.org] On Behalf Of Junchao Zhang
Sent: Thursday, October 24, 2013 1:27 PM
To: <mpiwg-tools at lists.mpi-forum.org>
Subject: Re: [mpiwg-tools] Using MPI_T

Hi, Anh,
  I think your example uses feedback to do throttling.
  A further question is: should we do this at the application level (since you mentioned aggregation) or in the MPI runtime?
  The example also implies that processes need to communicate to know each other's message pressure.
  Thanks.

--Junchao Zhang

On Thu, Oct 24, 2013 at 2:40 PM, Anh Vo <Anh.Vo at microsoft.com> wrote:
Hi Junchao,
One example is monitoring the length of the unexpected message queue. Basically, when an MPI process receives an incoming message for which it has not yet posted a receive, the message is typically copied into an unexpected receive queue. When the process later posts a receive, it loops through the unexpected queue and checks whether any message in the queue matches the receive. If the unexpected queue is too long, you spend a lot of time looping through it. Unexpected messages also require extra memcpy operations (compared with the case where a matching receive is already posted when the message arrives).
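
To make the cost concrete, here is a toy sketch of an unexpected queue (my own illustration, not code from any real MPI implementation): each posted receive walks the list front to back, so matching cost grows linearly with the queue depth, and every unexpected arrival has already paid an extra payload copy.

#include <stddef.h>

typedef struct unexpected_msg {
    int src, tag;
    void *payload;                  /* extra copy made when the message arrived */
    size_t len;
    struct unexpected_msg *next;
} unexpected_msg;

/* Called when a receive is posted: an O(queue depth) scan. */
static unexpected_msg *match_unexpected(unexpected_msg **head, int src, int tag)
{
    unexpected_msg **p = head;
    while (*p != NULL) {
        if ((*p)->src == src && (*p)->tag == tag) {
            unexpected_msg *m = *p;
            *p = m->next;           /* unlink the match */
            return m;               /* caller copies payload out: a second memcpy */
        }
        p = &(*p)->next;
    }
    return NULL;                    /* no match: post the receive instead */
}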

By monitoring the length of the unexpected receive queue, the user can adjust the rate of message flow. For example, if the other side processes messages fast enough, you can keep sending lots of small messages (such as heartbeats or piggybacked data). But if the other side is slow at processing messages (and thus ends up with a deep unexpected queue), it might be more beneficial to compress or aggregate the messages before sending.
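
As a sketch of the monitoring half, the code below enumerates the MPI_T performance variables, looks for one whose name suggests the unexpected queue length, and reads it through a session. The name "unexpected_recvq_length" is only an assumption (loosely modeled on what MPICH exposes); pvar names are implementation-specific, which is why the code discovers them at runtime instead of hard-coding an index.

/* Sketch: find and read an unexpected-queue-length pvar via MPI_T.
 * The pvar name searched for below is an assumption; MPI_T variable
 * names are implementation-specific and must be discovered at runtime. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int provided, num_pvars, i, found = -1, cont = 0;

    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_pvar_get_num(&num_pvars);
    for (i = 0; i < num_pvars && found < 0; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verb, vclass, bind, readonly, atomic;
        MPI_Datatype dt;
        MPI_T_enum et;
        MPI_T_pvar_get_info(i, name, &name_len, &verb, &vclass, &dt, &et,
                            desc, &desc_len, &bind, &readonly, &cont, &atomic);
        if (strstr(name, "unexpected_recvq_length") != NULL)  /* assumed name */
            found = i;
    }

    if (found >= 0) {
        MPI_T_pvar_session session;
        MPI_T_pvar_handle handle;
        int count;
        unsigned long long qlen = 0;  /* assumes an integer-valued pvar, count 1 */

        MPI_T_pvar_session_create(&session);
        MPI_T_pvar_handle_alloc(session, found, NULL, &handle, &count);
        if (!cont)                    /* continuous pvars must not be started */
            MPI_T_pvar_start(session, handle);

        /* ... application work here; sample the queue depth periodically and
         * switch to aggregated/compressed sends when it gets deep ... */
        MPI_T_pvar_read(session, handle, &qlen);
        printf("unexpected queue length: %llu\n", qlen);

        MPI_T_pvar_handle_free(session, &handle);
        MPI_T_pvar_session_free(&session);
    }

    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}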

--Anh

From: mpiwg-tools [mailto:mpiwg-tools-bounces at lists.mpi-forum.org] On Behalf Of Junchao Zhang
Sent: Thursday, October 24, 2013 12:31 PM
To: <mpiwg-tools at lists.mpi-forum.org>
Subject: [mpiwg-tools] Using MPI_T

Hello,
  The standard describes the motivation for MPI_T as follows: "MPI implementations often use internal variables to control their operation and performance. Understanding and manipulating these variables can provide a more efficient execution environment or improve performance for many applications."
  I can imagine that, through performance variables, users can observe MPI internal state during application execution. But how can that be used to improve performance? What EXTRA advantages does MPI_T bring? I don't get the idea.
  Can someone shed light on that?
  Thank you.
--Junchao Zhang

_______________________________________________
mpiwg-tools mailing list
mpiwg-tools at lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-tools
