Hoping not to sound like a fool who preaches to the choir (Portals 4.0, denial?), but still:

We are moving forward, toward truly stellar performance, and you look at one-sided to achieve this for good reason. A lot is changing in application space. As applications move to and beyond 100k processors, coding for expected receives will become harder. Applications that assume a limited set of receivers (a small group of expected senders) and never do collective communication are a different category. They certainly have many merits, but I don't want to think of them when designing large systems.

Also, many applications are rethinking their design for petascale and beyond. New applications are being written, and more of them are using, or looking at, the GAS, and hence one-sided, model.

I don't want to be the culprit who starts the manycore discussion either, but manycore naturally and fundamentally fits PGAS, because one may no longer want a per-core/per-process address space but rather a socket-level address space in the OS, to make it easier to run programs.
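A minimal sketch of that contrast, in MPI-2 era C (the function and buffer names, the counts, and the any-peer-may-write pattern are all invented for illustration, not taken from a real application): the two-sided receiver must anticipate every possible sender, while the one-sided target only exposes memory.

#include <mpi.h>

/* Two-sided: the receiver must anticipate every possible sender and
   pre-post a matching receive for each one. */
void expected_receives(double *buf, int nranks, int me, MPI_Comm comm)
{
    MPI_Request reqs[nranks];
    for (int src = 0; src < nranks; src++) {
        if (src == me) { reqs[src] = MPI_REQUEST_NULL; continue; }
        MPI_Irecv(&buf[src], 1, MPI_DOUBLE, src, 0, comm, &reqs[src]);
    }
    MPI_Waitall(nranks, reqs, MPI_STATUSES_IGNORE);
}

/* One-sided: the target only exposes memory; it does not need to know
   who will write or how many writers there will be. */
void exposed_window(double *buf, int nranks, MPI_Comm comm)
{
    MPI_Win win;
    MPI_Win_create(buf, nranks * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, comm, &win);
    MPI_Win_fence(0, win);
    /* ... any rank may MPI_Put into buf during this epoch ... */
    MPI_Win_fence(0, win);
    MPI_Win_free(&win);
}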
----------------------------------------------------------------------
From: keith.d.underwood@intel.com
To: mpi-forum@lists.mpi-forum.org
Date: Mon, 27 Apr 2009 07:14:12 -0600
Subject: Re: [Mpi-forum] MPI One-Sided Communication
<font size="2" color="navy" face="Arial">
For many (most?) science and engineering codes, the trade-off between matching for two sided and synchronization for one sided is at best a wash, and often would fall in favor of two sided. There is, of course, an exception....<br><br>If one sided delivered truly stellar performance in terms of message rate and latency, you could (in some cases) eliminate the cost of the copy at the sender that is tyically done to send long messages to cover the overhead. The hardware to deliver that level of performance is truly rare... <br><br>Keith<br></font><BR>
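Schematically, the copy Keith refers to, and what a sufficiently fast one-sided path could avoid -- a hedged sketch assuming a typical eager-protocol implementation; the names and the internal-buffer detail are invented for illustration:

#include <mpi.h>

/* Two-sided eager send, schematically inside the library:
     memcpy(internal_buf, user_buf, len);   <- the copy to eliminate
     ... the NIC drains internal_buf later, so MPI_Send returns early ...
   With fast enough one-sided hardware, the bytes can instead move
   straight from the user buffer: */
void put_without_copy(double *user_buf, int len, int target, MPI_Win win)
{
    MPI_Win_fence(0, win);              /* collective over win's group */
    MPI_Put(user_buf, len, MPI_DOUBLE, target,
            0 /* target_disp */, len, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);              /* user_buf reusable only now */
}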
----------------------------------------------------------------------
From: mpi-forum-bounces@lists.mpi-forum.org
To: mpi-forum@lists.mpi-forum.org
Sent: Mon Apr 27 06:02:29 2009
Subject: Re: [Mpi-forum] MPI One-Sided Communication
Don't forget matching. The model depends on a relation between the send and the receive, and this is the fundamental reason for the potential difference in overlap. If you talk implementation, which we technically shouldn't for this argument, the fact that a matching receive is required for each send does eventually have an impact across multiple sends.

The one-sided model

    a -> b

is independent of b.

The two-sided model

    a <-> b

because of its dependence, implies validation -- you can hide the cost of that validation, but you can't eliminate it.

Vinod.
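A minimal rendering of Vinod's two models in MPI-2 style C (all names are illustrative): at transfer time the one-sided target stays passive, while two-sided validity requires b to post a receive that matches on (source, tag, communicator).

#include <mpi.h>

/* One-sided, a -> b: nothing is required of b at transfer time. */
void model_one_sided(double *val, int b, MPI_Win win)
{
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, b, 0, win);  /* b stays passive */
    MPI_Put(val, 1, MPI_DOUBLE, b, 0, 1, MPI_DOUBLE, win);
    MPI_Win_unlock(b, win);                       /* put complete here */
}

/* Two-sided, a <-> b: the transfer is valid only once b has posted a
   receive that matches on (source, tag, communicator). */
void model_two_sided_a(double *val, int b, MPI_Comm comm)
{
    MPI_Send(val, 1, MPI_DOUBLE, b, 0, comm);
}
void model_two_sided_b(double *val, int a, MPI_Comm comm)
{
    MPI_Recv(val, 1, MPI_DOUBLE, a, 0, comm, MPI_STATUS_IGNORE);
}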
> From: keith.d.underwood@intel.com
> To: mpi-forum@lists.mpi-forum.org
> Date: Mon, 27 Apr 2009 06:48:57 -0600
> Subject: Re: [Mpi-forum] MPI One-Sided Communication
>
>> On the Earth Simulator, there are/were several application codes which
>> use one-sided communication (instead of two-sided). They used one-sided
>> communication especially to overlap communication and computation.
>> If I remember correctly, at least one of these applications won a
>> Gordon Bell Award at SC.
>
> The ambiguity of the progress rule notwithstanding, there is no
> particular reason that one-sided should give you better overlap than
> two-sided. If this is the reason that people use one-sided, maybe we
> should revisit the progress rule ;-)
>
> Keith
>
> _______________________________________________
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum
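As a footnote to the quoted overlap point: assuming a strong progress rule, both models admit the same compute/communication overlap pattern -- a sketch with invented names, where compute() is a placeholder for the overlapped work:

#include <mpi.h>

extern void compute(void);   /* placeholder for overlapped work */

void overlap_two_sided(double *buf, int n, int peer, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Isend(buf, n, MPI_DOUBLE, peer, 0, comm, &req);
    compute();                          /* overlap */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

void overlap_one_sided(double *buf, int n, int peer, MPI_Win win)
{
    MPI_Win_fence(0, win);
    MPI_Put(buf, n, MPI_DOUBLE, peer, 0, n, MPI_DOUBLE, win);
    compute();                          /* overlap */
    MPI_Win_fence(0, win);              /* completes the put */
}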