<html><body>
<p>BINGO Jeff<br>
<br>
We might also remove the datatype and twin count arguments from <tt>MPI_RMA_Raw_xfer</tt>, to eliminate any expectation that basic put/get does datatype conversion when origin and target are on heterogeneous nodes. There would be a single "count" argument representing the number of contiguous bytes to be transferred.<br>
<br>
The assertion would be that there is no use of complex RMA. It would give the implementation the option to leave its software agent dormant. Note that having this assertion as an option for <tt>MPI_Init_asserted</tt> does not allow an MPI implementation to avoid having an agent available. An application that does not use the assertion can count on the agent being ready for any call to "fully baked" RMA.<br>
<br>
Dick<br>
<br>
Dick Treumann - MPI Team <br>
IBM Systems & Technology Group<br>
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601<br>
Tele (845) 433-7846 Fax (845) 433-8363<br>
<br>
<br>
<tt>mpi3-rma-bounces@lists.mpi-forum.org wrote on 09/16/2009 03:43:15 PM:<br>
<br>
> From: Jeff Hammond<br>
> To: MPI 3.0 Remote Memory Access working group<br>
> Date: 09/16/2009 03:44 PM<br>
> Subject: Re: [Mpi3-rma] non-contiguous support in RMA & one-sided pack/unpack (?)<br>
> <br>
> I think that there is a need for two interfaces; one which is a<br>
> portable interface to the low-level truly one-sided bulk transfer<br>
> operation and another which is completely general and is permitted to<br>
> do operations which require remote agency.<br>
> <br>
> For example, I am aware of no NIC which can do accumulate on its own,<br>
> hence RMA_ACC_SUM and related operations require remote agency, and<br>
> thus this category of RMA operations is not truly one-sided.<br>
> <br>
> Thus the standard might support two xfer calls:<br>
> <br>
> MPI_RMA_Raw_xfer(origin_addr, origin_count, origin_datatype,<br>
> target_mem, target_disp, target_count, target_rank, request)<br>
> <br>
> which is exclusively for transferring contiguous bytes from one place<br>
> to another, i.e. does raw put/get only, and the second, which has been<br>
> described already, which handles the general case, including<br>
> accumulation, non-contiguous and other complex operations.<br>
> <br>
> The distinction over remote agency is extremely important from an<br>
> implementation perspective since contiguous put/get operations can be<br>
> performed in a fully asynchronous non-interrupting way with a variety<br>
> of interconnects, and thus exposing this procedure in the MPI standard<br>
> will allow for very efficient implementations on some systems. It<br>
> should also encourage MPI users to think about their RMA needs and how<br>
> they might restructure their code to take advantage of the faster<br>
> flavor of xfer when doing so requires little modification.<br>
> <br>
> Jeff<br>
> <br>
> On Wed, Sep 16, 2009 at 1:49 PM, Vinod tipparaju <br>
> <tipparajuv@hotmail.com> wrote:<br>
> >>My argument is that any RMA depends on a call at the origin being able to<br>
> >> trigger activity at the target. Modern RMA hardware has the hooks to do the<br>
> >> remote side of MPI_Fast_RMA_xfer() efficiently based on a call at the<br>
> >> origin. Because these hooks are in the hardware they are simply there. They<br>
> >> do not use the CPU or hurt performance of things that do use the CPU.<br>
> ><br>
> > I read this as an argument that says two interfaces are not necessary.<br>
> > Having application author promise (during init) it will not do anything that<br>
> > needs an agent is certainly useful. Particularly when, as you state, "having<br>
> > this agent standing by hurts general performance".<br>
> > The things that potentially cannot be done without an agent (technically,<br>
> > everything but atomics could be done without any agents) are the user's<br>
> > choice through explicit usage. Users choose these attributes aware of their<br>
> > cost, hence they can indicate ahead of time when they will not use them.<br>
> > I have repeatedly considered dropping the atomicity attribute, but I am<br>
> > unable to because it makes programming (and thinking) so much easier for<br>
> > many applications.<br>
> > Vinod.<br>
> ><br>
> ><br>
> > ________________________________<br>
> > To: mpi3-rma@lists.mpi-forum.org<br>
> > From: treumann@us.ibm.com<br>
> > Date: Wed, 16 Sep 2009 14:18:15 -0400<br>
> > Subject: Re: [Mpi3-rma] non-contiguous support in RMA & one-sided<br>
> > pack/unpack (?)<br>
> ><br>
> > The assertion could then be: MPI_NO_SLOW_RMA (also a bit tongue in cheek)<br>
> ><br>
> > My argument is that any RMA depends on a call at the origin being able to<br>
> > trigger activity at the target. Modern RMA hardware has the hooks to do the<br>
> > remote side of MPI_Fast_RMA_xfer() efficiently based on a call at the<br>
> > origin. Because these hooks are in the hardware they are simply there. They<br>
> > do not use the CPU or hurt performance of things that do use the CPU.<br>
> ><br>
> > RMA hardware may not have the hooks to do the target side of any arbitrary<br>
> > MPI_Slow_RMA_xfer(). As a result, support for the more complex RMA_xfer may<br>
> > require a wake-able software agent (thread maybe) to be standing by at all<br>
> > tasks just because they may become the target of a Slow_RMA_xfer.<br>
> ><br>
> > If having this agent standing by hurts general performance of MPI<br>
> > applications that will never make a call to Slow_RMA_xfer, why not let the<br>
> > application's author promise up front "I have no need of this agent."<br>
> ><br>
> > An MPI implementation that can support Slow_RMA_xfer with no extra costs<br>
> > (send/recv latency, memory, packet interrupts, CPU contention) will simply<br>
> > ignore the assertion.<br>
> ><br>
> > BTW - I just took a look at the broad proposal and it may contain several<br>
> > things that cannot be done without a wake-able remote software agent. That<br>
> > argues for Keith's idea of an RMA operation which closely matches what RMA<br>
> > hardware does and a second one that brings along all the bells and whistles.<br>
> > Maybe the assertion for an application that only uses the basic RMA call or<br>
> > uses no RMA at all could be MPI_NO_KITCHEN_SINK (even more tongue in cheek).<br>
> ><br>
> > Dick<br>
> ><br>
> ><br>
> > Dick Treumann - MPI Team<br>
> > IBM Systems & Technology Group<br>
> > Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601<br>
> > Tele (845) 433-7846 Fax (845) 433-8363<br>
> ><br>
> ><br>
> > mpi3-rma-bounces@lists.mpi-forum.org wrote on 09/16/2009 01:08:51 PM:<br>
> ><br>
> >> From: Underwood, Keith D<br>
> >> To: MPI 3.0 Remote Memory Access working group<br>
> >> Date: 09/16/2009 01:09 PM<br>
> >> Subject: Re: [Mpi3-rma] non-contiguous support in RMA & one-sided pack/unpack (?)<br>
> >><br>
> >> But, going back to Bill’s point: performance across a range of<br>
> >> platforms is key. While you can’t have a function for every usage<br>
> >> (well, you can, but it would get cumbersome at some point), it may<br>
> >> be important to have a few levels of specialization in the API.<br>
> >> E.g. you could have two variants:<br>
> >><br>
> >> MPI_Fast_RMA_xfer(): no data types, no communicators, etc.<br>
> >> MPI_Slow_RMA_xfer(): include the kitchen sink.<br>
> >><br>
> >> Yes, the naming is a little tongue in cheek ;-)<br>
> >><br>
> >> Keith<br>
> >><br>
> >> <snip><br>
> ><br>
> > _______________________________________________<br>
> > mpi3-rma mailing list<br>
> > mpi3-rma@lists.mpi-forum.org<br>
> > <a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma</a><br>
> ><br>
> ><br>
> <br>
> <br>
> <br>
> -- <br>
> Jeff Hammond<br>
> Argonne Leadership Computing Facility<br>
> jhammond@mcs.anl.gov / (630) 252-5381<br>
> <a href="http://www.linkedin.com/in/jeffhammond">http://www.linkedin.com/in/jeffhammond</a><br>
> <a href="http://home.uchicago.edu/~jhammond/">http://home.uchicago.edu/~jhammond/</a><br>
> <br>
> _______________________________________________<br>
> mpi3-rma mailing list<br>
> mpi3-rma@lists.mpi-forum.org<br>
> <a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma</a><br>
</tt></body></html>