[Mpi3-rma] RMA proposal 1 update
Underwood, Keith D
keith.d.underwood at intel.com
Wed May 19 14:38:43 CDT 2010
Yes, it can be worse. See later emails in the thread...
Keith
From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-bounces at lists.mpi-forum.org] On Behalf Of Vinod tipparaju
Sent: Wednesday, May 19, 2010 1:34 PM
To: MPI 3.0 Remote Memory Access working group
Subject: Re: [Mpi3-rma] RMA proposal 1 update
I am very much for supporting collective remote completion. Many optimizations are possible here.
>The question then turns to the "other networks". If you can't figure out remote completion, then the collective is going to be pretty heavy, right?
Maybe this will help make the point. When a network does support a collective remote-completion semantic (say, using its collective network's reduce operation), would this functionality not help get better performance on that network? Would this functionality, for n completions done collectively, ever be worse than completion_1 + completion_2 + completion_3 + ... + completion_n? If not, why not have it?
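A minimal sketch of the two completion patterns being compared, written against the MPI-3-style names the draft eventually grew into (MPI_Win_flush, MPI_Win_flush_all). The collective MPI_Win_all_flush_all discussed in this thread was never standardized, so the "collective" variant below merely emulates it with a flush plus a barrier; the helper names are hypothetical. An implementation with a collective network reduce could fold the two steps into one, which is the optimization being argued for here.

#include <mpi.h>

/* Point-wise completion: the origin separately completes its outstanding
 * operations at every target -- n independent completions. */
static void complete_pointwise(MPI_Win win, int nranks)
{
    for (int target = 0; target < nranks; ++target)
        MPI_Win_flush(target, win);
}

/* Emulation of the proposed collective: every rank flushes all of its own
 * operations, then a barrier on the window's communicator establishes that
 * all operations by all ranks are remotely complete.  A network collective
 * reduce could combine these two steps into one. */
static void complete_collective(MPI_Win win, MPI_Comm win_comm)
{
    MPI_Win_flush_all(win);   /* my operations, at all targets */
    MPI_Barrier(win_comm);    /* everyone's operations, everywhere */
}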
> From: keith.d.underwood at intel.com
> To: mpi3-rma at lists.mpi-forum.org
> Date: Sun, 16 May 2010 20:32:36 -0600
> Subject: Re: [Mpi3-rma] RMA proposal 1 update
>
> Before doing that, can someone sketch out the platform/API and the implementation that makes that more efficient? There is no gain for Portals (3 or 4). There is no gain for anything that supports Cray SHMEM reasonably well (shmem_quiet() has approximately the same semantics as MPI_flush_all). Hrm, you can probably say the same thing about anything that supports UPC well - a strict access is basically an MPI_flush_all(); MPI_Put(); MPI_flush_all();... Also, I thought somebody said that IB gave you a notification of remote completion...
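A rough sketch of the strict-access analogy above, again using the MPI-3-style names the draft grew into (the email's MPI_flush_all became MPI_Win_flush_all); the strict_put helper is hypothetical, and a passive-target epoch (e.g. MPI_Win_lock_all) is assumed to be open on the window.

#include <mpi.h>

/* UPC-style strict remote store rendered in MPI terms: flush everything
 * outstanding, issue the put, then flush again so the store is remotely
 * complete before any later access -- the flush_all; Put; flush_all
 * pattern from the message above. */
static void strict_put(const double *src, int target, MPI_Aint disp, MPI_Win win)
{
    MPI_Win_flush_all(win);                                       /* order against earlier ops   */
    MPI_Put(src, 1, MPI_DOUBLE, target, disp, 1, MPI_DOUBLE, win);
    MPI_Win_flush_all(win);                                       /* remote completion of store  */
}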
>
> The question then turns to the "other networks". If you can't figure out remote completion, then the collective is going to be pretty heavy, right?
>
> Keith
>
> > -----Original Message-----
> > From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> > bounces at lists.mpi-forum.org] On Behalf Of Jeff Hammond
> > Sent: Sunday, May 16, 2010 7:27 PM
> > To: MPI 3.0 Remote Memory Access working group
> > Subject: Re: [Mpi3-rma] RMA proposal 1 update
> >
> > Torsten,
> >
> > There seemed to be decent agreement on adding MPI_Win_all_flush_all
> > (equivalent to MPI_Win_flush_all called from every rank in the
> > communicator associated with the window) since this function can be
> > implemented far more efficiently as a collective than the equivalent
> > point-wise function calls.
> >
> > Is there a problem with adding this to your proposal?
> >
> > Jeff
> >
> > On Sun, May 16, 2010 at 12:48 AM, Torsten Hoefler <htor at illinois.edu>
> > wrote:
> > > Hello all,
> > >
> > > After the discussions at the last Forum I updated the group's first
> > > proposal.
> > >
> > > The proposal (one-side-2.pdf) is attached to the wiki page
> > > https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/RmaWikiPage
> > >
> > > The changes with regards to the last version are:
> > >
> > > 1) added MPI_NOOP to MPI_Get_accumulate and MPI_Accumulate_get
> > >
> > > 2) (re)added MPI_Win_flush and MPI_Win_flush_all to passive target mode
> > >
> > > Some remarks:
> > >
> > > 1) We didn't straw-vote on MPI_Accumulate_get, so this function might
> > > go. The removal would be very clean.
> > >
> > > 2) Should we allow MPI_NOOP in MPI_Accumulate? (This does not make sense and is incorrect in my current proposal.)
> > >
> > > 3) Should we allow MPI_REPLACE in MPI_Get_accumulate/MPI_Accumulate_get? (This would make sense and is allowed in the current proposal, but we didn't talk about it in the group.)
> > >
> > >
> > > All the Best,
> > > Torsten
> > >
> > > --
> > > bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
> > > Torsten Hoefler | Research Associate
> > > Blue Waters Directorate | University of Illinois
> > > 1205 W Clark Street | Urbana, IL, 61801
> > > NCSA Building | +01 (217) 244-7736
> > >
> >
> >
> >
> > --
> > Jeff Hammond
> > Argonne Leadership Computing Facility
> > jhammond at mcs.anl.gov / (630) 252-5381
> > http://www.linkedin.com/in/jeffhammond
> >
>