[Mpi3-rma] request-based ops
Jim Dinan
james.dinan at gmail.com
Fri Jun 14 12:23:38 CDT 2013
Jeff,
How would iwin_flush be different in practice from doing nothing? I can
give you a trivial implementation that does nothing until you wait on
the request, at which point I call blocking win_flush (a sketch is
below). With my implementor hat on, I would view this function as a
performance hint telling the implementation that you want to complete
all operations issued so far and none issued after. Given a
high-quality implementation that actually makes background progress on
one-sided communication, this should be unnecessary, and I would
probably implement it as above. If you want to complete only specific
operations, the request-generating functions are available.
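
Here is a minimal sketch of that trivial implementation. The MPIX_
names and the request struct are hypothetical, invented only to
illustrate the deferred-flush idea:

    #include <mpi.h>

    /* Hypothetical request object: it just records the flush
     * arguments; no work is started. */
    typedef struct {
        int     rank;
        MPI_Win win;
    } iflush_req_t;

    /* "Nonblocking" flush that defers everything. */
    int MPIX_Win_iflush(int rank, MPI_Win win, iflush_req_t *req)
    {
        req->rank = rank;   /* remember what to flush ... */
        req->win  = win;
        return MPI_SUCCESS; /* ... and return immediately */
    }

    /* Completion: all of the actual work happens here, in the
     * blocking flush. */
    int MPIX_Win_iflush_wait(iflush_req_t *req)
    {
        return MPI_Win_flush(req->rank, req->win);
    }
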
~Jim.
On Fri, Jun 14, 2013 at 11:09 AM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:
> Yes, I was making a disingenuous argument to push back against the
> fallacy that iwin_flush is not required.
>
> Jeff
>
> On Fri, Jun 14, 2013 at 10:25 AM, Jim Dinan <james.dinan at gmail.com> wrote:
> > Hey now, blocking doesn't mean that MPI /must/ block, it only means
> > that it /can/ block. Blocking and synchronizing are certainly not the
> > same; Ssend synchronizes, which is a stronger semantic than blocking.
> >
> > ~Jim.
> >
> >
> > On Wed, Jun 12, 2013 at 5:07 PM, Jeff Hammond <jhammond at alcf.anl.gov>
> > wrote:
> >>
> >> Sure, let's separate these issues. I think Pirate RMA (RRPUT,
> >> RRACCUMULATE and RRRGET_ACCUMULATE) is a more _local_ solution in the
> >> MPI standard sense :-)
> >>
> >> SEND is also a nonblocking operation because the implementation can
> >> always buffer the data and return before the remote RECV is posted.
> >> Only SSEND is blocking if one asserts that blocking is equivalent to
> >> "requires remote agency to complete".
> >>
> >> So basically, I completely agree with the Forum back-pushers that
> >> IWIN_FLUSH etc. are as useful as ISEND.
> >>
> >> Jeff
> >>
> >> On Wed, Jun 12, 2013 at 4:58 PM, Pavan Balaji <balaji at mcs.anl.gov>
> >> wrote:
> >> >
> >> > The push-back at the Forum was that WIN_FLUSH and WIN_UNLOCK are
> >> > local operations (in MPI's definition of local), so they are already
> >> > nonblocking. I was not at the Forum. I pointed out to Wesley after
> >> > he returned that they are only local in the simple case where no one
> >> > else is holding the lock. Otherwise they are not.
> >> >
> >> > Further, the proposal is not only about WIN_FLUSH and WIN_UNLOCK.
> >> > It was for a long list of blocking functions: COMM_ACCEPT,
> >> > COMM_CONNECT, WIN_FENCE, WIN_COMPLETE, WIN_WAIT, WIN_UNLOCK,
> >> > WIN_FLUSH, WIN_FLUSH_LOCAL, WIN_FLUSH_ALL, WIN_FLUSH_LOCAL_ALL, ...
> >> > The point was that MPI should deal with concepts, not specific
> >> > function exceptions. Nonblocking communication is one such concept,
> >> > and it should be valid for all operations (that are not local).
> >> >
> >> > FWIW, my proposal was only to add nonblocking variants to blocking
> >> > calls. Adding extra per-op remote completion requests is a separate
> >> > issue and should not be mixed with this. I personally think it's
> >> > useful (in fact, my original proposal for RPUT/RGET equivalents had
> >> > it), but that's still a separate issue.
> >> >
> >> > -- Pavan
> >> >
> >> >
> >> > On 06/12/2013 09:41 AM, Jeff Hammond wrote:
> >> >>
> >> >> So Wes was telling me about IWIN_FLUSH yesterday (this function is a
> >> >> great idea, btw) and the alternative of adding double-request RPUT
> and
> >> >> RACCUMULATE that provide a request for both local and remote
> >> >> completion. By induction, this would imply a triple-request
> >> >> RGET_ACCUMULATE operation, which I also think is a good idea. It's
> >> >> really silly to have to wait for the result buffer to be ready
> >> >> before reusing the origin buffer in RGET_ACCUMULATE. It seems that many
> >> >> implementations will be able to send the entire origin buffer a
> >> >> nontrivial amount of time before the result comes back, particularly
> >> >> if the remote side doesn't have hardware-based progress and the data
> >> >> has to wait in a network buffer.
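> >> >>
> >> >> A hypothetical signature sketch for that triple-request variant
> >> >> (no such MPIX_ function exists; the argument list just extends
> >> >> MPI_Rget_accumulate with separate completion requests):
> >> >>
> >> >> int MPIX_Rrrget_accumulate(
> >> >>     const void *origin_addr, int origin_count,
> >> >>     MPI_Datatype origin_datatype,
> >> >>     void *result_addr, int result_count,
> >> >>     MPI_Datatype result_datatype,
> >> >>     int target_rank, MPI_Aint target_disp, int target_count,
> >> >>     MPI_Datatype target_datatype, MPI_Op op, MPI_Win win,
> >> >>     MPI_Request *origin_req,  /* origin buffer reusable  */
> >> >>     MPI_Request *remote_req,  /* op applied at target    */
> >> >>     MPI_Request *result_req); /* result buffer populated */
> >> >>
> >> >> Waiting on origin_req alone would free the origin buffer as soon
> >> >> as the data is on the wire, without waiting for the result to
> >> >> come back.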
> >> >>
> >> >> I'm sorry that I missed the last day of the Forum in San Jose. I
> >> >> didn't know these operations were on the table. They sound like a
> >> >> great idea to me.
> >> >>
> >> >> Jeff
> >> >>
> >> >
> >> > --
> >> > Pavan Balaji
> >> > http://www.mcs.anl.gov/~balaji
> >>
> >>
> >>
> >> --
> >> Jeff Hammond
> >> Argonne Leadership Computing Facility
> >> University of Chicago Computation Institute
> >> jhammond at alcf.anl.gov / (630) 252-5381
> >> http://www.linkedin.com/in/jeffhammond
> >> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> >> ALCF docs: http://www.alcf.anl.gov/user-guides
> >
> >
> >
>
>
>
> --
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> ALCF docs: http://www.alcf.anl.gov/user-guides