[Mpi3-rma] RMA proposal 1 update

Underwood, Keith D keith.d.underwood at intel.com
Wed May 19 20:34:35 CDT 2010


I wasn't looking for a full enumeration of the API - just the relevant synchronization calls.  I thought there might be all of 4 or 5 of them, and that listing those might let us continue the discussion.

Keith

> -----Original Message-----
> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> bounces at lists.mpi-forum.org] On Behalf Of Jeff Hammond
> Sent: Wednesday, May 19, 2010 7:31 PM
> To: MPI 3.0 Remote Memory Access working group
> Cc: MPI 3.0 Remote Memory Access working group
> Subject: Re: [Mpi3-rma] RMA proposal 1 update
> 
> GA has an online manual, API documentation, numerous tutorials and
> example code, and, finally, journal papers, which cover that nicely
> already. I'll try to clarify any ambiguity that exists after those
> sources are considered.
> 
> A nicer answer is that I will do exactly that as part of something that
> Torsten, others, and I will be working on shortly.
> 
> :)
> 
> Jeff
> 
> Sent from my iPhone
> 
> On May 19, 2010, at 8:25 PM, "Underwood, Keith D"
> <keith.d.underwood at intel.com> wrote:
> 
> > So, perhaps enumerating the relevant GA constructs and their
> > semantics would be informative here...
> >
> > Keith
> >
> >> -----Original Message-----
> >> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> >> bounces at lists.mpi-forum.org] On Behalf Of Jeff Hammond
> >> Sent: Wednesday, May 19, 2010 7:23 PM
> >> To: MPI 3.0 Remote Memory Access working group
> >> Cc: MPI 3.0 Remote Memory Access working group
> >> Subject: Re: [Mpi3-rma] RMA proposal 1 update
> >>
> >> Can I mix that call with other sync mechanisms?
> >>
> >> So I implement GA by calling fence inside of GA_Create to expose the
> >> window and using fence+barrier for GA_Sync - but can I mix in lock and
> >> unlock, as well as the forthcoming p2p flush (as I can do in GA/ARMCI
> >> now)?
> >>
> >> The standard presents three synchronization schemes. It does not
> >> suggest one can intermix them at will.
> >>
> >> Jeff
> >>
> >> Sent from my iPhone
> >>
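
For concreteness, a minimal sketch of the pattern described above - fence inside GA_Create to expose the window, fence+barrier for GA_Sync, and a lock/unlock put mixed into the same scheme. The ga_* wrappers are hypothetical placeholders, not the real GA/ARMCI code, and whether the final lock/unlock mix is legal is exactly the open question:

    #include <mpi.h>

    static MPI_Win ga_win;

    /* Placeholder for GA_Create: create the window, then open an
       exposure/access epoch with a collective fence.               */
    void ga_create(void *base, MPI_Aint bytes, MPI_Comm comm)
    {
        MPI_Win_create(base, bytes, 1, MPI_INFO_NULL, comm, &ga_win);
        MPI_Win_fence(0, ga_win);
    }

    /* Placeholder for GA_Sync: complete all outstanding RMA and
       synchronize all processes (fence+barrier, as described above). */
    void ga_sync(MPI_Comm comm)
    {
        MPI_Win_fence(0, ga_win);
        MPI_Barrier(comm);
    }

    /* The open question: can a passive-target lock/unlock (or a future
       per-target flush) be mixed into the fence-based scheme above?    */
    void ga_put(double *buf, int count, int target, MPI_Aint disp)
    {
        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, ga_win);
        MPI_Put(buf, count, MPI_DOUBLE, target, disp, count, MPI_DOUBLE, ga_win);
        MPI_Win_unlock(target, ga_win);
    }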
> >> On May 19, 2010, at 2:58 PM, "Underwood, Keith D"
> >> <keith.d.underwood at intel.com> wrote:
> >>
> >>> Jeff,
> >>>
> >>> Another question for you:  If you are going to call
> >>> MPI_Win_all_flush_all, why not just use active target and call
> >>> MPI_Win_fence?
> >>>
> >>> Keith
> >>>
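
The contrast behind Keith's question, sketched with hypothetical update routines. MPI_Win_all_flush_all is the collective proposed in this thread, and MPI_Win_lock_all/MPI_Win_unlock_all are assumed here as a way to open a passive-target access epoch to every rank; none of the three is MPI-2, so all are sketch assumptions:

    #include <mpi.h>

    /* Active target: every rank brackets its RMA calls with collective fences. */
    void update_fence(MPI_Win win, double *buf, int n, int target, MPI_Aint disp)
    {
        MPI_Win_fence(0, win);
        MPI_Put(buf, n, MPI_DOUBLE, target, disp, n, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);
    }

    /* Passive target: one long-lived access epoch; completion is requested
       collectively with the proposed MPI_Win_all_flush_all instead of
       closing and reopening epochs around every communication phase.       */
    void update_passive(MPI_Win win, double *buf, int n, int target, MPI_Aint disp)
    {
        MPI_Win_lock_all(0, win);
        MPI_Put(buf, n, MPI_DOUBLE, target, disp, n, MPI_DOUBLE, win);
        MPI_Win_all_flush_all(win);    /* proposed collective flush */
        MPI_Win_unlock_all(win);
    }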
> >>>> -----Original Message-----
> >>>> From: mpi3-rma-bounces at lists.mpi-forum.org [mailto:mpi3-rma-
> >>>> bounces at lists.mpi-forum.org] On Behalf Of Jeff Hammond
> >>>> Sent: Sunday, May 16, 2010 7:27 PM
> >>>> To: MPI 3.0 Remote Memory Access working group
> >>>> Subject: Re: [Mpi3-rma] RMA proposal 1 update
> >>>>
> >>>> Torsten,
> >>>>
> >>>> There seemed to be decent agreement on adding MPI_Win_all_flush_all
> >>>> (equivalent to MPI_Win_flush_all called from every rank in the
> >>>> communicator associated with the window) since this function can be
> >>>> implemented far more efficiently as a collective than the equivalent
> >>>> point-wise function calls.
> >>>>
> >>>> Is there a problem with adding this to your proposal?
> >>>>
> >>>> Jeff
> >>>>
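
A sketch of the semantics Jeff describes: the proposed collective should behave as if every rank of the window's communicator issued MPI_Win_flush_all itself, while leaving room for an implementation to aggregate the completion traffic. MPIX_Win_all_flush_all is a hypothetical placeholder name, and MPI_Win_flush_all is the per-process call from the current proposal:

    #include <mpi.h>

    /* Reference (non-collective) semantics only; a real collective
       implementation could aggregate and overlap this completion
       traffic, which is the efficiency argument made above.         */
    int MPIX_Win_all_flush_all(MPI_Win win)
    {
        /* Every rank calls this, flushing all of its outstanding
           one-sided operations to all targets of the window.        */
        return MPI_Win_flush_all(win);
    }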
> >>>> On Sun, May 16, 2010 at 12:48 AM, Torsten Hoefler <htor at illinois.edu>
> >>>> wrote:
> >>>>> Hello all,
> >>>>>
> >>>>> After the discussions at the last Forum I updated the group's first
> >>>>> proposal.
> >>>>>
> >>>>> The proposal (one-side-2.pdf) is attached to the wiki page
> >>>>> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/RmaWikiPage
> >>>>>
> >>>>> The changes with respect to the last version are:
> >>>>>
> >>>>> 1) added MPI_NOOP to MPI_Get_accumulate and MPI_Accumulate_get
> >>>>>
> >>>>> 2) (re)added MPI_Win_flush and MPI_Win_flush_all to passive target mode
> >>>>>
> >>>>> Some remarks:
> >>>>>
> >>>>> 1) We didn't straw-vote on MPI_Accumulate_get, so this function might
> >>>>>    go. The removal would be very clean.
> >>>>>
> >>>>> 2) Should we allow MPI_NOOP in MPI_Accumulate? (This does not make
> >>>>>    sense and is incorrect in my current proposal.)
> >>>>>
> >>>>> 3) Should we allow MPI_REPLACE in MPI_Get_accumulate/MPI_Accumulate_get?
> >>>>>    (this would make sense and is allowed in the current proposal but we
> >>>>>    didn't talk about it in the group)
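
For illustration, what change (1) and remark (3) amount to in practice: MPI_NOOP in MPI_Get_accumulate makes the call an atomic read, and MPI_REPLACE makes it an atomic swap. MPI_NOOP is the proposal's spelling, and the argument order below (origin, result, target, op, win) is an assumption rather than the proposal's exact signature:

    #include <mpi.h>

    /* Sketch; argument order assumed, not taken from the proposal text. */
    void example(MPI_Win win, int target, MPI_Aint disp)
    {
        double result, newval = 42.0;

        /* MPI_NOOP: nothing is accumulated at the target, so the call
           reduces to an atomic read of the target location.             */
        MPI_Get_accumulate(NULL, 0, MPI_DOUBLE, &result, 1, MPI_DOUBLE,
                           target, disp, 1, MPI_DOUBLE, MPI_NOOP, win);

        /* MPI_REPLACE: the old target value is returned and the new value
           is stored, i.e. an atomic swap.                                */
        MPI_Get_accumulate(&newval, 1, MPI_DOUBLE, &result, 1, MPI_DOUBLE,
                           target, disp, 1, MPI_DOUBLE, MPI_REPLACE, win);
    }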
> >>>>>
> >>>>>
> >>>>> All the Best,
> >>>>> Torsten
> >>>>>
> >>>>> --
> >>>>> bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
> >>>>> Torsten Hoefler         | Research Associate
> >>>>> Blue Waters Directorate | University of Illinois
> >>>>> 1205 W Clark Street     | Urbana, IL, 61801
> >>>>> NCSA Building           | +01 (217) 244-7736
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Jeff Hammond
> >>>> Argonne Leadership Computing Facility
> >>>> jhammond at mcs.anl.gov / (630) 252-5381
> >>>> http://www.linkedin.com/in/jeffhammond
> >>>>
> >>>
> >
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma



