[Mpi3-rma] RMA proposal 1 update
Jeff Hammond
jeff.science at gmail.com
Sun May 16 20:27:12 CDT 2010
Torsten,
There seemed to be decent agreement on adding MPI_Win_all_flush_all
(equivalent to MPI_Win_flush_all called from every rank in the
communicator associated with the window), since this function can be
implemented far more efficiently as a collective than the equivalent
point-wise function calls.
Is there a problem with adding this to your proposal?
Jeff
On Sun, May 16, 2010 at 12:48 AM, Torsten Hoefler <htor at illinois.edu> wrote:
> Hello all,
>
> After the discussions at the last Forum I updated the group's first
> proposal.
>
> The proposal (one-side-2.pdf) is attached to the wiki page
> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/RmaWikiPage
>
> The changes with regards to the last version are:
>
> 1) added MPI_NOOP to MPI_Get_accumulate and MPI_Accumulate_get
>
> 2) (re)added MPI_Win_flush and MPI_Win_flush_all to passive target mode
>
> Some remarks:
>
> 1) We didn't straw-vote on MPI_Accumulate_get, so this function might
> go. The removal would be very clean.
>
> 2) Should we allow MPI_NOOP in MPI_Accumulate? (This does not make
> sense and is incorrect in my current proposal.)
>
> 3) Should we allow MPI_REPLACE in MPI_Get_accumulate/MPI_Accumulate_get?
> (this would make sense and is allowed in the current proposal but we
> didn't talk about it in the group)
>
>
> All the Best,
> Torsten
>
> --
> bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
> Torsten Hoefler | Research Associate
> Blue Waters Directorate | University of Illinois
> 1205 W Clark Street | Urbana, IL, 61801
> NCSA Building | +01 (217) 244-7736
> _______________________________________________
> mpi3-rma mailing list
> mpi3-rma at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-rma
>
--
Jeff Hammond
Argonne Leadership Computing Facility
jhammond at mcs.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond