[mpi3-coll] NBC Draft Revision 5
Bronis R. de Supinski
bronis at llnl.gov
Wed Feb 25 15:39:42 CST 2009
I prefer option 2.
On Wed, 25 Feb 2009, Adam Moody wrote:
> Hello all,
> I opened a ticket to add the text to the collective intro regarding
> access restrictions on MPI_IN_PLACE like we have in the NBC proposal:
> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/131
> Erez reviewed it and pointed out that it's not really necessary. All
> collectives specify that MPI_IN_PLACE should be applied to the receive
> buffer, which has the stronger constraint, except for the root in scatter
> and scatterv, which does not receive anything. Then, after I looked over
> things again, I realized that my current statements are actually more
> restrictive than they need to be for scatter{v}.
>
> "When using the "in place" option, message buffers function as both
> send and receive buffers. Such buffers should not be modified or
> accessed until the operation completes."
>
> Since nothing is received at the root in scatter{v}, the root could in
> fact issue several NBC scatter{v} calls on different communicators
> (reusing an active send buffer); see the sketch below. To fix this, we
> have two options:
> 1) Change "When using the 'in place' option, message buffers
> function" to "When using the 'in place' option, receive message
> buffers may function"
> 2) Close ticket 131 and strike the similar lines in the NBC
> proposal -- just rely on the implicit send/receive buffer constraints
> I'm in favor of option #2 myself, since I'm not sure how much info #1 adds.
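>
> To make the scenario concrete, here is a minimal sketch. It assumes the
> proposal's nonblocking scatter takes the MPI-3-style form MPI_Iscatter
> with a request argument (the draft may spell the name differently), and
> it assumes at most 64 ranks so a fixed-size send buffer suffices:
>
>   #include <mpi.h>
>
>   int main(int argc, char **argv)
>   {
>       int rank;
>       int sendbuf[64] = {0};   /* root's send buffer; assumes <= 64 ranks */
>       int recvbuf[2];          /* non-roots need two distinct recv buffers */
>       MPI_Request req[2];
>       MPI_Comm comm2;
>
>       MPI_Init(&argc, &argv);
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>       MPI_Comm_dup(MPI_COMM_WORLD, &comm2);   /* second communicator */
>
>       if (rank == 0) {
>           /* Root passes MPI_IN_PLACE as recvbuf, so it receives nothing;
>              sendbuf is only read.  Starting a second scatter from the
>              same, still-active send buffer is the case in question. */
>           MPI_Iscatter(sendbuf, 1, MPI_INT, MPI_IN_PLACE, 1, MPI_INT,
>                        0, MPI_COMM_WORLD, &req[0]);
>           MPI_Iscatter(sendbuf, 1, MPI_INT, MPI_IN_PLACE, 1, MPI_INT,
>                        0, comm2, &req[1]);
>       } else {
>           /* Non-roots do receive, so they must use different buffers. */
>           MPI_Iscatter(NULL, 0, MPI_INT, &recvbuf[0], 1, MPI_INT,
>                        0, MPI_COMM_WORLD, &req[0]);
>           MPI_Iscatter(NULL, 0, MPI_INT, &recvbuf[1], 1, MPI_INT,
>                        0, comm2, &req[1]);
>       }
>       MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
>
>       MPI_Comm_free(&comm2);
>       MPI_Finalize();
>       return 0;
>   }
>
> Under option #2 the root would be governed only by the implicit
> send-buffer rules, which (as argued above) permit reusing the active
> send buffer here; the current 'in place' wording would forbid it.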
>
> If we go with option #2, it would leave a single-sentence paragraph on
> page 51, line 11 in the NBCv5 text. We could append this sentence to
> the paragraph above it.
>
> Any opinions?
> -Adam
>
> Torsten Hoefler wrote:
>
> >Hello workgroup,
> >I just posted revision 5 of the NBC draft to ticket #109.
> >
> >The draft is also available at [1] and a diff to revision 4 at [2].
> >
> >[1]: http://www.unixer.de/sec/nbc-proposal-rev-5.pdf
> >[2]: http://www.unixer.de/sec/nbc-proposal-rev-5.diff
> >
> >Please review and comment!
> >
> >All the Best,
> > Torsten