[mpi3-coll] Non-blocking Collectives Proposal Draft

Supalov, Alexander alexander.supalov at intel.com
Thu Oct 16 16:00:39 CDT 2008

Dear Torsten,

Thank you. The point-to-point progress comment below is addressed now
that you have removed the separate nonblocking collective progress
description.

As for the example: if it is legal, we should probably count the
blocking operations as well when deciding how many simultaneous
operations are to be supported.

Why "if" above: I'm not sure what a delayed nonblocking barrier would
mean for process 1. Won't one process possibly block somewhere? Can you
explain this please?

The format question is a complicated one. I think it is worth discussing
at the Forum, as we are likely to have more and more proposals
approaching the final stage, when editing them in PDF format will become
an issue.

Best regards.


-----Original Message-----
From: mpi3-coll-bounces at lists.mpi-forum.org
[mailto:mpi3-coll-bounces at lists.mpi-forum.org] On Behalf Of Torsten
Sent: Thursday, October 16, 2008 10:46 PM
To: MPI-3 Collective Subgroup Discussions
Subject: Re: [mpi3-coll] Non-blocking Collectives Proposal Draft

Hello Alexander,
> Generally, some sentences start with lower case letters that should be
> capitalized.
Yes, I fixed some of them and will re-read the draft later this week.

> Page 3, top. We probably should not explain here how to ensure
> progress for nonblocking ops. This matter should be covered by the
> general progress description elsewhere.
I agree; I deleted it (it's already covered by the nonblocking
point-to-point progress rules).

> Page 3, lower half. Look how progress is defined in this list by
> referring to the pt2pt progress rules.
What do you mean by that? I would just reference the current MPI
progress definition; NBCs seem to fit this model nicely.

> Ibid. There are already 3 classes of requests: pt2pt, generalized, and
> file I/O ones.

> Page 3, bottom. How many nonblocking ops can overlap with a blocking
> one? I.e., can one have the following situation:
> Process 0                  Process 1
> MPI_Ibarrier(req)          MPI_Ibarrier(req)
> MPI_Wait(req)
> MPI_Bcast                  MPI_Bcast
>                            MPI_Wait(req)
Yes, this is legal code and should work. Any objections?
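To make the pattern above concrete, here is a sketch in C, assuming the
draft's MPI_Ibarrier(comm, &req) signature (the function is proposed,
not yet standardized). Process 0 completes the barrier before the
broadcast, while process 1 lets the blocking broadcast overlap the
still-pending barrier:

```c
/* Sketch of the overlap pattern from the example above.
 * Assumes the proposed MPI_Ibarrier(comm, &req) signature;
 * build with an MPI implementation, e.g.:  mpicc overlap.c && mpirun -n 2 a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Both processes start the nonblocking barrier. */
    MPI_Ibarrier(MPI_COMM_WORLD, &req);

    if (rank == 0) {
        /* Process 0 completes the barrier before the broadcast... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        value = 42;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    } else {
        /* ...while process 1 issues the blocking broadcast first and
         * completes the pending barrier only afterwards. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    printf("rank %d got %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```

If this ordering is legal, an implementation must be able to make
progress on the nonblocking barrier while a blocking collective on the
same communicator is in flight, which is exactly the question about how
many simultaneous operations must be supported.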

> Page 5, bottom. 32767 simultaneous ops to support is a tall order. I'd
> say, 1 (one) would be a good lower limit, thus allowing no overlap. Or
> even 0 (zero), meaning no nonblocking collective support at all. The
> rest of this passage would probably be better reformulated as advice
> to implementors.
Yes, it is; I'm very indecisive here. We have to discuss this at the
Forum.
> In the margin: editing a PDF requires creativity (exemplified by the
> earlier comments) or a pretty expensive Acrobat. Maybe we can find some
> other way of distributing and commenting on our drafts?
Oh yes, I fully agree. But the final document should be in MPI style, so
I thought I should start with LaTeX. Do you have any suggestions? A wiki
seems suboptimal because we would have to re-format everything (which
might not be too bad). LaTeX source in svn would be best, but does
everybody have LaTeX? I'm open to any suggestion!

For now, I uploaded the changed version to the wiki page (to avoid
attachments on the mailing list):

Thanks for the comments!


 bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
Torsten Hoefler       | Postdoctoral Researcher
Open Systems Lab      | Indiana University    
150 S. Woodlawn Ave.  | Bloomington, IN, 47405, USA
Lindley Hall Room 135 | +01 (812) 855-3608