<HTML>
<HEAD>
<TITLE>Re: [mpi3-coll] Non-blocking Collectives Proposal Draft</TITLE>
</HEAD>
<BODY>
<FONT FACE="Calibri, Verdana, Helvetica, Arial"><SPAN STYLE='font-size:11pt'>Here are my comments. These are on the original draft that was sent out, so others may have already commented on some of these; I could not download from the web on the plane :-)<BR>
Section 1.2, paragraph 2: “The matching of those operations is ruled by the order....” This is a bit confusing, as the paragraph mentions both point-to-point operations and blocking collective operations. I would suggest changing this to “The matching of those blocking collective operations is ruled by the order ....”<BR>
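For concreteness, here is how I read the order-based matching rule (my own sketch, not text from the draft; buf_a and buf_b are made-up buffers):<BR>
<BR>
/* Every process posts the same blocking collectives in the same order on comm. */<BR>
/* The first MPI_Bcast on each rank matches the first one on every other rank,  */<BR>
/* purely by posting order, since collectives carry no tags.                    */<BR>
MPI_Bcast(buf_a, 1, MPI_INT, 0, comm);<BR>
MPI_Bcast(buf_b, 1, MPI_INT, 0, comm);<BR>
<BR>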
Section 2: “High-quality ...” - In general I am against these sorts of comments, even though they are strewn throughout the standard. There are many tradeoffs in implementing a communications library, and what may be good in one instance may not be appropriate in another. It may be more appropriate to state “This enables the application to take advantage of asynchronous progress, if the implementation provides such a capability...” While I agree with the sentiment, I disagree with the categorization.<BR>
Section 2.1: Instead of the term “nested collectives”, “multiple outstanding non-blocking collectives” seems clearer to me.<BR>
The comment that calling MPI_Request_free() is not useful on the send side is not quite clear to me. Why is the send side different from the receive side? In either case, I do not think that we should allow freeing a request in the middle of a collective (as one can for point-to-point communications).<BR>
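For reference, the point-to-point pattern being alluded to looks roughly like this (my sketch; buf, count, dest, and tag are placeholders):<BR>
<BR>
/* Legal for point-to-point: free the request without completing it.      */<BR>
/* The send still takes place, but the sender only learns that the buffer */<BR>
/* may be reused through some later synchronization with the receiver.    */<BR>
MPI_Isend(buf, count, MPI_INT, dest, tag, comm, &req);<BR>
MPI_Request_free(&req);<BR>
<BR>
Doing the same after a nonblocking collective would leave the caller with no standard way to learn when the operation has completed.<BR>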
<BR>
The paragraph on the bottom of page 3 is confusing. After mentioning that we can have multiple outstanding collectives, the last sentence seems to imply that in such a case, all would have to be of the same type (such as ibcast), which I do not think is the intent.<BR>
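To make the intended reading concrete, something like the following is what I would expect to be allowed (a sketch assuming the proposed MPI_Ibcast/MPI_Ibarrier bindings; buf and comm are placeholders):<BR>
<BR>
MPI_Request reqs[2];<BR>
/* Two different nonblocking collectives outstanding at the same time, */<BR>
/* issued in the same order on every process of comm.                  */<BR>
MPI_Ibcast(buf, 1, MPI_INT, 0, comm, &reqs[0]);<BR>
MPI_Ibarrier(comm, &reqs[1]);<BR>
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);<BR>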
<BR>
Section 2.3: As I mentioned before, I do not believe we should specify that an implementation should support more than a minimum of 1 (i.e., must provide support for this). Especially as system sizes increase markedly, we need to be careful about what sort of resource requirements we place on an implementation.<BR>
<BR>
Rich<BR>
<BR>
<BR>
On 10/16/08 4:46 PM, "Torsten Hoefler" <<a href="mailto:htor@cs.indiana.edu">htor@cs.indiana.edu</a>> wrote:<BR>
<BR>
</SPAN></FONT><BLOCKQUOTE><FONT FACE="Calibri, Verdana, Helvetica, Arial"><SPAN STYLE='font-size:11pt'>Hello Alexander,<BR>
> Generally, some sentences start with lower case letters that should be<BR>
> capitalized.<BR>
yes, I fixed some but will re-read it later this week.<BR>
<BR>
> Page 3, top. We probably should not explain here how to ensure progress<BR>
> for nonblocking ops. This matter should be covered by the general<BR>
> progress description elsewhere.<BR>
I agree, I deleted it (it's already in the nonblocking point-to-point<BR>
chapter).<BR>
<BR>
> Page 3, lower half. Look how progress is defined in this list by<BR>
> referring to the pt2pt progress rules.<BR>
what do you mean by that? I would just reference the current MPI<BR>
definition - NBCs seem to fit this model nicely.<BR>
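For example (my sketch, assuming the proposed MPI_Ibcast binding and a<BR>
made-up do_some_local_work() routine), the usual test-and-compute loop<BR>
from nonblocking point-to-point carries over unchanged:<BR>
<BR>
MPI_Request req;<BR>
int done = 0;<BR>
MPI_Ibcast(buf, count, MPI_INT, 0, comm, &req);<BR>
while (!done) {<BR>
&nbsp;&nbsp;do_some_local_work();&nbsp;&nbsp;/* overlap local computation with the collective */<BR>
&nbsp;&nbsp;MPI_Test(&req, &done, MPI_STATUS_IGNORE);<BR>
}<BR>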
<BR>
> Ibid. There are already 3 classes of requests: pt2pt, generalized, and<BR>
> file I/O ones.<BR>
yes<BR>
<BR>
> Page 3, bottom. How many nonblocking ops can overlap with a blocking<BR>
> one? I.e., can one have the following situation:<BR>
><BR>
> Process 0: MPI_Ibarrier(req); MPI_Wait(req); MPI_Bcast<BR>
> Process 1: MPI_Ibarrier(req); MPI_Bcast; MPI_Wait(req)<BR>
yes, this is legal code and should work. Any objections?<BR>
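Written out more explicitly, that scenario would be (my sketch, assuming<BR>
the proposed MPI_Ibarrier binding; rank, buf, and comm are placeholders):<BR>
<BR>
MPI_Request req;<BR>
MPI_Ibarrier(comm, &req);<BR>
if (rank == 0) {<BR>
&nbsp;&nbsp;MPI_Wait(&req, MPI_STATUS_IGNORE);&nbsp;&nbsp;/* finish the barrier first */<BR>
&nbsp;&nbsp;MPI_Bcast(buf, 1, MPI_INT, 0, comm);&nbsp;/* then enter the blocking bcast */<BR>
} else {<BR>
&nbsp;&nbsp;MPI_Bcast(buf, 1, MPI_INT, 0, comm);&nbsp;/* bcast while the barrier is still pending */<BR>
&nbsp;&nbsp;MPI_Wait(&req, MPI_STATUS_IGNORE);<BR>
}<BR>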
<BR>
> Page 5, bottom. 32767 simultaneous ops to support is a tall order. I'd<BR>
> say, 1 (one) would be a good lower limit, thus allowing no overlap. Or<BR>
> even 0 (zero), meaning no nonblocking collective support at all. The<BR>
> rest of this passage would probably be better reformulated as an advice<BR>
> to implementors.<BR>
yes it is - I'm very indecisive. We have to discuss this at the Forum in<BR>
Chicago.<BR>
<BR>
> In the margin: editing a PDF requires creativity (exemplified by the<BR>
> earlier comments) or a pretty expensive Acrobat. Maybe we can find some<BR>
> other way of distributing and commenting on our drafts?<BR>
oh yes, I fully agree. But the final document should be in MPI style, so<BR>
I thought I should start with LaTeX. Do you have any suggestions? A wiki<BR>
seems suboptimal because we would have to re-format everything (which<BR>
might not be too bad). LaTeX source in svn would be best, but does<BR>
everybody have LaTeX? I'm open to any suggestion!<BR>
<BR>
For now, I uploaded the changed version to the wikipage (to avoid<BR>
attachments on the mailinglist):<BR>
<a href="https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/NBColl">https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/NBColl</a><BR>
<BR>
Thanks for the comments!<BR>
<BR>
Best,<BR>
Torsten<BR>
<BR>
--<BR>
bash$ :(){ :|:&};: --------------------- <a href="http://www.unixer.de/">http://www.unixer.de/</a> -----<BR>
Torsten Hoefler | Postdoctoral Researcher<BR>
Open Systems Lab | Indiana University <BR>
150 S. Woodlawn Ave. | Bloomington, IN, 474045, USA<BR>
Lindley Hall Room 135 | +01 (812) 855-3608<BR>
_______________________________________________<BR>
mpi3-coll mailing list<BR>
<a href="mpi3-coll@lists.mpi-forum.org">mpi3-coll@lists.mpi-forum.org</a><BR>
<a href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-coll">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-coll</a><BR>
<BR>
</SPAN></FONT></BLOCKQUOTE>
</BODY>
</HTML>