<html><body>
<p>One comment - the clarification should not mention the progress engine. We have a general requirement in the standard that MPI make progress, and there is debate about what that requires. (Must there be an asynchronous progress engine?) In general we do not attach a progress semantic to specific MPI calls.<br>
<br>
Even for something like MPI_Wait, there is no statement that MPI_Wait drives a progress engine, only that MPI_Wait returns when the request is complete. It is not part of the MPI_Wait semantics to say that the MPI_Wait call runs a progress engine or does not run one. <br>
<br>
I assume almost every MPI implementation does run its progress engine within a blocked MPI_Wait, and probably no MPI gives the progress engine a time-limited kick in an MPI_Comm_rank call. Either way it is an implementation decision, not part of the standard. (Of course, something like MPI_Comm_rank cannot block, but if an MPI implementor wanted to run the progress engine for a maximum of 50 microseconds on every single MPI call, the standard would not forbid it.)<br>
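<br>
To make that concrete, here is a purely illustrative C sketch (not taken from the standard or from any particular implementation). The standard guarantees only that the MPI_Wait completes the send; whether the purely local MPI_Comm_rank call in between advances any outstanding communication is an implementation decision.<br>
<pre>
/* Purely illustrative sketch (not from the MPI standard): rank 0 posts a
 * nonblocking send, then makes an unrelated local call before waiting.
 * The standard does not say whether MPI_Comm_rank drives a progress
 * engine; it only says that MPI_Wait returns once the request completes. */
#include &lt;mpi.h&gt;

void progress_illustration(void)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        int payload = 42;
        MPI_Isend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

        /* Local call: it must not block; an implementation MAY also use it
         * to advance outstanding communication, but nothing requires that. */
        int again;
        MPI_Comm_rank(MPI_COMM_WORLD, &again);

        /* Only here is completion of the send request guaranteed. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
}
</pre>
<br>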
<br>
Dick <br>
<br>
<br>
Dick Treumann - MPI Team/TCEM <br>
IBM Systems & Technology Group<br>
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601<br>
Tele (845) 433-7846 Fax (845) 433-8363<br>
<br>
<br>
<tt>mpi-21-bounces@cs.uiuc.edu wrote on 02/06/2008 12:37:20 PM:<br>
<br>
> Dries,<br>
> <br>
> Sounds fine. Nobody raised problems with this clarification up to now.<br>
> <br>
> Please can you write a full proposal of exactly what to change/add/...<br>
> <br>
> I would put it to MPI 2.1 Ballot 4.<br>
> <br>
> Best regards<br>
> Rolf<br>
> <br>
> On Mon, 4 Feb 2008 16:24:03 +0100<br>
> Dries Kimpe <Dries.Kimpe@cs.kuleuven.be> wrote:<br>
> >From the standard: (same for MPI_Startall, MPI_Testall)<br>
> ><br>
> >The error-free execution of MPI_WAITALL(count, array_of_requests,<br>
> >array_of_statuses) has the same effect as the execution of<br>
> >MPI_WAIT(&array_of_request[i], &array_of_statuses[i]), for i=0 ,...,<br>
> >count-1, in some arbitrary order.<br>
> ><br>
> >What about count==0? <br>
> ><br>
> >1) Is it allowed? <br>
> ><br>
> >2) If it is allowed, should a valid pointer be provided as the<br>
> >array_of_requests argument?<br>
> ><br>
> >Looking at mpich-1.0.6p1, I see the following:<br>
> ><br>
> ><br>
> >Function Allows count==0 Allows array_of_requests==NULL when count==0<br>
> >MPI_Testall Y Y<br>
> >MPI_Testsome Y Y<br>
> >MPI_Testany Y Y<br>
> >MPI_Waitsome Y Y<br>
> >MPI_Waitall Y Y<br>
> >MPI_Waitany Y Y<br>
> >MPI_Startall Y N<br>
> ><br>
> >For MPI_Startall:<br>
> ><br>
> >Fatal error in MPI_Startall: Invalid argument, error stack:<br>
> >MPI_Startall(147): MPI_Startall(count=0, req_array=(nil)) failed<br>
> >MPI_Startall(80).: Null pointer in parameter array_of_requests<br>
> >[0]0: Return code = 0, signaled with Interrupt<br>
> ><br>
> >Considering that the other ...all functions allow count==0 and ignore the array<br>
> >in this case, the behaviour of MPI_Startall is probably just an oversight<br>
> >in mpich.<br>
> ><br>
> >For some reason, OpenMPI has exactly the same behavior;<br>
> >MPI_Startall doesn't allow req_array==0 even if count==0<br>
> >(MPI_ERR_REQUEST is raised/returned)<br>
> ><br>
> >There is also an issue: progress. For example, in OpenMPI, calling<br>
> >MPI_Testall with count==0 doesn't do anything at all. More specifically, it<br>
> >doesn't call the progress engine.<br>
> ><br>
> >Mpich, however, does call the progress engine even if count==0.<br>
> >(On the other hand, its MPI_Waitall does NOT try to make progress if count==0.)<br>
> ><br>
> >I propose to clarify where needed and explicitly allow count==0.<br>
> >Also, if count==0, a null pointer should be accepted for the request<br>
> >array, and no guarantee of progress should be made.<br>
> ><br>
> >If count==0, the returned error code should be MPI_SUCCESS (unless of<br>
> >course, an asynchronous error is detected).<br>
> ><br>
> >It would suffice to add "When count is zero, this call has no effect."<br>
> >(For MPI_Startall, MPI_Testall, MPI_Waitall this can go right after the<br>
> >text quoted at the top of this message; for MPI_Waitany, MPI_Testany,<br>
> >MPI_Waitsome, MPI_Testsome, the addition should be at the end of the<br>
> >paragraph.)<br>
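<br>
A purely illustrative C sketch of what the proposed clarification would allow for count==0 (this shows the proposed behaviour, not what the current standard text guarantees):<br>
<pre>
/* Illustrative sketch of the proposed count==0 behaviour only; the current
 * standard text does not spell this out.  Under the proposal, each call
 * below has no effect, returns MPI_SUCCESS, accepts NULL for the request
 * array, and carries no progress guarantee. */
#include &lt;stddef.h&gt;
#include &lt;mpi.h&gt;

void empty_completion_calls(void)
{
    int err, flag;

    err = MPI_Waitall(0, NULL, MPI_STATUSES_IGNORE);          /* MPI_SUCCESS */
    err = MPI_Testall(0, NULL, &flag, MPI_STATUSES_IGNORE);   /* MPI_SUCCESS */
    err = MPI_Startall(0, NULL);                               /* MPI_SUCCESS */

    (void) err;
    (void) flag;
}
</pre>
<br>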
> ><br>
> ><br>
> > Greetings,<br>
> > Dries<br>
> ><br>
> >_______________________________________________<br>
> >mpi-forum mailing list<br>
> >mpi-forum@cs.uiuc.edu<br>
> ><a href="http://lists.cs.uiuc.edu/mailman/listinfo/mpi-forum">http://lists.cs.uiuc.edu/mailman/listinfo/mpi-forum</a><br>
> <br>
> <br>
> <br>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de<br>
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530<br>
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832<br>
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner<br>
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)<br>
> _______________________________________________<br>
> mpi-21 mailing list<br>
> mpi-21@cs.uiuc.edu<br>
> <a href="http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21">http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21</a><br>
</tt></body></html>