[mpi-21] Ballot 4 - Re: [mpi-forum] MPI_Startall / MPI_Waitall / MPI_Testall clarification?
Richard Treumann
treumann at [hidden]
Wed Feb 6 12:22:51 CST 2008
One comment: the clarification should not mention the progress engine. We
have a general requirement in the standard that MPI make progress, and
there is debate about what that requires (must there be an asynchronous
progress engine?). In general we do not attach a progress semantic to
specific MPI calls.
Even for something like MPI_Wait, there is no statement that MPI_Wait
drives a progress engine, only that MPI_Wait returns when the request is
complete. It is not part of the MPI_Wait semantics to say whether the
MPI_Wait call runs a progress engine or not.
I assume almost every MPI implementation does run its progress engine
within a blocked MPI_Wait, and probably no MPI gives the progress engine a
time-limited kick in an MPI_Comm_rank call. Either way it is an
implementation decision, not part of the standard. (Of course something
like MPI_Comm_rank could not block, but if an MPI implementor wanted to
run the progress engine for a maximum of 50 microseconds on every single
MPI call, the standard would not forbid it.)
Dick
Dick Treumann - MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
mpi-21-bounces_at_[hidden] wrote on 02/06/2008 12:37:20 PM:
> Dries,
>
> Sounds fine. Nobody raised problems with this clarification up to now.
>
> Please can you write a full proposal of exactly what to change/add/...
>
> I would put it to MPI 2.1 Ballot 4.
>
> Best regards
> Rolf
>
> On Mon, 4 Feb 2008 16:24:03 +0100
> Dries Kimpe <Dries.Kimpe_at_[hidden]> wrote:
> >From the standard: (same for MPI_Startall, MPI_Testall)
> >
> >The error-free execution of MPI_WAITALL(count, array_of_requests,
> >array_of_statuses) has the same effect as the execution of
> >MPI_WAIT(&array_of_request[i], &array_of_statuses[i]), for i=0 ,...,
> >count-1, in some arbitrary order.
> >
> >What about count==0?
> >
> >1) Is it allowed?
> >
> >2) If it is allowed, should a valid pointer be provided as the
> >array_of_requests argument?
> >
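> >In concrete terms, the question is whether a call like the following
> >minimal sketch is legal, and whether the NULL request array is allowed:
> >
> >  #include <mpi.h>
> >
> >  int main(int argc, char **argv)
> >  {
> >      MPI_Init(&argc, &argv);
> >      /* count == 0: nothing to wait for.  Is this allowed, and may
> >         array_of_requests be NULL here? */
> >      MPI_Waitall(0, NULL, MPI_STATUSES_IGNORE);
> >      MPI_Finalize();
> >      return 0;
> >  }
> >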
> >Looking at mpich-1.0.6p1, I see the following:
> >
> >
> >Function        Allows count==0    Allows bufptr=0 when count==0
> >MPI_Testall            Y                        Y
> >MPI_Testsome           Y                        Y
> >MPI_Testany            Y                        Y
> >MPI_Waitsome           Y                        Y
> >MPI_Waitall            Y                        Y
> >MPI_Waitany            Y                        Y
> >MPI_Startall           Y                        N
> >
> >For MPI_Startall:
> >
> >Fatal error in MPI_Startall: Invalid argument, error stack:
> >MPI_Startall(147): MPI_Startall(count=0, req_array=(nil)) failed
> >MPI_Startall(80).: Null pointer in parameter array_of_requests[0]
> >0: Return code = 0, signaled with Interrupt
> >
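> >A minimal reproducer for this difference would look roughly like the
> >following (a sketch, not taken from either implementation's test suite):
> >
> >  #include <mpi.h>
> >
> >  int main(int argc, char **argv)
> >  {
> >      int flag;
> >      MPI_Init(&argc, &argv);
> >
> >      /* Accepted with count==0 and a NULL array by mpich, per the
> >         table above: */
> >      MPI_Waitall(0, NULL, MPI_STATUSES_IGNORE);
> >      MPI_Testall(0, NULL, &flag, MPI_STATUSES_IGNORE);
> >
> >      /* Aborts in mpich-1.0.6p1 with the error stack shown above: */
> >      MPI_Startall(0, NULL);
> >
> >      MPI_Finalize();
> >      return 0;
> >  }
> >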
> >Considering that the other ...all functions allow count==0 and ignore
> >the array in this case, the behaviour of MPI_Startall is probably just
> >an oversight in mpich.
> >
> >For some reason, OpenMPI has exactly the same behavior:
> >MPI_Startall doesn't allow req_array==0 even if count==0
> >(MPI_ERR_REQUEST is raised/returned).
> >
> >There is also an issue: progress. For example, in OpenMPI, calling
> >MPI_Testall with count==0 doesn't do anything at all. More
> >specifically, it doesn't call the progress engine.
> >
> >Mpich, however, does call the progress engine even if count==0.
> >(On the other hand, MPI_Waitall does NOT try to make progress if
> >count==0.)
> >
> >I propose to clarify where needed, and explicitly allow count==0.
> >Also, if count==0, then 0 should be accepted as the pointer for the
> >request array, and no guarantee of progress should be made.
> >
> >If count==0, the returned error code should be MPI_SUCCESS (unless,
> >of course, an asynchronous error is detected).
> >
> >It would suffice to add "When count is zero, this call has no effect."
> >(For MPI_Startall, MPI_Testall, MPI_Waitall this can go right after the
> >text quoted at the top of this message; for MPI_Waitany, MPI_Testany,
> >MPI_Waitsome, MPI_Testsome, the addition should be at the end of the
> >paragraph.)
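> >
> >Until such a clarification is adopted, user code that might hit the
> >MPI_Startall case can simply guard the call. A minimal sketch follows
> >(the helper name start_all_safe is purely illustrative, not an
> >existing API):
> >
> >  #include <mpi.h>
> >
> >  /* Like MPI_Startall, but tolerates count == 0 with a NULL request
> >     array even on implementations that currently reject it. */
> >  static int start_all_safe(int count, MPI_Request *reqs)
> >  {
> >      if (count == 0)
> >          return MPI_SUCCESS;   /* proposed semantics: no effect */
> >      return MPI_Startall(count, reqs);
> >  }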
> >
> >
> > Greetings,
> > Dries
> >
> >_______________________________________________
> >mpi-forum mailing list
> >mpi-forum_at_[hidden]
> >http://lists.cs.uiuc.edu/mailman/listinfo/mpi-forum
>
>
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21