[Mpi3-ft] Stabilization Updated & MPI_Comm_size question
Richard Treumann
treumann at us.ibm.com
Fri Sep 17 09:37:22 CDT 2010
If I used some mechanism that allows the MPI launch to give me fewer
processes than I asked for, and it does give me fewer than I asked for,
what does this mean?
"number of ranks in the communicator regardless of whether they're
operational"
It is sort of weird to say that the processes I was never given are "in
the communicator". I can see the logic in counting processes that have
died as "ranks in the communicator".
I would not count chickens that never hatched.
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
From: "Bronevetsky, Greg" <bronevetsky1 at llnl.gov>
To: "Bronis R. de Supinski" <bronis at llnl.gov>, "MPI 3.0 Fault Tolerance
    and Dynamic Process Control working Group" <mpi3-ft at lists.mpi-forum.org>
Date: 09/17/2010 10:03 AM
Subject: Re: [Mpi3-ft] Stabilization Updated & MPI_Comm_size question
Sent by: mpi3-ft-bounces at lists.mpi-forum.org
I agree that MPI_Comm_size should return the number of ranks in the
communicator regardless of whether they are operational. However, this
just pushes the question further back: if the user asked for N
processes but only M have started up, how many ranks should
MPI_COMM_WORLD have? Either choice is self-consistent from MPI's point
of view. If it is N, then some ranks will be dead. If it is M, then the
application may not have enough processes to work with. The former (N)
case has the property that the user does not need to add code to check
for this condition, since their existing error-checking code will catch
the situation. The latter case (M) is nice because it is cheaper to
check whether we got fewer processes than expected than to discover it
by trying to communicate with them.
As such, I don't see a strong motivation for choosing either. However,
we should just pick one and stick with it to avoid unnecessary API
divergence.
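For the M case, here is a minimal sketch of that check, assuming a
hypothetical convention where the launch command passes the requested
count back to the application as its first argument (e.g.
"mpiexec -n 16 ./app 16"). This is illustrative only, not part of any
proposal:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Hypothetical convention: argv[1] carries the requested count. */
    int requested = (argc > 1) ? atoi(argv[1]) : size;
    if (size < requested) {
        /* The M case: fewer processes started than asked for.
           Adapt the decomposition here, or give up cleanly. */
        fprintf(stderr, "got %d of %d requested processes\n",
                size, requested);
    }

    MPI_Finalize();
    return 0;
}
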
Greg Bronevetsky
Lawrence Livermore National Lab
(925) 424-5756
bronevetsky at llnl.gov
http://greg.bronevetsky.com
> -----Original Message-----
> From: mpi3-ft-bounces at lists.mpi-forum.org [mailto:mpi3-ft-
> bounces at lists.mpi-forum.org] On Behalf Of Bronis R. de Supinski
> Sent: Friday, September 17, 2010 5:23 AM
> To: MPI 3.0 Fault Tolerance and Dynamic Process Control working Group
> Subject: Re: [Mpi3-ft] Stabilization Updated & MPI_Comm_size question
>
>
> I agree with Rich and Darius.
>
> On Fri, 17 Sep 2010, Darius Buntinas wrote:
>
> >
> > I don't think we should change the standard in this case. For
> > MPI_Comm_size to have any useful meaning, it needs to return the
> > size of the communicator: i.e., if comm_size returns N, you should
> > be able to do a send to processes 0 through N-1. Of course if some
> > of those processes have failed, you'll get an error associated with
> > the process failure, but never an error for an invalid rank.
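> >
> > A minimal sketch of that semantic (illustrative only; it assumes
> > MPI_ERRORS_RETURN so that failures come back as return codes
> > instead of aborting the job):
> >
> > #include <stdio.h>
> > #include <mpi.h>
> >
> > int main(int argc, char **argv)
> > {
> >     int rank, size, buf = 0;
> >     MPI_Init(&argc, &argv);
> >     /* Report failures as return codes rather than aborting. */
> >     MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     MPI_Comm_size(MPI_COMM_WORLD, &size);
> >     if (rank == 0) {
> >         for (int i = 1; i < size; i++) {
> >             /* Every i below size is a valid rank: a send to a
> >                failed process may return a process-failure error,
> >                but never MPI_ERR_RANK. */
> >             if (MPI_Send(&buf, 1, MPI_INT, i, 0, MPI_COMM_WORLD)
> >                     != MPI_SUCCESS)
> >                 fprintf(stderr, "rank %d appears to have failed\n", i);
> >         }
> >     } else {
> >         MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
> >                  MPI_STATUS_IGNORE);
> >     }
> >     MPI_Finalize();
> >     return 0;
> > }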
> >
> > As discussed in the section about mpiexec, an implementation may
> > decide to provide a soft process count argument. So "mpiexec -n 10
> > -soft 5 ./cpi" can start any number between 5 and 10 processes. But
> > that does not affect the meaning of the size of MPI_COMM_WORLD:
> > regardless of the number of processes the implementation decides to
> > start, MPI_Comm_size will return the _actual_ number of processes
> > started.
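> >
> > For example, after "mpiexec -n 10 -soft 5 ./cpi", a fragment like
> > the following (a sketch; it would sit between MPI_Init and
> > MPI_Finalize) prints the actual count, which may be anywhere from
> > 5 to 10:
> >
> >     int rank, size;
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     MPI_Comm_size(MPI_COMM_WORLD, &size);
> >     if (rank == 0)
> >         printf("launched with %d processes\n", size);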
> >
> > -d
> >
> > On Sep 17, 2010, at 11:22 AM, Graham, Richard L. wrote:
> >
> >> We need to clearly define what N or M is and not leave it to the
> >> implementation. 100% of the codes I have seen over the past 15
> >> years that check this value use it to indicate how many processes
> >> have started. Anything else is really useless, aside from letting
> >> the user find out how many processes actually started up, and from
> >> that how many did not.
> >>
> >> Rich
> >>
> >>
> >> On 9/17/10 4:27 AM, "Josh Hursey" <jjhursey at open-mpi.org> wrote:
> >>
> >> So the Run-Through Stabilization proposal has been updated per our
> >> discussion in the working group meeting at the MPI Forum. The
> >> changes are summarized below:
> >> - Add a Legacy Library Support example
> >> - Clarify new error classes
> >> - Update the MPI_Init and MPI_Finalize wording to be simpler and
> >>   more direct.
> >> - Fix wording of group creation calls versus communicator creation
> >>   calls.
> >>
> >> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ft/run_through_stabilization
> >>
> >>
> >> One question that we discussed quite a bit during the meeting was
> >> the issue of the return value of MPI_Comm_size() when processes
> >> fail during launch. I attempted to capture the discussion in the
> >> room in the Open Question attached to the discussion of MPI_Init:
> >> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/ft/run_through_stabilization#MPI_INIT
> >>
> >> Open question:
> >> If the user asks to start N processes on the command line, and only
> >> M processes were successfully launched (where M < N), then what
> >> should be returned from MPI_COMM_SIZE?
> >>
> >> The return value must be consistent across all alive members of the
> >> group. The issue is whether it should return N or M.
> >>
> >> The feeling in the room was that since the MPI standard does not
> >> define the ability for the user to ask for a specific number of
> >> processes before init, it is hard to define that this is the number
> >> it should be.
> >>
> >> So it is left to the implementation whether it is M or N. If it is
> >> M, then the user has other techniques to find out what they
> >> originally asked for (e.g., by passing it as a command line
> >> argument to the application itself).
> >>
> >>
> >> What do people think about the MPI_Comm_size issue?
> >>
> >> -- Josh
> >>
> >> ------------------------------------
> >> Joshua Hursey
> >> Postdoctoral Research Associate
> >> Oak Ridge National Laboratory
> >> http://www.cs.indiana.edu/~jjhursey
> >>
> >>
_______________________________________________
mpi3-ft mailing list
mpi3-ft at lists.mpi-forum.org
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft