[mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C

Richard Treumann treumann at [hidden]
Tue Jan 29 11:40:19 CST 2008


Agreed - the migration and checkpoint/restart issues are already clear in
MPI 1.1

Dick Treumann  -  MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363
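
For context, a minimal C sketch (my own, not taken from the quoted mails, and assuming any MPI installation) of the semantics being agreed on here: each call reports the processor at the moment of the call, so after a migration or a restart the two names printed below may legitimately differ.

  /* Sketch only: the name is queried at two points in time; under
   * process migration or checkpoint/restart the MPI implementation
   * is allowed to return different names for the two calls. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      char name[MPI_MAX_PROCESSOR_NAME];
      int resultlen;

      MPI_Init(&argc, &argv);

      MPI_Get_processor_name(name, &resultlen);
      printf("at startup:       %.*s\n", resultlen, name);

      /* ... long-running work, during which the process might be
       * migrated or checkpointed and restarted on another node ... */

      MPI_Get_processor_name(name, &resultlen);
      printf("later in the run: %.*s\n", resultlen, name);

      MPI_Finalize();
      return 0;
  }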

mpi-21-bounces_at_[hidden] wrote on 01/28/2008 03:24:32 AM:

> Dick,
>
> you're right, and it is already decided:
> MPI 1.1 Sect. 7.1 page 193, lines 13-14:
>   "This routine returns the name of the processor on which it
>   was called at the moment of the call."
> And lines 22-25:
>   "Rationale. This function allows MPI implementations that do
>   process migration to return the current processor. Note that
>   nothing in MPI requires or defines process migration; this
>   definition of MPI GET PROCESSOR NAME simply allows such
>   an implementation. (End of rationale.)"
>
> That is, it returns the current location, which may change in the case
> of checkpoint/restart and for all the other reasons you mentioned.
>
> I would say that the sentences above are clear enough.
>
> Okay?
>
> Best regards
> Rolf
>
>
> On Fri, 25 Jan 2008 15:32:07 -0500
>  Richard Treumann <treumann_at_[hidden]> wrote:
> >We also should decide whether every call to MPI_GET_PROCESSOR_NAME
> >across the life of the task must return the same name.  On very large
> >machines running very large jobs, migration of some tasks off of failing
> >nodes and on to robust nodes will become more interesting.
> >Checkpoint/restart raises the same issue.  A restarted job will probably
> >not have the same task-to-node mapping.
> >
> >We can either require the name to remain constant and allow that it
> >might be a "virtual" name, or require that it return an "actual" name
> >but allow it to change.
> >
> >               Dick
> >
> >Dick Treumann  -  MPI Team/TCEM
> >IBM Systems & Technology Group
> >Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> >Tele (845) 433-7846         Fax (845) 433-8363
> >
> >
> >mpi-21-bounces_at_[hidden] wrote on 01/25/2008 12:00:42 PM:
> >
> >> This is a discussion-point for MPI 2.1, Ballot 4.
> >>
> >> This is a follow up to:
> >>   MPI_GET_PROCESSOR_NAME and Fortran
> >>   in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-
> >> errata/index.html
> >> with mail discussion in
> >>   http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-
> >> errata/discuss/procname/
> >>
> >> _________________________________________________________________
> >>
> >> MPI_GET_PROCESSOR_NAME and Fortran
> >> and in C and all MPI_xxxx_GET_NAME routines
> >> -------------------------------------------
> >>
> >> Summary: The returning of strings is defined quite differently in
> >> MPI_GET_PROCESSOR_NAME and in the MPI_xxxx_GET_NAME routines.
> >> Not all implementations handle zero-filling in the same way, and
> >> what they do is, at least for MPI_GET_PROCESSOR_NAME, different from
> >> what the current standard requires. I propose to adapt the standard
> >> to the common, reasonable implementations.
> >> The very short proposal for clarification can be found at the
> >> end of this text, see C. Proposal.
> >>
> >> A. MPI_GET_PROCESSOR_NAME
> ...
> >> B. MPI_COMM_GET_NAME (and other MPI_xxxx_GET_NAME)
> ...
> >> C. Proposal:
> >> ------------
> >>
> >> Add the following sentences to the current interface definitions:
> >> ------------------
> >> In C, a \0 is additionally stored at name[resultlen]. resultlen
> >> cannot be larger than MPI_MAX_PROCESSOR_NAME-1
> >> (or MPI_MAX_OBJECT_NAME-1). In Fortran, name(resultlen+1:)
> >> is filled with spaces. resultlen cannot be larger than
> >> MPI_MAX_PROCESSOR_NAME (or MPI_MAX_OBJECT_NAME).
> >> ------------------
> >>
> >> Typo correction:
> >> ----------------
> >> MPI-1.1 Sect. 7.1, page 193, beginning of line 29 reads
> >>    examine the ouput argument
> >> But should read (additional t in output)
> >>    examine the output argument
> >>
> >>
> >> Okay?
> >> _________________________________________________________________
> >>
> >> Best regards
> >> Rolf
> >>
> >> PS: Attached my tests and short protocols
> >>
> >>
> >>
> >> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> >> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> >> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> >> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> >> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> >> [attachment "mpi_get_xxx_name.tar.gz" deleted by Richard
> >> Treumann/Poughkeepsie/IBM]
> >> _______________________________________________
> >> mpi-21 mailing list
> >> mpi-21_at_[hidden]
> >> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
>
>
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.cs.uiuc.edu/mailman/listinfo/mpi-21
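
For reference, a small C check (my own sketch, not one of Rolf's attached tests) of the guarantee the proposal would add for the C binding: after the call, name[resultlen] holds a '\0' and resultlen is at most MPI_MAX_PROCESSOR_NAME-1.

  #include <assert.h>
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      char name[MPI_MAX_PROCESSOR_NAME];
      int resultlen;

      MPI_Init(&argc, &argv);
      MPI_Get_processor_name(name, &resultlen);

      /* The proposed wording guarantees both of these in C. */
      assert(resultlen <= MPI_MAX_PROCESSOR_NAME - 1);
      assert(name[resultlen] == '\0');

      printf("%s\n", name);

      MPI_Finalize();
      return 0;
  }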




