[mpi-21] MPI_GET_PROCESSOR_NAME Fortran and C
Rolf Rabenseifner
rabenseifner at [hidden]
Fri Jan 25 11:00:42 CST 2008
This is a discussion point for MPI 2.1, Ballot 4.
This is a follow-up to:
MPI_GET_PROCESSOR_NAME and Fortran
in http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/index.html
with mail discussion in
http://www.cs.uiuc.edu/homes/wgropp/projects/parallel/MPI/mpi-errata/discuss/procname/
_________________________________________________________________
MPI_GET_PROCESSOR_NAME and Fortran
and in C, and in all MPI_xxxx_GET_NAME routines
-------------------------------------------
Summary: The returning of strings is defined quite differently
in MPI_GET_PROCESSOR_NAME and the MPI_xxxx_GET_NAME routines.
Not all implementations handle the zero-filling in the same way,
and what they do is, at least for MPI_GET_PROCESSOR_NAME,
different from what the current standard requires. I propose
to adapt the standard to the common, reasonable implementations.
The very short proposal for clarification can be found at the
end of this text, see C. Proposal.
A. MPI_GET_PROCESSOR_NAME
-------------------------
MPI_GET_PROCESSOR_NAME defines the returned string with several
sentences:
(1) OUT name A unique specifier for the actual
(as opposed to virtual) node.
(2) OUT resultlen Length (in printable characters)
of the result returned in name
(3) The argument name must represent storage that is at least
MPI_MAX_PROCESSOR_NAME characters long.
(4) MPI_GET_PROCESSOR_NAME may write up to this many characters
into name.
(5) The number of characters actually written is returned
in the output argument, resultlen.
(6) The user must provide at least MPI_MAX_PROCESSOR_NAME
space to write the processor name -- processor names
can be this long.
(7) The user should examine the ouput argument, resultlen,
to determine the actual length of the name.
I tested 5 implementations with C and Fortran.
I called MPI_GET_PROCESSOR_NAME with a string (i.e., a character
array) of size MPI_MAX_PROCESSOR_NAME+2; see the sketch below.
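For illustration, here is a minimal C sketch of such a probe. It is
my reconstruction of the idea, not the attached test program; the
sentinel character '#' and the printed output are arbitrary choices:

  #include <stdio.h>
  #include <string.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      /* two extra bytes, so that writes beyond
         MPI_MAX_PROCESSOR_NAME become visible */
      char name[MPI_MAX_PROCESSOR_NAME + 2];
      int  resultlen, i;

      MPI_Init(&argc, &argv);
      memset(name, '#', sizeof(name));          /* sentinel fill */
      MPI_Get_processor_name(name, &resultlen);

      /* find the last byte the call overwrote (assuming the
         returned name itself contains no '#') */
      for (i = (int)sizeof(name) - 1; i >= 0 && name[i] == '#'; i--)
          ;
      printf("resultlen=%d, last modified index=%d, name[resultlen]%s\\0\n",
             resultlen, i, (name[resultlen] == '\0') ? "==" : "!=");
      MPI_Finalize();
      return 0;
  }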
C-Interface:
------------
All tested C implementations returned the processor name
in name[0..resultlen-1] and the non-printable character
\0 in name[resultlen].
All other elements of name were unchanged.
(1,2,3,4,6,7) are fulfilled;
(5) is __NOT__ fulfilled, because resultlen+1 characters
are written into name.
My opinion: The returned name and resultlen are what the user
expects, but the standard needs a clarification.
Fortran-Interface:
------------------
All tested Fortran implementations return the processor name
in name(1:resultlen); the rest of the string is
filled with spaces.
(1,3,6,7) are fulfilled;
(2,4,5) are __NOT__ fulfilled, because
MPI_MAX_PROCESSOR_NAME+2 characters are written into name.
My opinion: The returned name and resultlen are what the user
expects, but the standard needs a clarification.
B. MPI_COMM_GET_NAME (and other MPI_xxxx_GET_NAME)
--------------------------------------------------
The string output is defined with different wording:
(1) OUT comm_name the name previously stored on the
communicator, or an empty string if no
such name exists (string)
(2) OUT resultlen length of returned name (integer)
(3) name should be allocated so that it can hold a resulting
string of length MPI_MAX_OBJECT_NAME characters.
(4) If the user has not associated a name with a communicator,
or an error occurs, MPI_COMM_GET_NAME will return an empty
string (all spaces in Fortran, "" in C and C++).
and in the definition of MPI_COMM_SET_NAME:
(5) The length of the name which can be stored is limited
to the value of MPI_MAX_OBJECT_NAME in Fortran and
MPI_MAX_OBJECT_NAME-1 in C and C++ to allow for the null
terminator.
(6) Attempts to put names longer than this will result in
truncation of the name.
(7) MPI_MAX_OBJECT_NAME must have a value of at least 64.
I called MPI_COMM_GET_NAME with a string (i.e., a character
array) of size MPI_MAX_OBJECT_NAME+2; see the sketch below.
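The same kind of C probe, sketched for MPI_COMM_GET_NAME (again a
reconstruction, not the attached test; the name "my_comm" is an
arbitrary example):

  #include <stdio.h>
  #include <string.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      /* two extra sentinel bytes beyond MPI_MAX_OBJECT_NAME */
      char comm_name[MPI_MAX_OBJECT_NAME + 2];
      int  resultlen, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_set_name(MPI_COMM_WORLD, "my_comm");

      memset(comm_name, '#', sizeof(comm_name));  /* sentinel fill */
      MPI_Comm_get_name(MPI_COMM_WORLD, comm_name, &resultlen);

      /* count trailing sentinel bytes to see how far the call wrote */
      for (i = (int)sizeof(comm_name) - 1;
           i >= 0 && comm_name[i] == '#'; i--)
          ;
      printf("resultlen=%d, last modified index=%d\n", resultlen, i);
      MPI_Finalize();
      return 0;
  }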
C-Interface:
------------
All tested C implementations returned the communicator name
in comm_name[0..resultlen-1] and the non-printable character
\0 in comm_name[resultlen].
One implementation additionally filled the rest, up to
comm_name[MPI_MAX_OBJECT_NAME-1], with \0.
In all other implementations, all other elements of comm_name
were unchanged.
(1-7) are fulfilled, although the returned zero-filling in
comm_name depends on the implementation.
My opinion: A clarification can make the API unambiguous.
Fortran-Interface:
------------------
All tested Fortran implementations return the communicator name
in comm_name(1:resultlen); the rest of the string is
filled with spaces.
(1-7) are fulfilled,
although it is nowhere specified that the string must be filled
up with spaces -- and not only up to position MPI_MAX_OBJECT_NAME,
but all the way to the end of comm_name.
My opinion: The returned name and resultlen are what the user
expects, but the standard needs a clarification.
C. Proposal:
------------
Add the following sentences to the current interface definitions:
------------------
In C, a \0 is additionally stored at name[resultlen]. resultlen
cannot be larger than MPI_MAX_PROCESSOR_NAME-1
(or MPI_MAX_OBJECT_NAME-1). In Fortran, name(resultlen+1:)
is filled with spaces. resultlen cannot be larger than
MPI_MAX_PROCESSOR_NAME (or MPI_MAX_OBJECT_NAME).
------------------
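To illustrate what the clarified C semantics would guarantee to the
user (a sketch, assuming the proposal above is adopted): the result
is always a valid null-terminated string and needs no manual
termination.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      char name[MPI_MAX_PROCESSOR_NAME];
      int  resultlen;

      MPI_Init(&argc, &argv);
      MPI_Get_processor_name(name, &resultlen);
      /* name[resultlen] == '\0' is guaranteed, so name can be
         printed directly as a C string */
      printf("running on %s (resultlen=%d)\n", name, resultlen);
      MPI_Finalize();
      return 0;
  }

In Fortran, correspondingly, name(resultlen+1:) is all blanks, so
TRIM(name) (or name(1:resultlen)) yields the bare processor name.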
Typo correction:
----------------
MPI-1.1 Sect. 7.1, page 193, beginning of line 29 reads
examine the ouput argument
but should read (with the missing t in "output" added)
examine the output argument
Okay?
_________________________________________________________________
Best regards
Rolf
PS: Attached are my tests and short protocols.
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mpi_get_xxx_name.tar.gz
Type: application/x-gzip
Size: 2880 bytes
Desc: mpi_get_xxx_name.tar.gz
URL: <http://lists.mpi-forum.org/pipermail/mpi-21/attachments/20080125/1e9d8fe3/attachment.bin>