[Mpi-forum] C++ types inaccessible after #281

Rolf Rabenseifner rabenseifner at hlrs.de
Thu Jun 28 01:28:26 CDT 2012


> The standard does not say much about datatypes that do not correspond
> to primitive datatypes of the host language. 

I'm not sure what you mean by "host language".
MPI-2.2 p503:7-8 clearly states:
"All predefined datatypes can be used in datatype 
constructors in any language."

The tables on p516-518 define all MPI predefined datatype 
handles for C and Fortran types in the MPI C, the MPI 
Fortran, and the MPI C++ bindings. 

There is obviously one exception: the middle table on page 517,
because we forgot to define handles for the four C++ types
in the other language bindings (C and Fortran).

I would propose to fill the gap by
- defining MPI_CPP_BOOL in C and Fortran for the C++ bool type
- defining MPI_CPP_COMPLEX, MPI_CPP_DOUBLE_COMPLEX, and
  MPI_CPP_LONG_DOUBLE_COMPLEX in C and Fortran,
  if the C/C++ standards do not require that 
  C "... _Complex" and C++ "std::complex<...>" are internally 
  implemented in an identical way,
  and otherwise adding in the table on page 516
  - "C++ std::complex<float>" after "float _Complex"
  - "C++ std::complex<double>" after "double _Complex"
  - "C++ std::complex<long double>" after "long double _Complex"
- The "new" datatypes MPI_CPP_BOOL, and perhaps MPI_CPP_COMPLEX,
  MPI_CPP_DOUBLE_COMPLEX, MPI_CPP_LONG_DOUBLE_COMPLEX,
  must be also mentioned in Table 13.2, on page 433.
  This was also overseen since MPI-2.0.
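
To illustrate what such handles would buy the user, here is a
minimal sketch of a C++ program that uses the MPI C language binding
with the proposed MPI_CPP_BOOL handle. Note that MPI_CPP_BOOL is only
the name proposed in this email, not an existing handle; everything
else is the standard MPI C API:

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* A C++ bool buffer: today no predefined handle in the C
         binding describes it; MPI_CPP_BOOL is the proposed one. */
      bool flags[4] = {true, false, true, false};

      if (rank == 0)
          MPI_Send(flags, 4, MPI_CPP_BOOL, 1, 0, MPI_COMM_WORLD);
      else if (rank == 1)
          MPI_Recv(flags, 4, MPI_CPP_BOOL, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);

      MPI_Finalize();
      return 0;
  }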
    
Based on the clear statement on MPI-2.2 p503:7-8,
  "All predefined datatypes can be used in datatype 
  constructors in any language."
I would say this should be a one-vote MPI-2.2 erratum.
As I mentioned in earlier emails,
a few MPI-2.2 errata are necessary as true errata and not
only as modifications in MPI-3.0.
They should also be mentioned in the change log.
These MPI-2.2 errata tickets are:
 - Ticket #171 (MPI_C_BOOL has external32 size 1 byte)
 - Tickets #166 (list item 1), #192, and #202,
   because they relate to C++, which may be removed
   in MPI-3.0 due to #281 

To be decided by the C/C++ specialists:
 - Is MPI_CPP_..._COMPLEX needed, or is MPI_C_..._COMPLEX
   enough for C++? (See the layout sketch after this list.)
 - If MPI_CPP_..._COMPLEX is needed, what are the external32
   sizes? 
   (I expect they are the same as for the corresponding C types.) 
 - What is the external32 size for MPI_CPP_BOOL?
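
For the complex questions, the relevant guarantee is that C++11
requires std::complex<T> to be layout-compatible with T[2] (real
part first), which matches the layout C99 requires for "T _Complex".
A minimal sketch (plain C++11, no MPI, all names standard) that
checks this assumption:

  #include <complex>
  #include <cstring>

  int main()
  {
      /* C++11 guarantees std::complex<double> is laid out as two
         consecutive doubles, real part first -- the same layout C99
         requires for double _Complex. If that is deemed sufficient,
         MPI_C_DOUBLE_COMPLEX could describe std::complex<double>
         buffers as well. */
      static_assert(sizeof(std::complex<double>) == 2 * sizeof(double),
                    "std::complex<double> must be two doubles");

      std::complex<double> z(1.0, 2.0);
      double parts[2];
      std::memcpy(parts, &z, sizeof z);
      return (parts[0] == 1.0 && parts[1] == 2.0) ? 0 : 1;
  }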
   
To be decided by the whole Forum:
 - Do we adopt these MPI-2.2 errata to help C++
   programmers who use the MPI C language binding?

Rolf   

----- Original Message -----
> From: "Marc Snir" <snir at mcs.anl.gov>
> To: "Main MPI Forum mailing list" <mpi-forum at lists.mpi-forum.org>
> Sent: Wednesday, June 27, 2012 11:04:55 PM
> Subject: Re: [Mpi-forum] C++ types inaccessible after #281
> The standard does not say much about datatypes that do not correspond
> to primitive datatypes of the host language. I would suggest that if
> we define an MPI_Complex type for a language that does not have a
> primitive complex, then the implementation will decide how it defines
> a complex (e.g., two successive doubles) and perform reductions
> accordingly. A good quality implementation will make C++/C interaction
> easier.
> 
> On Jun 27, 2012, at 3:10 PM, Jeff Squyres wrote:
> 
> > On Jun 27, 2012, at 3:59 PM, N.M. Maclaren wrote:
> >
> >> Yes, this is an ungodly mess. I don't agree with the forum's
> >> decision, but let's accept it. The only reasonable approach is the
> >> one of having MPI types for C++'s types, which could still be used
> >> via the C interface from a C++ compiler. I believe that is what you
> >> were referring to, and what is intended.
> >
> >
> > Thanks for the excellent summary.
> >
> > Don't forget a few points:
> >
> > - As Jim pointed out, all MPI datatypes are supposed to be available
> > in all language bindings. Hence, if we have (for example)
> > MPI_CXX_COMPLEX, it needs to be available in both C and the various
> > Fortran bindings.
> >
> > - This is not as wonky as it sounds; you may have a C++ MPI process
> > send an MPI_CXX_COMPLEX to a Fortran MPI process who simply receives
> > that message into a buffer and doesn't try to interpret it. The
> > Fortran MPI process could then send that same MPI_CXX_COMPLEX
> > process back to a C++ MPI process later, who could then rightfully
> > interpret it properly (this example can be made for any
> > language-specific type, actually).
> >
> > - From an MPI implementation's perspective, we don't have to worry
> > too much about the behavior and semantics of a given type -- we
> > mainly only have to worry about the byte layout in memory. Hence,
> > getting the size of a given type (and sometimes its alignment) is
> > all that is necessary for a C-based MPI implementation to be able to
> > properly pack/unpack/send/receive MPI_CXX_COMPLEX. (right?)
> >
> > - That being said, handling reductions on MPI_CXX_COMPLEX from
> > C-based MPI implementations may be a little problematic if the types
> > are not equivalent between C and C++, for example...
> >
> > --
> > Jeff Squyres
> > jsquyres at cisco.com
> > For corporate legal information go to:
> > http://www.cisco.com/web/about/doing_business/legal/cri/
> >
> >

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)


