[MPIWG Fortran] Provide Fortran datatypes if Fortran bindings not provided?

Jeff Hammond jeff.science at gmail.com
Thu Feb 18 11:55:20 CST 2016


From MPI 3.1, page 674:

"If an accompanying C++ compiler is missing, then the MPI datatypes in this
table are not defined."

Please tell me why we want to do something different for Fortran.

While we do not currently attach a footnote saying "If an accompanying
Fortran compiler is missing, then the MPI datatypes in this table are not
defined" to the appropriate table, that is not a good reason to define
something brittle.  Let's just add the footnote and call it good.

Users that expect Fortran features without a Fortran compiler are like
people who try to withdraw money from a bank where they do not have an
account.  Both types of people are criminals :-)

Jeff

On Thu, Feb 18, 2016 at 7:10 AM, Jeff Hammond <jeff.science at gmail.com>
wrote:
>
> The only reasonable thing to do here is to treat MPI_INTEGER and the other
Fortran datatype constants as part of the Fortran bindings and not define
them in the absence of a Fortran compiler.  Any other solution is
unreliable, ill-defined, and going to lead to sadness.
>
> More below.
>
> On Thu, Feb 18, 2016 at 3:45 AM, Jeff Squyres (jsquyres) <
jsquyres at cisco.com> wrote:
> >
> > A question has come up recently in the Open MPI community (being
debated in a lengthy thread here
https://www.open-mpi.org/community/lists/users/2016/02/28448.php):
> >
> > If Fortran support is not provided in an Open MPI installation (e.g.,
if Open MPI's configure script detects that there is no Fortran compiler,
and therefore none of the mpif.h, mpi module, or mpi_f08 module are
compiled), should Fortran datatypes (e.g., MPI_INTEGER) be available in the
C bindings?
> >
> > It was pointed out that MPI-3.1 says that the Fortran bindings are
optional -- but datatypes are not currently marked as optional.
> >
> > Hence, even if you don't have a Fortran compiler, you still have to
have declarations for MPI_INTEGER (and friends) in mpi.h.
> >
> > What do people think about this?
> >
> > ARGUMENTS FOR KEEPING IT THE SAME
> > =================================
> >
> > A1. MPI can set MPI_INTEGER to be equivalent to MPI_DATATYPE_NULL.
>
> I will repeat what I said on the Open-MPI list.  THIS IS HORRIBLE.  It
delays the failure until runtime.  There is absolutely no value whatsoever
in allowing a successful build of an application that is going to crash and
burn the moment it touches this datatype, particularly in light of our
discussion about a month ago regarding when MPI_DATATYPE_NULL can be used
(basically never).
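>
> A minimal C sketch of the failure mode (hypothetical: it assumes an
> mpi.h that quietly defines MPI_INTEGER as MPI_DATATYPE_NULL because
> configure found no Fortran compiler):
>
>   #include <mpi.h>
>
>   int main(int argc, char **argv)
>   {
>       int buf = 42;
>       MPI_Init(&argc, &argv);
>       /* Compiles and links without complaint... */
>       MPI_Bcast(&buf, 1, MPI_INTEGER, 0, MPI_COMM_WORLD);
>       /* ...and only here, at runtime, does the library raise an
>          invalid-datatype error and abort under the default error
>          handler. */
>       MPI_Finalize();
>       return 0;
>   }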
>
> I do not want MPI implementations to lie to build systems about the
availability of MPI_INTEGER in C when it is not actually a functional
feature.
>
> Additional reasons: Jed Brown.  PETSc.  <<< This is here for Jed's email
filters.
>
> > A2. MPI could probably figure out what Fortran INTEGER would have been
(i.e., probably equivalent to C int) and just set MPI_INTEGER to be that.
>
> This is also horrible, because Fortran INTEGER does not have fixed
width.  Here is what ISO Fortran 2008 says about INTEGER:
>
> "The processor shall provide at least one representation method with a
decimal exponent range greater than or equal to 18."
>
> Please explain the unambiguous mapping between this definition and the
definition of ISO C int.
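>
> The only thing a C-only program can do is ask the library at runtime,
> and even that only means something when the library was actually
> configured against a real Fortran compiler.  A small sketch (hedged:
> the answer changes under flags like gfortran's -fdefault-integer-8 or
> ifort's -i8):
>
>   #include <mpi.h>
>   #include <stdio.h>
>
>   int main(int argc, char **argv)
>   {
>       int fint_size = 0;
>       MPI_Init(&argc, &argv);
>       /* The width of Fortran INTEGER is a property of the Fortran
>          compiler and its flags; C can only query the library. */
>       MPI_Type_size(MPI_INTEGER, &fint_size);
>       printf("Fortran INTEGER: %d bytes, C int: %zu bytes\n",
>              fint_size, sizeof(int));
>       MPI_Finalize();
>       return 0;
>   }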
>
> >
> > A3. The whole point of having Fortran datatypes available in C is that
even an MPI process written in C can receive/send data from MPI processes
in Fortran.  Hence, *this* MPI process -- compiled by an MPI implementation
that does not have a Fortran compiler -- may not have Fortran support, but
a peer MPI process in the same MPI job may have Fortran support.
>
> Such a process cannot send or receive Fortran INTEGER data because it has
no way of knowing the size of such data.
>
> This sort of use case is best supported by sending and receiving the data
as bytes, because a process built without Fortran support can understand
bytes.
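>
> A sketch of the receiving end in C (an assumption on my part: the
> Fortran peer sends with MPI_BYTE as well, and the two sides agree on
> what the bytes mean):
>
>   #include <mpi.h>
>   #include <stdlib.h>
>
>   /* Receive whatever the Fortran peer sent, as opaque bytes. */
>   void *recv_fortran_bytes(int src, int tag, MPI_Comm comm, int *nbytes)
>   {
>       MPI_Status status;
>       void *buf;
>       MPI_Probe(src, tag, comm, &status);
>       MPI_Get_count(&status, MPI_BYTE, nbytes);
>       buf = malloc((size_t)*nbytes);
>       MPI_Recv(buf, *nbytes, MPI_BYTE, src, tag, comm,
>                MPI_STATUS_IGNORE);
>       return buf;
>   }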
>
> Alternatively, the Fortran processes can use ISO_C_BINDING and convert to
C datatypes before sending.  Since we have standardized the use of Fortran
2008 in MPI-3, there is absolutely no reason why this should not be the
recommended practice, because it is completely reliable and well-defined.
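>
> For illustration, the C side of such an exchange might look like this
> (assumptions: the Fortran peer declared its buffer as integer(c_int)
> from ISO_C_BINDING and sent four values from rank 1 with tag 0 using a
> datatype that matches C int):
>
>   #include <mpi.h>
>   #include <stdio.h>
>
>   void recv_from_fortran_peer(void)
>   {
>       int values[4];
>       /* The values arrive as honest C ints, so this process can
>          actually interpret them, Fortran compiler or not. */
>       MPI_Recv(values, 4, MPI_INT, 1, 0, MPI_COMM_WORLD,
>                MPI_STATUS_IGNORE);
>       printf("first value from the Fortran peer: %d\n", values[0]);
>   }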
>
> > ARGUMENTS FOR A CHANGE
> > ======================
> >
> > B1. Setting MPI_INTEGER (and friends) to MPI_DATATYPE_NULL -- which
will cause a run-time MPI exception -- is somewhat anti-social behavior
(and potentially confusing to the user).
>
> See above.
>
> > B2. An MPI implementation can *assume* but can't *know* what the
size/representation of Fortran datatypes are unless there's a Fortran
compiler with which to test.
>
> You know what happens when you assume, right?
>
> What possible value does guessing serve?  It creates a situation where
sometimes things work and sometimes MPI programs crash and burn.  While we
are at it, let's add MPI_Barrier_sometimes(double randomness_factor)...
>
> > B3. A3 is a somewhat sketchy claim.  It's obviously possible and valid,
but fairly uncommon to have multiple MPI implementation installations
involved in a single execution of an MPI application.
>
> It is valid to have multiple implementations, but not valid to expect
processes that use a Fortran-oblivious MPI library to understand Fortran
data.
>
> Jeff
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>



--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/