[MPI3 Fortran] Deprecate mpif.h?

Rolf Rabenseifner rabenseifner at hlrs.de
Thu Mar 4 08:28:20 CST 2010


First I want to summarize, then I have a few questions:

The new "use mpi3" includes all MPI routines.
Handles are specified with special types.

The ierror argument can be omitted.
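As a minimal sketch of what a caller could then look like (the
handle type name TYPE(MPI_Comm) and a correspondingly typed
MPI_COMM_WORLD are only assumptions here, not agreed syntax):

   SUBROUTINE example_rank
     USE mpi3                        ! the module must be used before IMPLICIT NONE
     IMPLICIT NONE
     TYPE(MPI_Comm) :: comm          ! handle with a special type (name assumed)
     INTEGER :: rank, ierror
     comm = MPI_COMM_WORLD           ! assumes the predefined handle has the new type
     CALL MPI_COMM_RANK(comm, rank, ierror)   ! with ierror ...
     CALL MPI_COMM_RANK(comm, rank)           ! ... or with ierror omitted
   END SUBROUTINE example_rank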

There are absolutely no changes in the meaning of the
datatype and buffer arguments between an MPI library that
uses implicit (f77) interfaces and one that uses explicit
(f90) interfaces.

From the MPI specification's point of view, there should not
be any difference between MPI libraries using explicit
interfaces with some vendor-specific "void" mechanism
for buffer arguments, and the new Fortran 2008
TYPE(*) mechanism.

Inside the MPI library there may be significant differences,
because with TYPE(*)
 - function overloading must be used to allow 
    -- non-array, and
    -- array arguments,
 - the called C backend routine must handle array descriptors
   instead of only a reference to the beginning of the array.
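For illustration, the explicit interface I have in mind looks
roughly as follows; the exact declaration of the buffer is an
assumption, and the handle arguments are shown as plain INTEGERs
only to keep the sketch short:

   INTERFACE
     SUBROUTINE MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
       TYPE(*), DIMENSION(..) :: buf   ! assumed type and rank: the compiler
                                       ! passes an array descriptor, not only
                                       ! the address of the first element
       INTEGER, INTENT(IN) :: count, datatype, dest, tag, comm
       INTEGER, OPTIONAL, INTENT(OUT) :: ierror
     END SUBROUTINE MPI_Send
   END INTERFACE

Depending on the exact declaration chosen, separate specific
procedures under a generic name may additionally be needed for
non-array and array actual arguments; that is the overloading
mentioned above.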

Based on this, the MPI-2.2 section "Problems due to Data
Copying and Sequence Association" (page 482:39-484:18)
is solved and can be removed.
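For reference, the hazard that section describes is, in a
minimal fragment (source rank and tag are placeholders):

   SUBROUTINE broken_nonblocking
     IMPLICIT NONE
     INCLUDE 'mpif.h'
     REAL, DIMENSION(0:23) :: buf
     INTEGER :: request, ierror
     INTEGER, DIMENSION(MPI_STATUS_SIZE) :: status
     ! With the implicit interface, the compiler may pass a contiguous
     ! temporary copy of the non-contiguous section buf(0:23:2); the
     ! temporary is copied back and released as soon as MPI_IRECV returns,
     ! long before the message arrives, so buf is never filled.
     CALL MPI_IRECV(buf(0:23:2), 12, MPI_REAL, 0, 99, MPI_COMM_WORLD, &
                    request, ierror)
     CALL MPI_WAIT(request, status, ierror)
   END SUBROUTINE broken_nonblocking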
 
"use mpi3" must be done before "implicit none".

Current "use mpi" (also before "implicit none")
includes in some libraries already subroutine call argument checking.

Current "include mpif.h" must be done after "implicit none"
and therefore cannot use modules. Nevertheless, explicit
subroutine specifications are also allowed in mpif.h.
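To spell out the two orderings in a small sketch (the routine
bodies are placeholders):

   SUBROUTINE with_module
     USE mpi                   ! a USE statement must precede IMPLICIT NONE
     IMPLICIT NONE
     INTEGER :: ierror
     CALL MPI_BARRIER(MPI_COMM_WORLD, ierror)
   END SUBROUTINE with_module

   SUBROUTINE with_include
     IMPLICIT NONE
     INCLUDE 'mpif.h'          ! the include file contains declarations
                               ! and therefore must follow IMPLICIT NONE
     INTEGER :: ierror
     CALL MPI_BARRIER(MPI_COMM_WORLD, ierror)
   END SUBROUTINE with_include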

It seems that MPI libraries that provide Fortran argument checking
do so only with "use mpi" and not with "include mpif.h".

Interoperability is given for sources in which some routines
use
 - "include mpif.h", others
 - "use mpi", and some
 - "use mpi3".

Am I right? Or do I misunderstand something?

--------------------------
Question on handle arguments:

 - An application has produced an INTEGER COMM handle,
   i.e., a Cartesian communicator. This COMM is used in
   many routines together with "use mpi" or "include mpif.h",
   and one does not want to modify these routines.
   COMM is stored globally in an application module variable.
   In a new routine, one wants to CALL MPI_IBCAST(...COMM).

   How must this be programmed (one possible sketch follows
   after this list of questions)?
   Will it cause a warning, or an error at compile time?

 - dt is generated with 
   MPI_TYPE_VECTOR(1,1,3,MPI_REAL,dt,ierror)
   
   Now we declare and call
   REAL, DIMENSION(0:23) :: buf
   CALL MPI_IRECV(buf(0:23:2),6,dt,....)
   i.e., only with the elements 0,2,4,6,...,22.

   If I understand correctly, data is received into
   buf(0), buf(6), buf(12), and buf(18), i.e., into buf(0:23:6).

 - If I understand correctly, with "use mpi3" it is
   guaranteed that this example works, while with MPI-2.2
   this example may be corrupted because of the possibility
   that MPI_IRECV is called with a contiguous scratch buffer
   containing the values of buf(0), buf(2),...

 - Now we declare and call
   REAL, DIMENSION(0:23) :: buf    
   CALL MPI_SEND(buf(0:23:2),6,dt,....) 

   Do I understand correctly that this example is portable
   and sends buf(0), buf(6), buf(12), and buf(18),
   independent of whether one uses
    -- "include mpif.h",
    -- "use mpi" with argument-checking explicit interfaces
       and the "void hack", e.g., from IBM or NEC, or
    -- "use mpi3"?
   Do I understand correctly that internally,
    -- with implicit interfaces, the compiler copies the section
       (12 elements) into a contiguous scratch buffer scr(0:11)
       and C MPI_Send(scr) is called;
    -- with an explicit interface and the "void hack",
       the buffer argument is not checked and is internally
       handled identically to the case of implicit interfaces;
    -- with an explicit interface and TYPE(*) buf,
       the MPI library cannot call the C interface directly,
       because an array descriptor is handed over.
       Internally, it is forbidden to copy the data
       into a scratch buffer, because with copying,
       the non-blocking routines would not work.
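On the first question above, one way I could imagine programming
it is sketched here; the type names TYPE(MPI_Comm) and
TYPE(MPI_Request), an integer component MPI_VAL used for the
conversion, and the module name app_globals are pure assumptions:

   SUBROUTINE new_routine(buf, n, root)
     USE mpi3
     USE app_globals, ONLY: COMM    ! hypothetical module holding the old INTEGER handle
     IMPLICIT NONE
     INTEGER, INTENT(IN) :: n, root
     REAL :: buf(n)
     TYPE(MPI_Comm)    :: comm_new  ! new-style handle (type name assumed)
     TYPE(MPI_Request) :: request   ! assumed
     ! Passing the INTEGER COMM directly would presumably be rejected at
     ! compile time, because the explicit interface expects the new handle type.
     comm_new%MPI_VAL = COMM        ! assumed integer component for the conversion
     CALL MPI_IBCAST(buf, n, MPI_REAL, root, comm_new, request)   ! ierror omitted
   END SUBROUTINE new_routine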

Kind regards
Rolf

----- "N.M. Maclaren" <nmm1 at cam.ac.uk> wrote:

> On Mar 4 2010, Hubert Ritzdorf wrote:
> 
> > I was thinking of the specific object files which might be generated
> > together with the MPI module when compiling the MPI module. These
> > object files have to be additionally specified within the link step,
> > if the compiler generates such object files for the specific mpi module.
> 
> That's almost trivial to solve, and has been solved many times
> before.
> A single library contains both sets of object files, and the compiled
> code contains a reference to what it needs.
> 
> All the implementor needs to do is to ensure that there are no name
> clashes, and the object files will interoperate.
> 
> 
> Regards,
> Nick Maclaren.

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)


