[MPI3 Fortran] Results of San Jose Forum meeting
nmm1 at cam.ac.uk
Thu Mar 11 07:28:49 CST 2010
> 2. Do NOT deprecate "use mpi". This is a reversal for me; but Rolf
> had an excellent logical sequence of steps that leads to a technically
> sound, relatively easy upgrade path for mpif.h users:
So far, so good.
> - Make the explicit interfaces in "use mpi" be *required* by MPI
> implementations (they are optional right now). This is now
> possible because of the (void*)-like behavior that is available
> in all Fortran compilers.
Are you SURE? I am pretty sure that many compilers don't provide that
behaviour, and won't do so for some time.
Before taking such a drastic step, it would be a good idea to provide a
specific description of how it can be done for at least all of the
actively developed compilers. At least the following are either being
actively developed or are widely used:
IBM (several compilers)
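This is the crux of the objection: the "(void*)-like behaviour" is not
standard Fortran but a per-compiler directive that suppresses
type/kind/rank (TKR) checking on the buffer dummy. A sketch of how such
an explicit interface is usually spelled -- the directive spellings
below are from memory and must be verified against each vendor's
documentation:

    interface
        subroutine MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
            !DEC$ ATTRIBUTES NO_ARG_CHECK :: buf   ! Intel
            !DIR$ IGNORE_TKR buf                   ! PGI, Cray
            !$PRAGMA IGNORE_TKR buf                ! Sun/Oracle
            real :: buf(*)   ! any type/kind/rank is accepted for buf
            integer :: count, datatype, dest, tag, comm, ierror
        end subroutine MPI_Send
    end interface

A compiler that supports none of these directives simply cannot provide
the required behaviour for "use mpi" -- hence the request for a
compiler-by-compiler list.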
> - Since "use mpi" is not deprecated, it will include all MPI-3
> functionality (e.g., non-blocking collectives).
That should be feasible, with some provisos, depending on the details,
which I have not seen.
> - These two points have a very important consequence: aside from
> re-certification, the upgrade path from mpif.h to "use mpi" is
> very, very easy:
I believe that to be true.
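For most codes the mechanical part really is small: each compilation
unit swaps its include line for a module reference. The subroutine
below is an illustrative fragment, not from any real application:

    subroutine halo_exchange(buf, n, comm)
        use mpi              ! was:  include 'mpif.h'
        implicit none
        integer :: n, comm, ierr
        real :: buf(n)
        ! Calls are now checked against the module's explicit
        ! interfaces at compile time, instead of not at all.
        call MPI_Bcast(buf, n, MPI_REAL, 0, comm, ierr)
    end subroutine halo_exchange

The re-certification cost is, of course, the part that is not small.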
> - Additionally, after upgrading to "use mpi":
> - All MPI-3 functionality is now available
> - Asynchronous buffer issues are solved
That is completely off-beam. See later.
> - But if we keep "use mpi" (i.e., extend
> implementations with (void*)-like behavior -- as some
> implementations have already done), the upgrade path is quite
> clear and not nearly as painful. And other MPI-3 functionality
> also comes within easy reach.
Don't bet on it. The previous consensus to NOT upgrade "use mpi"
(except by adding FULLY upwards compatible extensions) makes sense,
but this doesn't.
> 3. The derived types for MPI handles in "use mpi3" will all be of the
>    form:
>
>        type MPI_Comm
>            sequence
>            INTEGER :: val
>        end type MPI_Comm
> - This allows the conversion from mpif.h and "use mpi" INTEGER
> handles to "use mpi3" typed handles to be simple assignment --
> there is no need for conversion functions. Conversion to C
> handles is then already handled by the existing MPI F2C and C2F
> functions.
Yuck. You're not planning on playing tricks with EQUIVALENCE, are you?
If not, why a sequence type?
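For reference, here is what the proposed scheme amounts to; the type
follows the quoted definition, and the structure constructor is my
reading of "simple assignment" (a defined assignment from INTEGER would
be the alternative):

    program handle_conversion
        implicit none
        type MPI_Comm             ! the proposed "use mpi3" handle type
            sequence
            integer :: val
        end type MPI_Comm
        integer :: old_comm       ! an mpif.h / "use mpi" INTEGER handle
        type(MPI_Comm) :: new_comm
        old_comm = 91                   ! stand-in value, not a real handle
        new_comm = MPI_Comm(old_comm)   ! component copy via constructor
        print *, new_comm%val           ! prints 91
    end program handle_conversion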
> - This also allows some potentially very nice implementation tricks
> and optimizations that Craig and I will pursue in the prototype;
> more information on this later...
And it locks out several others! Two that spring to mind are:
1) It prevents an implementation from using a pointer in the common
case where default integers are smaller than pointers, so requires an
extra level of indirection (e.g. a table of handles).
2) Many compilers downgrade optimisation at the first sniff of
EQUIVALENCE, for very good reasons.
> 4. We discussed using array subsections with MPI datatypes for quite a
> while, and we *think* that it will work. The MPI implementation
> will have to realize that it got an array descriptor as a buffer
> and handle it properly (which could be a lot of work...). For
> example, the following should work:
> call MPI_Send(A(3:9:3, 4:8:2), any_mpi_datatype, ...)
Sorry, but it won't, as I keep saying, unless you are making
incompatible changes to MPI that will affect a very large number of
programs. In particular, it will no longer be possible to pass an
assumed-size array or an array element. To see this, try:

    PROGRAM Main
        REAL :: X(10,10)
        CALL Joe(X(1,1))            ! legal today: element passed by
                                    ! sequence association
        CALL Fred(X(3:9:3, 4:8:2))  ! needs a descriptor-based dummy
    CONTAINS
        SUBROUTINE Joe (X)
            REAL :: X(*)   ! assumed-size: a contiguous address, no
        END SUBROUTINE Joe ! descriptor, so no section information
        SUBROUTINE Fred (X)
            REAL :: X(:)   ! assumed-shape: gets a descriptor, but
        END SUBROUTINE Fred ! cannot accept X(1,1) or an assumed-size
    END PROGRAM Main        ! actual argument

MPI_Send's buffer can be like Joe or like Fred, but not both.
It also locks out a few obscurer things that are currently legal, such
as an actual argument of MPI_Isend that is an array with a vector
subscript, but I don't think they matter.
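The incompatibility bites hardest for non-blocking operations. A sketch
of the hazard, assuming MPI_Isend keeps its current assumed-size
(contiguous-address) interface:

    real :: A(10,10)
    integer :: req, ierr
    ! The section is non-contiguous, so the compiler passes a contiguous
    ! temporary copy.  That temporary may be freed as soon as MPI_Isend
    ! returns -- long before the transfer completes -- so the data
    ! actually transmitted is undefined.
    call MPI_Isend(A(3:9:3, 4:8:2), 9, MPI_REAL, 1, 0, &
                   MPI_COMM_WORLD, req, ierr)
    call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)  ! too late: copy is gone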
> - Craig and I have some ideas on how to implement this in the
> prototype, but it will take some time and effort -- we'll see
> what we discover when we try to implement it. It's somewhat
> concerning, for example, that descriptors are not [yet]
> standardized across different compilers (so Torsten tells me)...
And they are likely to change as the TR develops.
> - I'm not sure why the async problem is considered solved. As far as I
> understood Nick, for this to happen, the asynchronous attribute should
> be added to all affected buffers throughout the whole application code,
> at all levels. This might have performance implications. If this change
> is indeed necessary, can it be considered small? Please clarify.
It's solved at the MPI Fortran binding level. If you don't use it
at all levels, you are no worse off than at present.
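A sketch of what "solved at the binding level" means in practice, using
the Fortran 2003 ASYNCHRONOUS attribute:

    real, asynchronous :: buf(1000)   ! buffer may change outside normal
    integer :: req, ierr              ! control flow, so the compiler
                                      ! must not cache or reorder
                                      ! accesses to it across the wait
    call MPI_Irecv(buf, 1000, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
    call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
    ! Code that never declares ASYNCHRONOUS is no worse off than today.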
> - The change in the module implementation without the name change may
> cause extra confusion, as the customers will have to add the right mpi
> module (MPI-2 or MPI-3) to the right (MPI-2 or MPI-3) library. Actually,
> the possible presence of more than one MPI library is a major support and
> backend hassle. If this concern is material, can this be avoided?
Most vendors solved that one years ago, and some did decades ago.
I agree that it will be a problem for the users of clueless vendors.