[MPI3 Fortran] Results of San Jose Forum meeting
Jeff Squyres
jsquyres at cisco.com
Wed Mar 10 21:04:13 CST 2010
We had a great face-to-face meeting here at the MPI Forum in San Jose
yesterday. Rolf came prepared with a boatload of issues to
discuss and has convinced us that his vision is a good one. He
changed my mind on several items with his solid, well-formed
arguments; here's a summary...
1. Deprecate mpif.h (ok, this one is not a change, but there are some
important second-order effects; see below). We all know the
problems with mpif.h; I won't re-hash them here. Maybe we'll
remove mpif.h from the MPI spec in 10 years or so.
2. Do NOT deprecate "use mpi". This is a reversal for me, but Rolf
laid out an excellent logical sequence of steps that leads to a
technically sound, relatively easy upgrade path for mpif.h users:
- Make the explicit interfaces in "use mpi" be *required* by MPI
implementations (they are optional right now). This is now
possible because of the (void*)-like behavior that is available
in all Fortran compilers.
- Since "use mpi" is not deprecated, it will include all MPI-3
functionality (e.g., non-blocking collectives).
- These two points have a very important consequence: aside from
re-certification, the upgrade path from mpif.h to "use mpi" is
very, very easy (see the sketch at the end of this item):
a. delete "include 'mpif.h'"
b. add "use mpi"
--> CORRECT MPI APPLICATIONS NEED NOT MAKE ANY OTHER CHANGES (!)
Rolf assures me that all MPI behaviors will be exactly the
same as they were with mpif.h. I can't think of a case where
that is wrong.
- Additionally, after upgrading to "use mpi":
- All MPI-3 functionality is now available
- Asynchronous buffer issues are solved
- The compelling argument for why *not* to deprecate "use mpi" is
that there are companies that *do not allow* the use of
deprecated functionality in their codes (I was unaware of this).
Hence, if we deprecate mpif.h, they *have* to upgrade their
(potentially very, very large) legacy applications. If we also
deprecated "use mpi", the upgrade path is fairly difficult
because INTEGER handles will have to be converted to explicit MPI
handle types. But if we keep "use mpi" (i.e., extend
implementations with (void*)-like behavior -- as some
implementations have already done), the upgrade path is quite
clear and not nearly as painful. And other MPI-3 functionality
also comes within easy reach.
- The key insight here is that "use mpi" and "use mpi3" are exactly
the same except for one thing: MPI handles are INTEGERs in "use
mpi", and they are Fortran derived types in "use mpi3". *Everything
else is the same*. (think about that)
- This means that there will be one minor new feature in "use
mpi": ierror will become an optional argument. All correct
codes will still compile -- but "use mpi" codes will now *also*
be able to leave out ierror.
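To make the upgrade path concrete, here is a minimal sketch of the
two-line change described above. The subroutine name and arguments
are made up for illustration; the MPI calls themselves are standard,
and the scalar send buffer relies on the (void*)-like choice-buffer
behavior that the required explicit interfaces would provide.
Before, with mpif.h:

      subroutine send_rank(comm, dest)
        implicit none
        include 'mpif.h'
        integer, intent(in) :: comm, dest
        integer :: rank, ierror
        call MPI_Comm_rank(comm, rank, ierror)
        call MPI_Send(rank, 1, MPI_INTEGER, dest, 0, comm, ierror)
      end subroutine send_rank

After, with "use mpi" -- the only *required* change is swapping the
include line for the module; dropping ierror is the optional
convenience proposed above, not something existing codes must do:

      subroutine send_rank(comm, dest)
        use mpi
        implicit none
        integer, intent(in) :: comm, dest
        integer :: rank
        call MPI_Comm_rank(comm, rank)                   ! ierror now optional
        call MPI_Send(rank, 1, MPI_INTEGER, dest, 0, comm)
      end subroutine send_rank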
3. The derived types for MPI handles in "use mpi3" will all be of the form:
      type MPI_Comm
         sequence
         INTEGER :: val
      end type MPI_Comm
- This allows the conversion from mpif.h and "use mpi" INTEGER
handles to "use mpi3" typed handles to be a simple assignment of
the integer value -- there is no need for conversion functions
(see the sketch at the end of this item). Conversion to C
handles is then already handled by the existing MPI F2C and C2F
functions.
- This also allows some potentially very nice implementation tricks
and optimizations that Craig and I will pursue in the prototype;
more information on this later...
FWIW: The previous plan was to allow implementations to implement
the handle as whatever they wanted. Indeed, the OMPI
prototype currently has the Fortran handle derived type
contain the C handle value (which, in Open MPI, is a
pointer). But Rolf convinced us that standardizing on a
derived type containing a single INTEGER that is the same
value as the corresponding mpif.h handle is better.
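Here is a minimal, self-contained sketch of the assignment-based
conversion referenced above. The component name "val" is taken from
the type sketch earlier in this mail; the type is declared locally
only so the example stands alone without any MPI module.

      program handle_conversion
        implicit none

        ! Local stand-in for the proposed "use mpi3" handle type
        type MPI_Comm
          sequence
          integer :: val
        end type MPI_Comm

        integer        :: old_comm   ! mpif.h / "use mpi" style handle
        type(MPI_Comm) :: new_comm   ! "use mpi3" style handle

        old_comm = 0                 ! pretend this came from an MPI call
        new_comm%val = old_comm      ! "conversion" is just an integer copy
        old_comm = new_comm%val      ! ...and back again
      end program handle_conversion

A structure constructor, new_comm = MPI_Comm(old_comm), would work
just as well; either way the typed handle carries the very same
integer value as the mpif.h handle.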
4. We discussed using array subsections with MPI datatypes for quite a
while, and we *think* that it will work. The MPI implementation
will have to realize that it got an array descriptor as a buffer
and handle it properly (which could be a lot of work...). For
example, the following should work (a fuller sketch appears at the
end of this item):
      call MPI_Send(A(3:9:3, 4:8:2), count, any_mpi_datatype, ...)
- Craig and I have some ideas on how to implement this in the
prototype, but it will take some time and effort -- we'll see
what we discover when we try to implement it. It's somewhat
concerning, for example, that descriptors are not [yet]
standardized across different compilers (so Torsten tells me)...
- As a direct result, the text about restrictions on using
subarrays with non-blocking communication in the MPI language
bindings chapter will only be relevant to mpif.h -- "use mpi"
(and "use mpi3") codes will now be able to use array subsections.
Specifically, section "Problems Due to Data Copying and Sequence
Association", MPI 2.2 page:line 482:39 - 484:18.
- This also means that the PMPI layers for "use mpi" and "use mpi3"
will now need to understand descriptors, too. It is possible
that the back-ends of "use mpi" and "use mpi3" will not call the
corresponding C MPI API. So tools may need to actually have a
PMPI interface for the mpi/mpi3 modules.
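To make the subarray case concrete, here is a sketch of what an
application might write once this lands. It assumes the proposed
"use mpi3" module with typed MPI_Comm/MPI_Request handles and the
optional ierror discussed above -- none of which exists in a released
implementation yet -- so treat it as an illustration of the proposal,
not as code that compiles against today's libraries.

      subroutine subarray_isend(A, dest, comm)
        use mpi3                        ! proposed module name from this mail
        implicit none
        real,           intent(in) :: A(10, 10)
        integer,        intent(in) :: dest
        type(MPI_Comm), intent(in) :: comm
        type(MPI_Request) :: req

        ! The strided, non-contiguous subsection is passed directly; the
        ! implementation receives the array descriptor and must handle it
        ! correctly even though the call is non-blocking.
        call MPI_Isend(A(3:9:3, 4:8:2), 9, MPI_REAL, dest, 0, comm, req)
        call MPI_Wait(req, MPI_STATUS_IGNORE)
      end subroutine subarray_isend

The count of 9 is just the number of elements in the 3x3 subsection;
the interesting part is that no user-side copy into a contiguous
temporary is needed, which is exactly the restriction being lifted
from the "Problems Due to Data Copying and Sequence Association"
text cited above.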
--
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/