[MPI3 Fortran] Results of San Jose Forum meeting
Supalov, Alexander
alexander.supalov at intel.com
Thu Mar 11 05:48:46 CST 2010
Hi everybody,
Thanks. The intention to extend the mpi module looks positive to me. I still have a few points that may need further clarification:
- I'm not sure why the async problem is considered solved. As far as I understood Nick, for this to happen, the ASYNCHRONOUS attribute would have to be added to all affected buffers throughout the whole application code, at all levels; a sketch of what this would look like follows this list. This might have performance implications. If this change is indeed necessary, can it be considered small? Please clarify.
- The change in semantics of the mpi module compared to the current one (see the last passage of the quoted message below about the mpi module now needing to understand array descriptors) may require a recompilation once the module is changed. Also, the need to parse the descriptors, given that they may differ in structure and memory layout from compiler to compiler, may require extra development effort and backend overhead for the mpi module. Has this been considered?
- Changing the module implementation without changing its name may cause extra confusion, as customers will have to match the right mpi module (MPI-2 or MPI-3) to the right (MPI-2 or MPI-3) library. Actually, the possible presence of more than one MPI library is a major support and backend hassle. If this concern is material, can it be avoided?
- The remaining intention to deprecate mpif.h seems counterproductive from the customer point of view. Since some applications won't be allowed to use deprecated features, we seem to be driving them into "non-compliance" with MPI-3, hoping that they will switch to the mpi module as a result. I'm afraid many will push back against MPI-3 instead. Has this been foreseen and factored in?
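To make the first point concrete, here is a minimal sketch (the buffer
and routine names are made up) of where the ASYNCHRONOUS attribute
would have to appear for a buffer that is still in flight when a
routine returns:

   subroutine exchange_halo(work_buf, n, comm)
     use mpi
     implicit none
     integer, intent(in) :: n, comm
     ! The attribute is needed on this declaration ...
     real, asynchronous  :: work_buf(n)
     integer :: request, ierror
     call MPI_Isend(work_buf, n, MPI_REAL, 0, 99, comm, request, ierror)
     ! ... and on every declaration of work_buf in every caller that
     ! still holds the buffer while the request is pending.
     call MPI_Wait(request, MPI_STATUS_IGNORE, ierror)
   end subroutine exchange_halo

If the attribute has to propagate through all layers of a large
application in this way, the change does not look small to me.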
It would be great to have these points addressed going forward, as I may have missed some details due to not being on the floor this time.
Best regards.
Alexander
-----Original Message-----
From: mpi3-fortran-bounces at lists.mpi-forum.org [mailto:mpi3-fortran-bounces at lists.mpi-forum.org] On Behalf Of Jeff Squyres
Sent: Thursday, March 11, 2010 4:04 AM
To: MPI-3 Fortran WG
Subject: [MPI3 Fortran] Results of San Jose Forum meeting
We had a great face-to-face meeting here at the MPI Forum in San Jose
yesterday. Rolf came prepared with a boatload of issues to discuss
and convinced us that his vision is a good one. He
changed my mind on several items with his solid, well-formed
arguments; here's a summary...
1. Deprecate mpif.h (ok, this one is not a change, but there are some
important 2nd-order effect changes, below). We all know the
problems with mpif.h; I won't re-hash them here. Maybe we'll
remove mpif.h from the MPI spec in 10 years or so.
2. Do NOT deprecate "use mpi". This is a reversal for me, but Rolf
had an excellent logical sequence of steps that leads to a technically
sound, relatively easy upgrade path for mpif.h users:
- Make the explicit interfaces in "use mpi" be *required* by MPI
implementations (they are optional right now). This is now
possible because of the (void*)-like behavior that is available
in all Fortran compilers.
- Since "use mpi" is not deprecated, it will include all MPI-3
functionality (e.g., non-blocking collectives).
- These two points have a very important consequence: aside from
re-certification, the upgrade path from mpif.h to "use mpi" is
very, very easy:
a. delete "include 'mpif.h'"
b. add "use mpi"
--> CORRECT MPI APPLICATIONS NEED NOT MAKE ANY OTHER CHANGES (!)
Rolf assures me that all MPI behaviors will be exactly the
same as they were with mpif.h. I can't think of a case where
that is wrong.
- Additionally, after upgrading to "use mpi":
- All MPI-3 functionality is now available
- Asynchronous buffer issues are solved
- The compelling argument for why *not* to deprecate "use mpi" is
that there are companies that *do not allow* the use of
deprecated functionality in their codes (I was unaware of this).
Hence, if we deprecate mpif.h, they *have* to upgrade their
(potentially very, very large) legacy applications. If we also
deprecated "use mpi", the upgrade path is fairly difficult
because INTEGER handles will have to be converted to explicit MPI
handle types. But if we keep "use mpi" (i.e., extend
implementations with (void*)-like behavior -- as some
implementations have already done), the upgrade path is quite
clear and not nearly as painful. And other MPI-3 functionality
also comes within easy reach.
- The key insight here is that "use mpi" and "use mpi3" are exactly
the same except for one thing: MPI handles are INTEGERs in "use
mpi", and they are Fortran derived types in "use mpi3". *Everything
else is the same*. (think about that)
- This means that there will be one new minor feature added to "use
mpi": ierror will become an optional argument. All correct
codes will still compile -- but "use mpi" codes will now *also*
be able to leave out ierror.
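For concreteness, here is a minimal sketch of the two-line upgrade
and of the optional-ierror extension (the program and names are made
up for illustration):

   program ring
     ! Before the upgrade this line read:  include 'mpif.h'
     use mpi                  ! the only source change needed
     implicit none
     integer :: rank, ierror  ! handles and error codes stay INTEGERs
     call MPI_Init(ierror)
     call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)
     ! Once ierror becomes optional in "use mpi", the next call could
     ! also be written as:  call MPI_Finalize()
     call MPI_Finalize(ierror)
   end program ring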
3. The derived types for MPI handles in "use mpi3" will all be of the form:
   type MPI_Comm
      sequence
      INTEGER :: val
   end type MPI_Comm
- This allows the conversion from mpif.h and "use mpi" INTEGER
handles to "use mpi3" typed handles to be simple assignment --
there is no need for conversion functions (a short sketch follows
this item). Conversion to C handles is then already handled by
the existing MPI F2C and C2F functions.
- This also allows some potentially very nice implementation tricks
and optimizations that Craig and I will pursue in the prototype;
more information on this later...
FWIW: The previous plan was to allow implementations to implement
the handle as whatever they wanted. Indeed, the OMPI
prototype currently has the Fortran handle derived type
contain the C handle value (which, in Open MPI, is a
pointer). But Rolf convinced us that standardizing on a
derived type containing a single INTEGER that is the same
value as the corresponding mpif.h handle is better.
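To make the "simple assignment" point concrete, here is a sketch of
the conversion, assuming the proposed "mpi3" module and the MPI_Comm
definition shown above (the routine name is made up):

   subroutine adopt_legacy_handle(old_comm)
     use mpi3                          ! the module proposed above
     implicit none
     integer, intent(in) :: old_comm   ! an mpif.h / "use mpi" handle
     type(MPI_Comm)      :: new_comm
     ! The conversion is just a copy of the integer value; no helper
     ! function is needed.
     new_comm%val = old_comm           ! or: new_comm = MPI_Comm(old_comm)
   end subroutine adopt_legacy_handle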
4. We discussed using array subsections with MPI datatypes for quite a
while, and we *think* that it will work. The MPI implementation
will have to realize that it got an array descriptor as a buffer
and handle it properly (which could be a lot of work...). For
example, the following should work:
call MPI_Send(A(3:9:3, 4:8:2), any_mpi_datatype, ...)
- Craig and I have some ideas on how to implement this in the
prototype, but it will take some time and effort -- we'll see
what we discover when we try to implement it. It's somewhat
concerning, for example, that descriptors are not [yet]
standardized across different compilers (so Torsten tells me)...
- As a direct result, the text about restrictions on using
subarrays with non-blocking communication in the MPI language
bindings chapter will only be relevant to mpif.h -- "use mpi"
(and "use mpi3") codes will now be able to use array subsections
(a sketch of such a call follows this item).
Specifically, section "Problems Due to Data Copying and Sequence
Association", MPI 2.2 page:line 482:39 - 484:18.
- This also means that the PMPI layers for "use mpi" and "use mpi3"
will now need to understand descriptors, too. It is possible
that the back-ends of "use mpi" and "use mpi3" will not call the
corresponding C MPI API. So tools may need to actually have a
PMPI interface for the mpi/mpi3 modules.
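To expand the MPI_Send example above into a complete call -- and to
show the non-blocking case that the mpif.h restriction forbids but
that "use mpi" / "use mpi3" codes would now be allowed to write --
here is a sketch (array shape, count, and peer rank are made up):

   subroutine send_strided_block(A, comm)
     use mpi                         ! the same holds for "use mpi3"
     implicit none
     real, asynchronous  :: A(10, 10)
     integer, intent(in) :: comm
     integer :: request, ierror
     ! A(3:9:3, 4:8:2) is a 3x3 strided subsection; the implementation
     ! receives an array descriptor and must handle the strides itself.
     call MPI_Isend(A(3:9:3, 4:8:2), 9, MPI_REAL, 1, 0, comm, &
                    request, ierror)
     call MPI_Wait(request, MPI_STATUS_IGNORE, ierror)
   end subroutine send_strided_block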
--
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/