[MPIWG Fortran] Webex meeting
longb at cray.com
Thu Dec 4 08:07:31 CST 2014
Perhaps I’m confused by terminology here, since everywhere else “shared memory” means SMP memory accessible locally through OpenMP or threaded language constructs, whereas memory on different ranks is “distributed memory”, accessed through an SPMD programming model. Since we’re talking about MPI, I’m assuming the interest is in accessing memory on different ranks. This is really an issue of integrating MPI with Fortran, an area that is woefully out of date in the current MPI spec.

Fortran has “images”, which are basically like ranks: separate instances of the program execution, with the capability of referencing memory that is part of a different image. Fortran already has extensive facilities for synchronizing memory references and definitions on remote images, and rules for standard-conforming programs that ensure safe accesses.

The main difference from the MPI model is the MPI concept of a “window”, which seems unnecessarily complicated to a Fortran programmer. But I suspect it is somewhat similar to a coarray, in that the processor has to set up mechanisms to easily access coarrays on different images. For Fortran images this is handled automatically by the language, with no manual intervention by the user. However, Fortran programmers also have ways to access remote memory that is not a coarray, so the analogy is not exact.

The real question seems to be whether SYNC MEMORY statements, for example, force flushes of MPI RMA buffers, or whether a SYNC ALL statement has the effect of an MPI barrier. Since the MPI spec lacks any direction on how the library interacts with these parts of Fortran, we (Cray) tell customers to use MPI and regular Fortran remote accesses in separate “phases” of the program, where an MPI phase ends with an MPI barrier and a Fortran phase ends with a SYNC ALL statement or the end of the program. That way any runtime-dependent clean-up is assured for both runtimes.
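To make the phase discipline concrete, here is a minimal hybrid sketch. All names and structure here are my own illustration, not from any spec; it assumes an MPI implementation with the mpi_f08 module, a Fortran 2008 compiler with coarray support, and a run on at least two images:

```fortran
program phases
  use mpi_f08
  implicit none
  integer :: ierr, rank
  real    :: buf(100)
  real    :: x(100)[*]      ! a coarray: one copy per image

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  buf = real(rank)

  ! --- MPI phase: only MPI communication in this region ---
  call MPI_Bcast(buf, 100, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
  call MPI_Barrier(MPI_COMM_WORLD, ierr)   ! barrier closes the MPI phase

  ! --- Fortran phase: only image-based remote accesses here ---
  if (this_image() == 1 .and. num_images() >= 2) then
     x(:)[2] = buf          ! remote definition on image 2
  end if
  sync all                  ! SYNC ALL closes the Fortran phase

  call MPI_Finalize(ierr)
end program phases
```

The point of the barrier/SYNC ALL pairing is that each runtime gets a known quiescent point before the other runtime’s remote accesses begin, so any runtime-dependent buffering or clean-up is complete at the phase boundary.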
Of course, one solution would be to implement the entire MPI library in coarray C++, at which point there is only one runtime, eliminating the need for separate phases. But that seems like a pretty big change (although one with potential performance advantages). I think the real 800-pound gorilla in the room is the lack of a complete specification of how MPI and Fortran interact. That spec needs to be part of the MPI document.
Bill (the Fortran one)
On Dec 4, 2014, at 7:12 AM, William Gropp <wgropp at illinois.edu> wrote:
> This is not my position. My position remains that details of access/update to shared memory, including memory synchronization, *must* be handled correctly within the language. C11/C++11 appears to do this, thereby letting MPI+C or MPI+C++ programmers write standard conforming code with shared memory. Fortran is a little behind here, so users will have to depend on extensions (as they do for *other* parts of the MPI-Fortran interface). Including some memory fences in some of the MPI routines might help some Fortran programmers, but many of the issues with respect to correct, standard conforming programming with Fortran would remain.
> On Dec 4, 2014, at 3:45 AM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>> because Bill Gropp would like to remove any Fortran Support
>> by substituting memory barriers (that are currently automatically
>> included into shared memory synchronizations, see MPI-3.0 410:15-19)
>> by C99 memory fences.
> mpiwg-fortran mailing list
> mpiwg-fortran at lists.mpi-forum.org
Bill Long longb at cray.com
Fortran Technical Support & voice: 651-605-9024
Bioinformatics Software Development fax: 651-605-9142
Cray Inc./ Cray Plaza, Suite 210/ 380 Jackson St./ St. Paul, MN 55101