[MPI3 Fortran] MPI non-blocking transfers
N.M. Maclaren
nmm1 at cam.ac.uk
Wed Jan 21 15:07:55 CST 2009
> > 1) Most people seem to agree that the semantics of the buffers used
> > for MPI non-blocking transfers and pending input/output storage
> > affectors are essentially identical, with READ, WRITE and WAIT
> > corresponding to MPI_Isend, MPI_IRecv and MPI_Wait (and variations).
> >
> > Do you agree with this and, if not, why not?
>
> Almost the case. One key difference is that a variable in the io list
> of an asynchronous read or write automatically acquires the asynchronous
> attribute, even if it is not declared that way. If we want a similar
> capability for the MPI case, then the attribute name would have to be
> different from ASYNCHRONOUS. That is probably a good idea anyway to
> avoid undesirable interaction with actual asynchronous variables in
> Fortran. Having the buffer automatically acquire this new attribute is
> necessary if we want to avoid requiring changes to existing codes.
That is precisely the sort of response I was hoping for! It raises
real issues that we can nail down.
I personally agree with Aleksandar and Van, but that isn't the point.
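For readers joining the thread, the claimed correspondence can be sketched in outline (illustrative only; the unit number, counts, ranks and tags are invented for the example):

```fortran
real, asynchronous :: buf(1000)
integer :: id, request, ierror

! Fortran asynchronous I/O:
write (unit=10, asynchronous='yes', id=id) buf
! ... buf is a pending input/output storage affector here:
!     it must not be redefined (nor, for READ, referenced) ...
wait (unit=10, id=id)

! The rough MPI analogue:
call MPI_Isend(buf, 1000, MPI_REAL, 0, 0, MPI_COMM_WORLD, request, ierror)
! ... the same restriction applies between initiation and completion ...
call MPI_Wait(request, MPI_STATUS_IGNORE, ierror)
```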
> > 2) Most people seem to agree that a data attribute is essential, and
> > a purely procedure-based solution will not work.
> >
> > Do you agree with this and, if not, why not?
>
> 1) Preventing code motion by the compiler across a call to an
> MPI_Wait-like subroutine. This is needed to ensure that user-coded
> modifications to the buffer (which is not present as an argument in
> the call) do not migrate to before the call. A subroutine
> prefix keyword simply and completely solves this problem. I recently
> sent out such a proposal, with the prefix spelled VOLATILE. This
> solution has the desirable feature that no change to the user's source
> code is needed as long as they already have a 'use mpi' statement, or a
> compiler switch to cause the same effect.
A good point, and I see what you are getting at. It has the disadvantage
that it applies to ALL variables, which is pretty serious, so it would
be preferable to spell it ASYNCHRONOUS (or whatever) and have it apply
only to variables with that attribute.
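To make the hazard concrete, here is a hypothetical sketch (variable names invented) of the code motion that such a prefix keyword, or an ASYNCHRONOUS-like attribute on the buffer, is meant to forbid:

```fortran
call MPI_Irecv(buf, n, MPI_REAL, src, tag, MPI_COMM_WORLD, request, ierror)
! ... other work ...
call MPI_Wait(request, MPI_STATUS_IGNORE, ierror)
x = buf(1)
! buf is not an argument of MPI_Wait, so a compiler that already holds
! buf(1) in a register sees no reason not to move the read of buf(1)
! to before the call -- fetching the value from before the transfer
! completed.
```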
> 2) Preventing modifications to the buffer between the call to the
> transfer initiation subroutine and the corresponding wait subroutine.
> These can come in two forms. In the first, the user explicitly codes such
> modifications, which is the user's fault and not something we should try
> to solve. I would note that, in Fortran, this may not be obvious to a
> novice programmer. For example, if the buffer is locally declared as
> allocatable without the SAVE attribute, it will be deallocated at the
> end of the subroutine. If the wait routine is in a later-executed
> subroutine, this is a user error. The other form is the one we need to
> worry about: the internal generation of local temporary copies of the
> buffer by the compiler (so-called copy-in/copy-out for an actual
> argument). Various solutions to this problem have been discussed,
> most centering on an attribute for dummy arguments or coding versions of
> the MPI routines that accept pass-by-descriptor rather than
> pass-by-address for the buffer argument.
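The "user error" mentioned above is worth spelling out, since it bites novices; a hypothetical sketch:

```fortran
subroutine start_recv(request)
  integer, intent(out) :: request
  integer :: ierror
  real, allocatable :: buf(:)   ! local, and no SAVE attribute
  allocate (buf(1000))
  call MPI_Irecv(buf, 1000, MPI_REAL, 0, 0, MPI_COMM_WORLD, request, ierror)
end subroutine start_recv
! buf is automatically deallocated on return, while the transfer that
! the caller will later MPI_Wait on is still writing into it.
```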
Again, that is a helpful response. Unfortunately, in THIS case, it is
seriously mistaken, and may be the cause of some of the discussions at
cross purposes. Things are much more complicated than that.
Firstly, copying can happen in plain code, and at least some compilers
do it (or used to). For example, consider vectorisable loops on possibly
non-contiguous or vector-indexed arrays. A compiler is perfectly
entitled to copy more than it needs to contiguous workspace and to copy
both the updated and untouched locations back, if the array is not
marked ASYNCHRONOUS and is not otherwise used. The reason is typically
alignment (as on many vector systems, SSE etc.).
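A hypothetical sketch of the sort of plain code meant here (note that no argument passing is involved at all):

```fortran
! buf(1:2*n-1:2) is strided, hence non-contiguous in memory.  To
! vectorise this loop with aligned loads and stores, a compiler may
! copy an enclosing contiguous block of buf to workspace, update it
! there, and copy the whole block back -- rewriting untouched elements
! that a pending MPI transfer may be filling in concurrently -- unless
! buf is marked ASYNCHRONOUS (or whatever the new attribute is spelled).
do i = 1, n
   buf(2*i-1) = buf(2*i-1) + 1.0
end do
```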
Secondly, merely passing an argument does not count as a modification,
even when it passes an array section to an assumed-size dummy (which
forces copy-in/copy-out). This can apply both when the call is an
extraneous one and when it is at an intermediate level between where
the buffer is defined and the MPI call.
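For example (hypothetical; helper and its interface are invented): suppose a receive into buf is pending and the code passes a section of it onward:

```fortran
call MPI_Irecv(buf, 10000, MPI_REAL, src, tag, MPI_COMM_WORLD, request, ierror)
call helper(buf(1:999:2))   ! non-contiguous section passed to an
                            ! assumed-size dummy: the processor copies
                            ! the section in and back out, even though
                            ! helper never modifies it -- potentially
                            ! clobbering data the pending transfer is
                            ! delivering into those elements.
call MPI_Wait(request, MPI_STATUS_IGNORE, ierror)
```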
So an attribute on dummy arguments alone will not fly, and nor will
descriptor-based versions of the MPI routines on their own. Sorry.
Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nmm1 at cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679