[MPI3 Fortran] Proposed solution for the volatile buffer problem

N.M. Maclaren nmm1 at cam.ac.uk
Wed Jan 14 13:36:07 CST 2009


Bill Long wrote:
>
> > Firstly, MPI non-blocking buffers have ASYNCHRONOUS semantics, and so 
> > can be updated at any time between the isend/irecv and the wait. In 
> > particular, many programs put wrappers around the MPI calls (often to 
> > enable RDMA transports as an option), and so the buffer variable may 
> > not be visible in the procedure that calls the wait.
> 
> Agreed.  This is a good argument in favor of the proposal.  It's not
> trying to guess which variables might or might not be affected, but
> rather focuses on preventing motion of ANY code across a call to a
> volatile subroutine.  That's the ultimate goal, and this is one way to
> put that requirement into the standard.

Unfortunately not.  Consider (fairly common) logic like the following:

    SUBROUTINE Process
        REAL :: array(100,100,100), ...
        REAL :: buffer(100,100,100)
        DO WHILE ...
            ! A lot of complex, expensive processing that operates on
            ! various parts of array buffer, including calls to MyIsend,
            ! MyIrecv, MyWait and Transfer such as:
            CALL MyIsend(buffer,...,imsg)
            CALL MyIrecv(buffer,...,imsg)
            CALL MyWait(imsg)
            CALL Transfer(buffer,...)
            ! The calls would not be simply in that order, of course,
            ! and there could be lots of them.  The buffer arguments might
            ! also be sections, array elements (possibly corresponding to
            ! array dummies) and so on.
        END DO
    END SUBROUTINE Process

    SUBROUTINE MyIsend(buffer, imsg)
        REAL :: buffer(:,:,:)
        INTEGER :: imsg    ! The MPI request
        CALL MPI_Isend(buffer,...,imsg,...)
    END SUBROUTINE MyIsend

    ! Similarly for MyIrecv and MPI_Irecv

    SUBROUTINE MyWait(imsg)
        INTEGER :: imsg    ! The MPI request
        CALL MPI_Wait(imsg,...)
    END SUBROUTINE MyWait

Now, which of those subroutines should be declared VOLATILE?  Doing so
for MPI_Isend, MPI_Irecv and MPI_Wait isn't enough, because the compiler
needs to know that the array buffer may be updated asynchronously during
the execution of Process.  But if MyIsend, MyIrecv and MyWait are
declared VOLATILE, then ALL variables in Process are effectively
volatile - which means that Process is almost unoptimisable.  And
Process is where the CPU time goes.

To state that people shouldn't write code like that is unreasonable;
it is a perfectly rational design, and natural for many tasks.
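
For contrast, the quoted point about ASYNCHRONOUS semantics suggests
marking the variable rather than the procedure.  The following is only a
rough sketch, assuming the Fortran 2003 ASYNCHRONOUS attribute were
extended to cover MPI transfers (it is currently defined for Fortran
asynchronous I/O), reusing the names above plus a hypothetical
Heavy_work routine:

    SUBROUTINE Process
        REAL :: array(100,100,100)
        ! The compiler must assume that buffer can change asynchronously,
        ! so only accesses to buffer are pessimised; the expensive work
        ! on array can still be optimised normally.
        REAL, ASYNCHRONOUS :: buffer(100,100,100)
        INTEGER :: imsg    ! The MPI request
        CALL MyIrecv(buffer, imsg)
        CALL Heavy_work(array)       ! hypothetical expensive processing
        CALL MyWait(imsg)
        CALL Transfer(buffer)
    END SUBROUTINE Process

That confines the pessimisation to one variable, but it is a different
mechanism from the subroutine-level proposal being discussed here.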

> > Thirdly, it is horribly unstructured and leads to catastrophic
> > inefficiency if extended to allow asynchronous update (as would be
> > needed), because all visible variables would be effectively VOLATILE.
> 
> On this you've entirely missed the point.  Making all variables volatile
> in the scope would, indeed, be a major performance problem.  By making
> the subroutine volatile instead, the pseudo-volatility of the variables
> is isolated to just the one CALL statement, and does not affect
> performance elsewhere. As stated before, there is some potential
> performance hit at the call site itself, but any register reloads after
> the call would likely be coming from cache so the actual hit might be
> small.

Sorry, but THAT misses the point!  If the volatility applies just over
the actual call, then even flagging all of MyIsend, MyIrecv, MyWait,
MPI_Isend, MPI_Irecv and MPI_Wait won't help.  Remember that the update
can occur at ANY time between starting the transfer and waiting for its
completion, including when control is inside Transfer.
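
To make the window concrete, here is a minimal sketch reusing the names
from the example above (Window is just an illustrative name, and
Transfer here stands in for any work done between starting the transfer
and waiting for it):

    SUBROUTINE Window(buffer, array, imsg)
        REAL :: buffer(:,:,:), array(:,:,:)
        INTEGER :: imsg              ! The MPI request
        CALL MyIrecv(buffer, imsg)   ! buffer may start changing at any
                                     ! time after this call ...
        CALL Transfer(array)         ! ... including while control is
                                     ! inside Transfer ...
        CALL MyWait(imsg)            ! ... and stops changing only once
                                     ! this returns.
        ! Marking only the MyIrecv and MyWait call statements as volatile
        ! does not express that buffer is unstable throughout this region,
        ! which is what the compiler for Process actually needs to know.
    END SUBROUTINE Window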

So, if the logic is in any way complicated, the compiler has to treat
all variables as liable to be updated asynchronously at any time - and
that is equivalent to flagging them all as VOLATILE, with a consequent
loss of efficiency.



Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679




