[MPI3 Fortran] ASYNC attribute

Aleksandar Donev donev1 at llnl.gov
Wed May 6 16:31:20 CDT 2009


Rolf Rabenseifner wrote:

> - What is the difference between ASYNC_EXTERNAL and ASYNC?
One is reserved for Fortran I/O, the other for other stuff :-)

> - What are the rules of handing over
We are debating them...will update you later.

Aleks

>   1. an ASYNC_EXTERNAL real variable or ASYNC dummy argument
>      handed over to a subroutine with a dummy argument
>      defined __with__ ASYNC
>   2. same but
>      defined _without_ ASYNC
>   3. a _normal_ variable or normal dummy argument handed over
>      to a subroutine with a dummy argument defined __with__ ASYNC
> 
>   I would expect that all three cases should be allowed.
> 
> I would like to have the following detailed behavior:
>  
> - Copyin/out is prohibited in all 3 cases, i.e.,
>   if MPI_Irecv declares buf as ASYNC, then copyin/out is
>   prohibited when MPI_Irecv is called; and if buf is
>   declared outside as ASYNC_EXTERNAL, then copyin/out is
>   likewise prohibited when buf is used as an actual argument
>   in a function or subroutine call.
>   (A sketch of why this matters follows this list.)
> 
> - The scope of ASYNC or ASYNC_EXTERNAL is not inherited by
>   called routines. The called routines must set ASYNC on dummy
>   arguments.
> 
> - As long as ASYNC or ASYNC_EXTERNAL is valid within the scope
>   of a block, accesses to that variable must not be moved
>   across any subroutine or function call.
> 
> - The Fortran compiler is allowed to modify ASYNC or ASYNC_EXTERNAL
>   variables, or parts of them, only where the application itself
>   codes such a modification.
> 
> - Between two function or subroutine calls,
>   the compiler may cache ASYNC or ASYNC_EXTERNAL variables in
>   registers to optimize operations or repeated accesses.
>   (This is a significant difference from VOLATILE!)
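>
>   As a concrete illustration of the copyin/out rule (a minimal
>   sketch; ASYNC_EXTERNAL is the proposed attribute, not yet
>   standard Fortran):
>
>     real, ASYNC_EXTERNAL :: buf(100,100)
>     type(MPI_Request)    :: request
>
>     ! buf(1,:) is a non-contiguous section.  Without the attribute,
>     ! the compiler may pass a contiguous temporary (copyin/out); MPI
>     ! would then write into the temporary, which is copied back (or
>     ! discarded) on return -- long before MPI_Wait, losing the data.
>     call MPI_Irecv(buf(1,:), ..., request, ...)  ! must pass buf itself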
> 
> Reason for my wishes:
> - Each function or subroutine call can act as a synchronization point
>   at which the application is informed that some data has been stored
>   or used by asynchronously running routines.
> 
> Implications:
> - In your example, one needs
>    SUBROUTINE UPDATE_INTERIOR(buf)
>    TYPE(*), DIMENSION(..), ASYNC   :: buf
>   This is needed to prevent, e.g., temporary caching, or overwriting
>   and restoring of parts of the data for loop fusion.
>   (A sketch of such a routine follows this list.)
> 
> - In your example, buf cannot be transferred with copyin/out
>   in the call to UPDATE_BOUNDARIES, because "ASYNC_EXTERNAL :: buf"
>   is already declared in the calling block.
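>
>   For illustration, a minimal sketch of UPDATE_INTERIOR as it would
>   then look (ASYNC is the proposed attribute; the body is a
>   hypothetical stencil update, declared with a concrete type here so
>   it can compute on buf):
>
>     SUBROUTINE UPDATE_INTERIOR(buf)
>       REAL, ASYNC :: buf(:,:)   ! proposed attribute on the dummy
>       ! Touch only interior elements; the boundary may still be
>       ! written asynchronously by MPI while this routine runs.
>       buf(2:size(buf,1)-1, 2:size(buf,2)-1) = &
>           0.25 * buf(2:size(buf,1)-1, 2:size(buf,2)-1)
>     END SUBROUTINE UPDATE_INTERIOR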
> 
> Open question:
> - Do we want a method to declare that a call acts as a
>   synchronization point for one or several buffers,
>   and that other calls do not act as synchronization points?
>   I.e., do we want a method to change the default:
>     The default is that any routine can act as a synchronization point,
>     i.e., any (register) caching must be done between such points.
>     This means any BLAS or LAPACK routine is handled like a call
>     to MPI_WAIT.
> - I see two possibilities:
>   A) To change this default by some special syntax
>   B) Not to use this default at all.
>      Proposal:
>      Synchronizing routines must be declared as "SYNC_EXTERNAL"
> 
>      I.e.,
>      SUBROUTINE MPI_Wait(request, status, err) SYNC_EXTERNAL, &
>          BIND(C, name="MPI_Wait")
> 
>      This means that the asynchronously written portions and the
>      locally written portions of an ASYNC or ASYNC_EXTERNAL variable
>      can change at any call where the variable is an actual argument,
>      or where the called routine has the SYNC_EXTERNAL flag.
> 
>      In the MPI API, we would give all WAIT, TEST and communication
>      routines this SYNC_EXTERNAL flag.
>      Reason:
>       -- WAIT and TEST: obvious.
>       -- communication routines can have invisible buffer arguments
>          due to the use of absolute addresses in MPI derived
>          datatypes and the usage of MPI_BOTTOM
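>
>      To sketch the intended effect (a minimal sketch; SYNC_EXTERNAL
>      is the proposed flag, and DAXPY stands for any ordinary library
>      routine that is not declared SYNC_EXTERNAL):
>
>        real, ASYNC_EXTERNAL :: buf(100,100)
>
>        call MPI_Irecv(buf(:,1), ..., request, ...)  ! SYNC_EXTERNAL:
>                                                     ! synchronization point
>        call DAXPY(...)       ! not SYNC_EXTERNAL, buf not an argument:
>                              ! the compiler may keep parts of buf cached
>                              ! in registers across this call
>        call MPI_Wait(request, ...)                  ! SYNC_EXTERNAL:
>                                                     ! cached copies of buf
>                                                     ! must be written back
>                                                     ! and reloaded here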
> 
> What do you think about this behavior?
> 
> Best regards
> Rolf
> 
> On Tue, 5 May 2009 17:45:00 -0600
>  Craig Rasmussen <crasmussen at newmexicoconsortium.org> wrote:
>> I'm at the WG5 Fortran standards meeting and I think we've made 
>> significant advances regarding the MPI-3 Fortran API.  We are 
>> discussing adding a new variable attribute, ASYNC.  I'd like to get 
>> feedback from the MPI-3 Fortran working group on these possible changes.
>>
>> ASYNC implies that a variable is potentially active in asynchronous
>> operations outside of Fortran.  The use of this new attribute should
>> give the compiler enough information to inhibit optimizations, similar
>> to the inhibitions involved in Fortran asynchronous I/O, specifically
>> copyin/copyout and code motion.  The compiler will likely have to
>> inhibit code motion involving any use of a variable with the ASYNC
>> attribute around procedure calls other than intrinsics.  The effect is
>> similar to the use of the VOLATILE attribute (but not with regard to
>> loads and stores).
>>
>> Usage is outlined below:
>>
>> ----------------
>>
>> real, ASYNC_EXTERNAL :: buf(100,100)
>> type(MPI_Request) :: request
>>
>> ! initiate data transfer of boundary
>> CALL MPI_Irecv(buf(:,1),...,request,...)  ! no copyin/copyout will happen
>>
>> ! do work on interior of the buffer while transfer is in progress
>> CALL UPDATE_INTERIOR(buf)  ! dummy arg should have ASYNC attribute
>>
>> ! wait for the communication to finish
>> CALL MPI_Wait(request,...)   ! no code motion allowed with respect to buf
>>
>> ! finish work on buffer boundaries
>> CALL UPDATE_BOUNDARIES(buf)  ! should copyin/copyout be allowed here?
>>
>> ----------------
>>
>> So how does this look?  Anything we've left out?  I'm also including 
>> the interface definition.
>>
>> - craig
>>
>> ----------------
>>
>> interface
>>
>> SUBROUTINE MPI_Irecv(buf, count, datatype, source, tag, comm, &
>>                      request, err) BIND(C, name="MPI_Irecv")
>>    import MPI_Datatype, MPI_Comm, MPI_Request
>>    TYPE(*), DIMENSION(..), ASYNC   :: buf
>>    integer, value,     intent(in)  :: count
>>    type(MPI_Datatype), intent(in)  :: datatype
>>    integer, value,     intent(in)  :: source
>>    integer, value,     intent(in)  :: tag
>>    type(MPI_Comm),     intent(in)  :: comm
>>    type(MPI_Request),  intent(out) :: request
>>    integer,            intent(out) :: err
>> END SUBROUTINE MPI_Irecv
>>
>> SUBROUTINE MPI_Isend(buf, count, datatype, dest, tag, comm, &
>>                      request, err) BIND(C, name="MPI_Isend")
>>    import MPI_Datatype, MPI_Comm, MPI_Request
>>    TYPE(*), DIMENSION(..), ASYNC   :: buf
>>    integer, value,     intent(in)  :: count
>>    type(MPI_Datatype), intent(in)  :: datatype
>>    integer, value,     intent(in)  :: dest
>>    integer, value,     intent(in)  :: tag
>>    type(MPI_Comm),     intent(in)  :: comm
>>    type(MPI_Request),  intent(out) :: request
>>    integer,            intent(out) :: err
>> END SUBROUTINE MPI_Isend
>>
>> SUBROUTINE MPI_Wait(request, status, err) BIND(C, name="MPI_Wait")
>>    import MPI_Request, MPI_Status
>>    type(MPI_Request), intent(in)  :: request  ! intent(inout)?
>>    type(MPI_Status),  intent(out) :: status
>>    integer,           intent(out) :: err
>> END SUBROUTINE MPI_Wait
>>
>> end interface
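>>
>> For completeness, a minimal sketch of a caller against this interface
>> (assuming handle constants MPI_REAL, MPI_INTEGER and MPI_COMM_WORLD of
>> the derived types used above; the buffers are hypothetical):
>>
>>   real              :: rbuf(100,100)
>>   integer           :: ibuf(50)
>>   type(MPI_Request) :: req1, req2
>>   integer           :: err
>>
>>   ! TYPE(*), DIMENSION(..) accepts any type and rank directly,
>>   ! so both calls pass the actual buffer without copyin/out:
>>   call MPI_Irecv(rbuf, 10000, MPI_REAL,    0, 1, MPI_COMM_WORLD, req1, err)
>>   call MPI_Irecv(ibuf,    50, MPI_INTEGER, 1, 2, MPI_COMM_WORLD, req2, err)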
>>
> 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> 


-- 
Aleksandar Donev, Ph.D.
Lawrence Postdoctoral Fellow @ LLNL
High Performance Computational Materials Science and Chemistry
E-mail: donev1 at llnl.gov
Phone: (925) 424-6816  Fax: (925) 423-0785
Address: P.O.Box 808, L-367, Livermore, CA 94551-9900
Web: http://cims.nyu.edu/~donev


