[MPI3 Fortran] ASYNC attribute / SYNC_POINT

Rolf Rabenseifner rabenseifner at hlrs.de
Thu May 7 02:24:28 CDT 2009


Hi all,

yes, I agree with your argument and therefore,
I withdraw my SYNC_POINT proposal.

Best regards
Rolf

On Wed, 06 May 2009 14:21:43 -0700
  Aleksandar Donev <donev1 at llnl.gov> wrote:
> We had a discussion about this option and concluded that it is not 
>necessary to mark the actual routines like WAIT, i.e., we concluded 
>that SYNC_POINT was not necessary. In fact, it is problematic because 
>it would require putting it on any routine that itself calls 
>MPI_Wait. Otherwise, what exactly is its point?
> 
> The compiler simply must not move accesses to ASYNC variables around 
>procedure calls it does not know enough about. This is already so for 
>TARGET, ASYNCHRONOUS, and VOLATILE.
> 
> Aleks
> 
> Rolf Rabenseifner wrote:
>> I try to write my ASYNC/SYNC proposal in a similar way as
>> Craigs proposal.
>> 
>> ASYNC implies that a variable or a portion of it (e.g., a subarray) 
>>is 
>> potentially active in asynchronous operations outside of Fortran.
>> Whether it is used asynchronously or which portion is used 
>> asynchronously, can be changed at each synchronization point.
>> A synchronization point is either a routine call to a routine with 
>>the 
>> ASYNC variable as an actual argument, or a call to a routine that 
>>has an 
>> explicit interface with the SYNC_POINT option.
>> 
>> If a variable is declare as ASYNC within a block, then within this 
>>block
>> - motion of code involving this ASYNC variable would be suppressed 
>> across any synchronization point,
>> - between two synchronization points, only those portions of the 
>>ASYNC 
>> variable can be written to the memory that are explicitly modified 
>>by 
>> the application program (i.e., caching of other portions, 
>>overwriting 
>> and restoring them for, e.g., loop fusion is prohibited),
>> - if it is an actual argument in a call to a function or subroutine 
>>with 
>> a explicit interface then copyin/out of this argument is prohibited.
>> 
>> If a dummy argument is declared as ASYNC in an explicit interface, 
>>then 
>> copyin/out of actual arguments is prohibited.
>> 
>> Comments:
>> - This is less restrictive than VOLATILE. For numerical operations, 
>> registers can be used as usual. Same data that is used in several 
>> iterations of a loop can be efficiently cached in registers 
>>independent 
>> of whether ASYNC is set or not.
>> 
>> - All MPI WAIT, TEST and communication routines (because of usage of 
>> MPI_BOTTOM and absolute addresses in MPI derived datatypes) must be 
>> declared with SYNC_POINT option.
>> 
>> Usage:
>> 
>> ----------------
>> 
>> real, ASYNC :: buf(100,100)
>> type(MPI_Request) :: request
>> 
>> ! initiate data transfer of boundary
>> CALL MPI_IRecv(buf(:,1),...,request,...) ! no copyin/copyout will 
>>happen
>> 
>> ! do work on interior of the buffer while transfer is in progress
>> CALL UPDATE_INTERIOR(buf) ! dummy arg should have ASYNC attribute
>> 
>> ! wait for the communication to finish
>> CALL MPI_Wait(request,...) ! no code motion allowed with respect to 
>>buf
>> 
>> ! finish work on buffer boundaries
>> CALL UPDATE_BOUNDARIES(buf) ! should copyin/copyout be allowed here?
>> 
>> ----------------
>> 
>> interface
>> 
>> SUBROUTINE UPDATE_INTERIOR(buf)
>>    TYPE(*), DIMENSION(..), ASYNC :: buf
>>    ! ASYNC is necessary to guarantee that
>>    ! only those portions of buf are written to
>>    ! the memory that are explicitly modified
>>    ! by UPDATE_INTERIOR
>> END SUBROUTINE UPDATE_INTERIOR
>> 
>> SUBROUTINE UPDATE_BOUNDARIES(buf)
>>    TYPE(*), DIMENSION(..) :: buf
>> END SUBROUTINE UPDATE_BOUNDARIES
>> 
>> 
>> SUBROUTINE MPI_Irecv(buf, count, datatype, source, tag, comm, 
>>request, 
>> err) SYNC_POINT BIND(C, name="MPI_Irecv")
>>    import MPI_Datatype, MPI_Comm, MPI_Request
>>    TYPE(*), DIMENSION(..), ASYNC :: buf
>>    integer, value, intent(in) :: count
>>    type(MPI_Datatype), intent(in) :: datatype
>>    integer, value, intent(in) :: source
>>    integer, value, intent(in) :: tag
>>    type(MPI_Comm), intent(in) :: comm
>>    type(MPI_Request), intent(out) :: request
>>    integer, intent(out) :: err
>> END SUBROUTINE MPI_Irecv
>> 
>> SUBROUTINE MPI_Isend(buf, count, datatype, dest, tag, comm, request, 
>> err) SYNC_POINT BIND(C, name="MPI_Isend")
>>    import MPI_Datatype, MPI_Comm, MPI_Request
>>    TYPE(*), DIMENSION(..), ASYNC :: buf
>>    integer, value, intent(in) :: count
>>    type(MPI_Datatype), intent(in) :: datatype
>>    integer, value, intent(in) :: dest
>>    integer, value, intent(in) :: tag
>>    type(MPI_Comm), intent(in) :: comm
>>    type(MPI_Request), intent(out) :: request
>>    integer, intent(out) :: err
>> END SUBROUTINE MPI_Isend
>> 
>> SUBROUTINE MPI_Wait(request, status, err) SYNC_POINT BIND(C, 
>> name="MPI_Wait")
>>    import MPI_Request, MPI_Status
>>    type(MPI_Request), intent(in) :: request ! inout?
>>    type(MPI_Status), intent(out) :: status
>>    integer, intent(out) :: err
>> END SUBROUTINE MPI_Wait
>> 
>> end interface
>> 
>> --------------
>> 
>> Any comments?
>> 
>> Best regards
>> Rolf
>> 
>> 
>> 
>> 
>> 
>> On Tue, 5 May 2009 17:45:00 -0600
>>  Craig Rasmussen <crasmussen at newmexicoconsortium.org> wrote:
>>> I'm at the WG5 Fortran standards meeting and I think we've made 
>>> significant advances regarding the MPI-3 Fortran API.  We are 
>>> discussing adding a new variable attribute, ASYNC.  I'd like to get 
>>> feedback from the MPI-3 Fortran working group on these possible 
>>>changes.
>>>
>>> ASYNC implies that a variable is potentially active in asynchronous 
>>> operations outside of Fortran.  The use of this new attribute should 
>>> give the compiler enough information to inhibit optimizations 
>>>similar  
>>> to inhibitions involved in Fortran asynchronous I/O, specifically  
>>> copyin/copyout and code motion.  The compiler will likely have to  
>>> inhibit code motion regarding any use of a variable with the ASYNC  
>>> attribute and procedures other than intrinsics.  The affect is 
>>> similar  to the use of the volatile attribute (but not regarding 
>>>loads 
>>> and  stores).
>>>
>>> Usage is outlined below:
>>>
>>> ----------------
>>>
>>> real, ASYNC_EXTERNAL :: buf(100,100)
>>> type(MPI_Request) :: request
>>>
>>> ! initiate data transfer of boundary
>>> CALL MPI_IRecv(buf(:,1),...,request,...)  ! no copyin/copyout will 
>>>happen
>>>
>>> ! do work on interior of the buffer while transfer is in progress
>>> CALL UPDATE_INTERIOR(buf)  ! dummy arg should have ASYNC attribute
>>>
>>> ! wait for the communication to finish
>>> CALL MPI_Wait(request,...)   ! no code motion allowed with respect 
>>>to  
>>> buf
>>>
>>> ! finish work on buffer boundaries
>>> CALL UPDATE_BOUNDARIES(buf)  ! should copyin/copyout be allowed 
>>>here?
>>>
>>> ----------------
>>>
>>> So how does this look?  Anything we've left out?  I'm also including 
>>> the interface definition.
>>>
>>> - craig
>>>
>>> ----------------
>>>
>>> interface
>>>
>>> SUBROUTINE MPI_Irecv(buf, count, datatype, source, tag, comm, 
>>> request,  err) BIND(C, name="MPI_Irecv")
>>>    import MPI_Datatype, MPI_Comm, MPI_Request
>>>    TYPE(*), DIMENSION(..), ASYNC   :: buf
>>>    integer, value,     intent(in)  :: count
>>>    type(MPI_Datatype), intent(in)  :: datatype
>>>    integer, value,     intent(in)  :: source
>>>    integer, value,     intent(in)  :: tag
>>>    type(MPI_Comm),     intent(in)  :: comm
>>>    type(MPI_Request),  intent(out) :: request
>>>    integer,            intent(out) :: err
>>> END SUBROUTINE MPI_Irecv
>>>
>>> SUBROUTINE MPI_Isend(buf, count, datatype, dest, tag, comm, request, 
>>> err) BIND(C, name="MPI_Isend")
>>>    import MPI_Datatype, MPI_Comm, MPI_Request
>>>    TYPE(*), DIMENSION(..), ASYNC   :: buf
>>>    integer, value,     intent(in)  :: count
>>>    type(MPI_Datatype), intent(in)  :: datatype
>>>    integer, value,     intent(in)  :: dest
>>>    integer, value,     intent(in)  :: tag
>>>    type(MPI_Comm),     intent(in)  :: comm
>>>    type(MPI_Request),  intent(out) :: request
>>>    integer,            intent(out) :: err
>>> END SUBROUTINE MPI_Isend
>>>
>>> SUBROUTINE MPI_Wait(request, status, err) BIND(C, name="MPI_Wait")
>>>    import MPI_Request, MPI_Status
>>>    type(MPI_Request), intent(in)  :: request  !    inout?
>>>    type(MPI_Status),  intent(out) :: status
>>>    integer,           intent(out) :: err
>>> END SUBROUTINE MPI_Wait
>>>
>>> end interface
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> mpi3-fortran mailing list
>>> mpi3-fortran at lists.mpi-forum.org
>>> http:// lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-fortran
>> 
>> Dr. Rolf Rabenseifner . . . . . . . . . .. email 
>>rabenseifner at hlrs.de
>> High Performance Computing Center (HLRS) . phone 
>>++49(0)711/685-65530
>> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 
>>685-65832
>> Head of Dpmt Parallel Computing . . . www. 
>>hlrs.de/people/rabenseifner
>> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
>> _______________________________________________
>> mpi3-fortran mailing list
>> mpi3-fortran at lists.mpi-forum.org
>> http:// lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-fortran
>> 
> 
> 
> -- 
> Aleksandar Donev, Ph.D.
> Lawrence Postdoctoral Fellow @ LLNL
> High Performance Computational Materials Science and Chemistry
> E-mail: donev1 at llnl.gov
> Phone: (925) 424-6816  Fax: (925) 423-0785
> Address: P.O.Box 808, L-367, Livermore, CA 94551-9900
> Web: http://cims.nyu.edu/~donev
> _______________________________________________
> mpi3-fortran mailing list
> mpi3-fortran at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-fortran

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)



More information about the mpiwg-fortran mailing list