[MPI3 Fortran] [MPI Forum] #69: Fortran interface to prevent/solve register optimization problems

MPI Forum mpi-22 at lists.mpi-forum.org
Sun Dec 14 05:06:37 CST 2008


#69: Fortran interface to prevent/solve register optimization problems
-------------------------------------+--------------------------------------
Reporter:  RolfRabenseifner          |       Owner:  RolfRabenseifner     
    Type:  New routine(s)            |      Status:  new                  
Priority:  Forum feedback requested  |   Milestone:  2008/12/15 Menlo Park
 Version:  MPI 2.2                   |    Keywords:                       
-------------------------------------+--------------------------------------
 == Description ==

 After we discussed this compile-time problem, it looks to me like the
 current advice in MPI-2.1, 466:26-468:31, is correct but awkward:

 {{{
   CALL DD(INBUF)
   CALL MPI_ROUTINE_WITH_INVISIBLE_BUFFERS(...)
     ! e.g., MPI_WAIT, or MPI_SEND/MPI_RECV with MPI_BOTTOM
   CALL DD(OUTBUF)
 }}}
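
 For concreteness, a minimal sketch of that advice, assuming an
 MPI_IRECV/MPI_WAIT pair (the names EXAMPLE, DD, BUF and the message
 parameters are placeholders, not part of any binding):

 {{{
       ! DD is a user-written dummy routine, typically kept in a separate
       ! compilation unit so the compiler cannot see that it does nothing.
       ! The compiler must assume DD may read or modify BUF, so BUF is
       ! written back to memory before the call and reloaded afterwards.
       SUBROUTINE DD(BUF)
       INTEGER BUF(*)
       RETURN
       END

       SUBROUTINE EXAMPLE(SRC, TAG, COMM)
       INCLUDE 'mpif.h'
       INTEGER SRC, TAG, COMM
       INTEGER BUF(100), REQ, STATUS(MPI_STATUS_SIZE), IERROR
       ! MPI_WAIT accesses BUF invisibly; DD around it keeps the
       ! compiler from caching BUF in registers across the wait.
       CALL MPI_IRECV(BUF,100,MPI_INTEGER,SRC,TAG,COMM,REQ,IERROR)
       CALL DD(BUF)
       CALL MPI_WAIT(REQ, STATUS, IERROR)
       CALL DD(BUF)
       END
 }}}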

 For all MPI_WAIT and MPI_TEST routines, the three call overheads
 (latency!) can be reduced to exactly one call by adding buffer
 argument(s).

 I believe that the method used by the MPI I/O split collective
 routines, where the buffer appears in both calls, is a good model:

 {{{
   MPI_File_..._begin(buf, ...)
   MPI_File_..._end(buf, ...)
 }}}
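
 As an illustration of that pattern, a sketch with a split collective
 write (FH, BUF, CNT, and the datatype are placeholders): because the
 buffer is an explicit argument of both the begin and the end call, the
 compiler sees it as live across the whole operation.

 {{{
       ! Split collective write: BUF is passed to both calls, so the
       ! compiler cannot keep it in a register across the operation.
       CALL MPI_FILE_WRITE_ALL_BEGIN(FH, BUF, CNT, MPI_REAL, IERROR)
       ! ... computation that does not touch BUF ...
       CALL MPI_FILE_WRITE_ALL_END(FH, BUF, STATUS, IERROR)
 }}}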

 I would like to propose:
  - Special additional Fortran bindings with additional
    buffer argument(s) for all MPI_WAIT/MPI_TEST... routines.
    In many cases, the several calls can then be reduced to one call
    with this/these additional buffer argument(s); see the sketch
    after this list.
  - Special MPI Fortran routines that substitute for the
    user-written DD routine above. These are needed for all the
    MPI_BOTTOM problems.
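
 A sketch of the intended effect for the simplest case (MPI_WAIT_FB is
 only a proposed name, not part of any MPI standard; DD is the
 user-written dummy routine from above):

 {{{
       ! Today: three calls are needed (MPI-2.1 advice)
       CALL DD(BUF)
       CALL MPI_WAIT(REQ, STATUS, IERROR)
       CALL DD(BUF)

       ! With the proposed binding: one call, BUF passed explicitly
       CALL MPI_WAIT_FB(REQ, STATUS, IERROR, BUF)
 }}}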

 I would like to add this to MPI-2.2.

 Is it okay if I open an appropriate ticket with this proposal so that
 we can discuss it Monday through Wednesday in the MPI-2.2 slots of the
 meeting?

 With this, one of the Fortran problems may be solved better than
 in MPI-2.1.

 == History ==

 In Fortran, we have at least two areas that need corrections:
  1. the register optimization problems,
  2. allowing a Fortran interface description.

 Both problems are independent.[[BR]]
 Problem 1 can already be attacked in MPI-2.2.
 Due to the MPI-2.2 schedule, it seems necessary to open
 the ticket now.[[BR]]
 Problem 2 can be solved only together with the Fortran standardization
 body; this is being worked on by the Fortran subgroup.

 == Proposed Solution ==

 Add additional Fortran interfaces (not new language-independent
 routines) for MPI_WAIT... and MPI_TEST...

 FB stands for "with Fortran Buffer". [[BR]]
 FBM stands for "with Fortran Buffers, Multiple".

 MPI_WAIT_FB (REQUEST, STATUS, IERROR, BUF) [[BR]]
 MPI_TEST_FB (REQUEST, FLAG, STATUS, IERROR, BUF)

 MPI_WAITANY_FB (COUNT, ARRAY_OF_REQUEST, INDEX, STATUS, IERROR, BUF) [[BR]]
 MPI_WAITANY_FBM(COUNT, ARRAY_OF_REQUEST, INDEX, STATUS, IERROR, ...) [[BR]]
 MPI_TESTANY_FB (COUNT, ARRAY_OF_REQUEST, INDEX, FLAG, STATUS, IERROR, BUF) [[BR]]
 MPI_TESTANY_FBM(COUNT, ARRAY_OF_REQUEST, INDEX, FLAG, STATUS, IERROR, ...)

 MPI_WAITALL_FB (COUNT, ARRAY_OF_REQUEST, ARRAY_OF_STATUS, IERROR, BUF) [[BR]]
 MPI_WAITALL_FBM(COUNT, ARRAY_OF_REQUEST, ARRAY_OF_STATUS, IERROR, ...) [[BR]]
 MPI_TESTALL_FB (COUNT, ARRAY_OF_REQUEST, FLAG, ARRAY_OF_STATUS, IERROR, BUF) [[BR]]
 MPI_TESTALL_FBM(COUNT, ARRAY_OF_REQUEST, FLAG, ARRAY_OF_STATUS, IERROR, ...)

 MPI_WAITSOME_FB (INCOUNT, ARRAY_OF_REQUEST, OUTCOUNT, ARRAY_OF_INDICES, ARRAY_OF_STATUS, IERROR, BUF) [[BR]]
 MPI_WAITSOME_FBM(INCOUNT, ARRAY_OF_REQUEST, OUTCOUNT, ARRAY_OF_INDICES, ARRAY_OF_STATUS, IERROR, ...) [[BR]]
 MPI_TESTSOME_FB (INCOUNT, ARRAY_OF_REQUEST, OUTCOUNT, ARRAY_OF_INDICES, ARRAY_OF_STATUS, IERROR, BUF) [[BR]]
 MPI_TESTSOME_FBM(INCOUNT, ARRAY_OF_REQUEST, OUTCOUNT, ARRAY_OF_INDICES, ARRAY_OF_STATUS, IERROR, ...)
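
 For illustration only (all _FB/_FBM names are proposed; the "..." in
 the FBM bindings is read here as a trailing list of buffer arguments):

 {{{
       ! Proposed multi-buffer variant: complete two requests and name
       ! both buffers explicitly so the compiler treats them as used here.
       CALL MPI_WAITALL_FBM(2, REQS, STATS, IERROR, SENDBUF, RECVBUF)

       ! Proposed single-buffer variant:
       CALL MPI_WAITALL_FB(2, REQS, STATS, IERROR, RECVBUF)
 }}}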

 New Fortran interface routines:

 MPI_INBUF_FB (BUF) [[BR]]
 MPI_OUTBUF_FB (BUF) [[BR]]
 MPI_INOUTBUF_FB (BUF) [[BR]]
 MPI_INBUF_FBM(...) [[BR]]
 MPI_OUTBUF_FBM(...) [[BR]]
 MPI_INOUTBUF_FBM(...)
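
 These are meant to substitute for the user-written DD routine when the
 buffer is not an argument of the MPI call at all, e.g. with MPI_BOTTOM
 and a datatype built from absolute addresses. A sketch (MPI_OUTBUF_FB
 is a proposed name; ABSTYPE is assumed to be such a datatype):

 {{{
       ! BUF is addressed only through MPI_BOTTOM and ABSTYPE, so it
       ! never appears as a call argument.  The proposed marker routine
       ! tells the compiler that BUF may have been modified by MPI:
       CALL MPI_RECV(MPI_BOTTOM,1,ABSTYPE,SRC,TAG,COMM,STATUS,IERROR)
       CALL MPI_OUTBUF_FB(BUF)
 }}}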

 == Impact on Implementations ==

 Small. Dummy routines or alias routines must be added. The C
 implementations of the alias routines are identical to those of the
 existing MPI_Wait/Test... routines.
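
 The ticket suggests C aliases; purely to illustrate that no real work
 is required, here is an alternative all-Fortran wrapper sketch for one
 of the proposed routines (MPI_WAIT_FB is a proposed name):

 {{{
       SUBROUTINE MPI_WAIT_FB(REQUEST, STATUS, IERROR, BUF)
       INCLUDE 'mpif.h'
       INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
       INTEGER BUF(*)
       ! BUF is never touched; listing it in the interface is enough to
       ! make the caller's compiler assume this call may access it.
       CALL MPI_WAIT(REQUEST, STATUS, IERROR)
       END
 }}}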

 == Impact on Applications / Users ==

 No impact on existing applications.[[BR]]
 Future applications can use methods that are clearly documented in
 MPI-2.2 and have less latency overhead.

 == Alternative Solutions ==

 Stay with the current hack in MPI-2.1, 466:26-468:31.

 == Entry for the Change Log ==

 To be done!

-- 
Ticket URL: <https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/69>
MPI Forum <https://svn.mpi-forum.org/>