[MPI3 Fortran] MPI 3 text is ready

Rolf Rabenseifner rabenseifner at hlrs.de
Thu Sep 23 16:13:17 CDT 2010


Nick,

thank you for your example.
I had overlooked that the MPI library must not copy
contiguous subarrays.

If I understand correctly, the code will fail only if
 - Fred is called with a non-contiguous subarray of wrong size, or
 - MPI_Scatter makes internally a copy of its buffer although
   the buffer is contiguous.

Let us require that MPI_Scatter must not make an internal copy
based on the data in the dope vector if the buffer is contiguous.
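
The rule can be stated operationally: from the dope vector alone, a rank-1
section is contiguous if and only if its byte stride equals its element size
(or it has at most one element). A minimal sketch, using a hypothetical
simplified descriptor (illustration only, not the actual layout of any
compiler or of the TR):

```python
from collections import namedtuple

# Hypothetical, simplified rank-1 dope vector: base address, element size
# in bytes, number of elements, and byte stride between elements.
Dope1 = namedtuple("Dope1", "base elem_size extent stride")

def is_contiguous(d):
    """A rank-1 section is contiguous iff consecutive elements are
    elem_size bytes apart (sections with <= 1 element trivially are)."""
    return d.extent <= 1 or d.stride == d.elem_size

whole   = Dope1(base=0, elem_size=4, extent=35, stride=4)    # array(1:35)
strided = Dope1(base=0, elem_size=4, extent=5,  stride=28)   # array(1:35:7)

print(is_contiguous(whole))    # True  -> MPI must use the buffer in place
print(is_contiguous(strided))  # False -> the library may copy in/out
```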

Four examples of correct calls:


 CALL MPI_Comm_procs(procs,error)
 CALL MPI_Comm_rank(rank,error)

A)
 IF (rank == root) THEN
   CALL Fred (array(1:size*procs), size, root)
 ELSE
   CALL Fred (array(1:size), size, root)
 ENDIF

B)
 IF (rank == root) THEN
   CALL Fred (array(1), size, root)
 ELSE
   CALL Fred (array(1), size, root)
 ENDIF

C)
 size=35
 IF (rank == root) THEN
   CALL Fred (array(1:17), size, root)
 ELSE
   CALL Fred (array(1:17), size, root)
 ENDIF

D)
 IF (rank == root) THEN
   CALL Fred (array(1:7*size*procs:7), size, root)
 ELSE
   CALL Fred (array(1:7*size:7), size, root)
 ENDIF
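
(As a cross-check of the claimed contiguity, here is a NumPy analogy; the
slices mirror the Fortran sections in A-D with 0-based, half-open indexing.
NumPy is only an illustration here, but its stride handling matches the
dope-vector rule: only D is strided.)

```python
import numpy as np

size, procs = 5, 4
array = np.arange(7 * size * procs)

a = array[0:size*procs]      # A) array(1:size*procs)     -> contiguous
b = array[0:1]               # B) array(1)                -> contiguous
c = array[0:17]              # C) array(1:17)             -> contiguous
d = array[0:7*size*procs:7]  # D) array(1:7*size*procs:7) -> strided

for name, s in [("A", a), ("B", b), ("C", c), ("D", d)]:
    print(name, s.flags["C_CONTIGUOUS"])
```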

Examples A-C also work with mpif.h and implicit interfaces,
as long as the compiler passes contiguous arrays by reference.
The array size need not be correct.

Example D also works with mpif.h as long as the array size is
really correct, i.e., not too small.
With the dope vector it should also work, although the MPI library
may copy the data, because the array size must be correct.

My conclusion is still:
 - the new method works,
 - but we need to explicitly state that contiguous arrays must not be copied.

Is this now correct?

Best regards
Rolf
   


----- Original Message -----
> From: "N.M. Maclaren" <nmm1 at cam.ac.uk>
> To: "MPI-3 Fortran working group" <mpi3-fortran at lists.mpi-forum.org>
> Sent: Thursday, September 23, 2010 11:04:01 AM
> Subject: Re: [MPI3 Fortran] MPI 3 text is ready
> On Sep 22 2010, Rolf Rabenseifner wrote:
> >
> >Nick and Craig, are these assumptions fully correct?
> 
> The issue isn't whether the individual assumptions are correct, but
> whether they fit together with the existing usage (Fortran and MPI).
> 
> For example, if choice buffers use that mechanism, then the following
> code will fail (and is one of the most common ways of using MPI):
> 
> SUBROUTINE Fred (array, size, root)
> INTEGER, INTENT(INOUT) :: array(*)
> INTEGER, INTENT(IN) :: size, root
> INTEGER :: error
> CALL Scatter(array,size,MPI_INTEGER,root,MPI_COMM_WORLD,error)
> END SUBROUTINE Fred
> 
> The failure is fundamental and unfixable in the current design of the
> Interop TR - I proposed an alternative that would not fail, but it was
> felt to be outside the TR's scope. The fix is a source change:
> 
> SUBROUTINE Fred (array, size, root)
> INTEGER, INTENT(INOUT) :: array(*)
> INTEGER, INTENT(IN) :: size, root
> INTEGER :: procs, rank, error
> CALL MPI_Comm_procs(procs,error)
> CALL MPI_Comm_rank(rank,error)
> IF (rank == root) THEN
> CALL Scatter(array(:procs*size),size,MPI_INTEGER, &
> root,MPI_COMM_WORLD,error)
> ELSE
> CALL Scatter(array(:size),size,MPI_INTEGER, &
> root,MPI_COMM_WORLD,error)
> END IF
> END SUBROUTINE Fred
> 
> I assert that that level of incompatibility is a major issue.
> 
> Regards,
> Nick.
> 
> _______________________________________________
> mpi3-fortran mailing list
> mpi3-fortran at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-fortran

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)


