[MPI3 Fortran] Register optimization problems
Jeff Squyres
jsquyres at cisco.com
Sat Dec 13 15:19:16 CST 2008
On Dec 13, 2008, at 10:15 AM, Rolf Rabenseifner wrote:
> After we have discussed this compile-time problem, it looks to me
> like the current advice at 466:26-468:31 is okay but awkward:
>
> DD(inbuf);
> MPI_routine_with_invisible_buffers();
> /* e.g., MPI_Wait, or MPI_Send/Recv with MPI_BOTTOM */
> DD(outbuf);
>
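For anyone skimming the thread: the DD pattern quoted above is the
MPI-2.1 workaround in which DD is a user-written external routine,
compiled separately, so the compiler must assume DD may read or write
the buffer and therefore cannot keep the buffer in a register across
the wait.  A minimal Fortran sketch -- the names and the request/status
plumbing are purely illustrative:

  ! Sketch of the MPI-2.1 "DD" workaround (names illustrative).
  SUBROUTINE recv_with_dd(comm, src)
    USE mpi
    INTEGER comm, src, req, ierr
    INTEGER status(MPI_STATUS_SIZE)
    REAL buf(100)

    CALL MPI_IRECV(buf, 100, MPI_REAL, src, 0, comm, req, ierr)
    ! ... unrelated work; the compiler may cache buf in a register ...
    CALL DD(buf)                       ! buf escapes to an opaque routine,
    CALL MPI_WAIT(req, status, ierr)   ! so it is written back before the
    CALL DD(buf)                       ! wait and reloaded afterwards
    ! buf may now be read safely
  END SUBROUTINE

  ! DD must live in a separate compilation unit so the compiler cannot
  ! see that it does nothing.
  SUBROUTINE DD(buf)
    REAL buf(*)
  END SUBROUTINE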
> I believe that the approach of duplicating the buffer argument, as in
> the MPI I/O split collective routines
> MPI_File...begin(buf,...);
> MPI_File...end(buf,....);
>
> is a good idea.
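Concretely, that analogy works because the buffer appears again in the
..._end call, so the compiler can see at that point that the buffer may
be accessed and will not keep a stale register copy past it.  A rough
sketch with one of the read routines (arguments illustrative):

  ! buf is an argument of both calls, so the compiler treats buf as
  ! possibly modified at the ..._END call; no dummy DD routine needed.
  CALL MPI_FILE_READ_ALL_BEGIN(fh, buf, 100, MPI_REAL, ierr)
  ! ... overlap other work here ...
  CALL MPI_FILE_READ_ALL_END(fh, buf, status, ierr)
  ! buf now holds the data read from the file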
> I would like to propose:
> - Special additional Fortran bindings with one additional buffer
>   argument for all MPI_Wait/Test routines.
>   In many cases, the several calls needed today could then be reduced
>   to one call with this additional buffer argument.
What about the case where the req is parameterized (i.e., general
application code where the req may be a variable)?  And what about the
array versions of test/wait -- they would require multiple buffer
arguments...?
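Just to make the shapes concrete -- the bindings below are purely
hypothetical, invented here to illustrate the proposal and the
objection; nothing like them exists in any standard or implementation:

  ! HYPOTHETICAL bindings, invented names.
  ! Single-request case: the extra buffer argument makes buf visible to
  ! the compiler at the wait, replacing both DD calls:
  CALL MPI_WAIT_WITH_BUFFER(req, buf, status, ierr)
  ! Array case: each request may refer to a different buffer, so one
  ! extra argument is not enough for MPI_Waitall/MPI_Testall and friends.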
> - A special MPI Fortran routine to substitute for the user-written DD
>   routine above. This is necessary for all the MPI_BOTTOM problems.
>
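For the second bullet, the idea (again with an invented name -- no such
routine exists) would be an MPI-provided no-op with a choice buffer
argument, so users no longer have to write and separately compile their
own DD; that also covers the MPI_BOTTOM cases:

  ! HYPOTHETICAL, invented name -- the proposed MPI-provided substitute
  ! for the user-written DD routine:
  CALL MPI_REGISTER_FLUSH(buf, ierr)
  CALL MPI_WAIT(req, status, ierr)    ! or a call that uses MPI_BOTTOM
  CALL MPI_REGISTER_FLUSH(buf, ierr)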
> I would like to add this to MPI-2.2
>
> Is it okay if I open an appropriate ticket with this
> proposal so that we can discuss it Monday-Wednesday in the MPI-2.2
> slots of the meeting?
>
> With this, one of the Fortran problems may be solved better than
> in MPI-2.1.
>
> Best regards
> Rolf
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
--
Jeff Squyres
Cisco Systems