[MPIWG Fortran] MPI_ARECV: Fortran pointers

Rolf Rabenseifner rabenseifner at hlrs.de
Wed Dec 11 10:52:51 CST 2013


Jeff, 

Here are the details:

The buffer argument should be named and defined in C, mpi_f08, and the
old Fortran bindings exactly as baseptr is in
 - MPI_ALLOC_MEM
 - MPI_WIN_ALLOCATE
 - MPI_WIN_ALLOCATE_SHARED
 - MPI_WIN_SHARED_QUERY

All four routines use the same memory allocation scheme
and the same wording; see
MPI-3.0 page 407, lines 18, 22, 30, 36, some words of lines 37-43,
the mandatory lines 44-48, and page 408, lines 1-15.
Additionally, the first sentence on page 408, lines 23-24, is
mandatory, but only "for MPI_ALLOC_MEM".

Best regards
Rolf

----- Original Message -----
> From: "Jeff Squyres (jsquyres)" <jsquyres at cisco.com>
> To: "MPI Fortran WG" <mpiwg-fortran at lists.mpi-forum.org>
> Sent: Wednesday, December 11, 2013 5:09:57 PM
> Subject: [MPIWG Fortran] MPI_ARECV: Fortran pointers
> 
> I have a question about Fortran pointers and an MPI Forum proposal
> about "allocate receive".  Here are the slides from the MPI_ARECV
> proposal, from the Madrid meeting:
> 
>     http://meetings.mpi-forum.org/secretary/2013/09/slides/jsquyres-arecv.pdf
> 
> The issue is this: there is a proposal for something like this (in
> C):
> 
>     MPI_Iarecv(source, tag, comm, &request);
> 
> The application then tests/waits on the request, and when the request
> is complete, it means that the message has been received and MPI has
> allocated a buffer for it (vs. the user providing the buffer).  You
> then make another call to get the [contiguous] buffer from the MPI
> implementation (again, in C):
> 
>     char *buffer;
>     MPI_Status_get_buffer(status, &buffer);
> 
> When the app is done with the buffer, the app gives it back to MPI
> via MPI_FREE_MEM:
> 
>     MPI_Free_mem(buffer);
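> 
> My naive guess at what the mpi_f08 side could look like is below --
> entirely hypothetical, since neither MPI_Iarecv nor
> MPI_Status_get_buffer exists yet, and I may well have the pointer
> handling wrong:
> 
>     USE mpi_f08
>     USE, INTRINSIC :: ISO_C_BINDING
>     TYPE(C_PTR) :: p
>     CHARACTER, DIMENSION(:), POINTER :: buffer
>     TYPE(MPI_Request) :: request
>     TYPE(MPI_Status) :: status
>     TYPE(MPI_Comm) :: comm
>     INTEGER :: source, tag, count
> 
>     CALL MPI_Iarecv(source, tag, comm, request)    ! hypothetical
>     CALL MPI_Wait(request, status)
>     CALL MPI_Status_get_buffer(status, p)          ! hypothetical; p acts like baseptr
>     CALL MPI_Get_count(status, MPI_CHARACTER, count)
>     CALL C_F_POINTER(p, buffer, (/count/))         ! buffer(1:count) is now usable
>     ! ... consume the message ...
>     CALL MPI_Free_mem(buffer)                      ! give the memory back to MPI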
> 
> So my questions to the Fortran Brain Trust (FBT) are:
> 
> 1. Is this do-able in Fortran?  I think it is, but I am ignorant of
> Fortran pointer issues.
> 
> 2. Can we make an interface in Fortran that is natural / usable by
> normal/average Fortran programmers?
> 
> 3. I'm guessing the above two questions are in the context of the
> mpi_f08 module -- I don't really care about mpif.h, but I *might*
> care about the mpi module...?  (this has deeper implications for
> if/when we want to deprecate mpif.h and/or the mpi module...)
> 
> I'm at the Forum this week.  Does anyone have time to have a call to
> discuss this next week, perchance?  I could set up a Doodle to find a
> time.
> 
> --
> Jeff Squyres
> jsquyres at cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> _______________________________________________
> mpiwg-fortran mailing list
> mpiwg-fortran at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-fortran
> 

-- 
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner at hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)
