[MPIWG Fortran] type inference like mpi4py

Jeff Hammond jeff.science at gmail.com
Mon Nov 29 10:17:00 CST 2021


I figured out some things today. It doesn’t seem so hard now. But Fortran 2018 is required. 

https://github.com/jeffhammond/galaxy-brain (look at gb.F90)
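
The essence, as a sketch (illustrative names, not the actual gb.F90 code; only two of the roughly 15 types shown):

  module inferred_mpi
    use mpi_f08
    implicit none
    ! The generic name resolves the MPI datatype at compile time;
    ! assumed-rank dummies (Fortran 2018) collapse all ranks into one specific per type.
    interface bcast
      module procedure bcast_dp, bcast_int
    end interface
  contains
    subroutine bcast_dp(buf, root, comm)
      double precision, intent(inout) :: buf(..)   ! assumed rank: any array rank
      integer, intent(in) :: root
      type(MPI_Comm), intent(in) :: comm
      ! count and datatype are inferred from the argument itself
      call MPI_Bcast(buf, size(buf), MPI_DOUBLE_PRECISION, root, comm)
    end subroutine bcast_dp
    subroutine bcast_int(buf, root, comm)
      integer, intent(inout) :: buf(..)
      integer, intent(in) :: root
      type(MPI_Comm), intent(in) :: comm
      call MPI_Bcast(buf, size(buf), MPI_INTEGER, root, comm)
    end subroutine bcast_int
  end module inferred_mpi

After that, call bcast(A, 0, MPI_COMM_WORLD) works for a double precision or integer array of any rank.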

Jeff


> On Nov 29, 2021, at 5:30 PM, Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote:
> 
> Jeff H: I'm a little confused by your statement about needing to generate "15 dimensions' worth of interfaces" -- but I've been out of Fortran for quite a while now (and I was never an expert to begin with!).
> 
> I thought that "type(*), dimension(..)" was introduced to solve exactly this issue in the mpi_f08 module.  I.e., you only need to define a single interface with a choice-buffer dummy argument of type "type(*), dimension(..)", rather than explicit interfaces for every type / dimension combination.  That being said, as you and Bill have pointed out, portability can be (is?) an issue in terms of reading the back-end descriptor.
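> 
> For reference, that single interface in mpi_f08 looks roughly like this (paraphrasing the standard's language binding):
> 
>   MPI_Bcast(buffer, count, datatype, root, comm, ierror)
>       TYPE(*), DIMENSION(..) :: buffer
>       INTEGER, INTENT(IN) :: count, root
>       TYPE(MPI_Datatype), INTENT(IN) :: datatype
>       TYPE(MPI_Comm), INTENT(IN) :: comm
>       INTEGER, OPTIONAL, INTENT(OUT) :: ierror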
> 
> Are you proposing to bring these implicit ideas back to the mpi module?  If so, then you have a bit of a problem with the two-choice-buffer APIs (e.g., MPI_Sendrecv) that leads to a combinatorial explosion of interfaces.
> 
> Back in 2005, Craig R. and I cited this as a problem for the existing mpi module interfaces.  I'm not going to cite the paper here, because re-reading the paper today, I see a bunch of cringe-worthy things that Past Jeff S. wrote that Present Jeff S. would never write today (!) (e.g., "Fortran 90 interfaces" instead of "the Fortran mpi module").
> 
> Regardless, we wrote this about a problem with the mpi module:
> 
>> Not only must interfaces be defined for arrays of each intrinsic data type, but for each array dimension as well. Depending on the compiler, there may be approximately 15 type / size combinations.  Each of these combinations can be paired with up to a maximum of seven array dimensions. With approximately 50 MPI functions that have one choice buffer, this means that 5,250 interface declarations must be specified (i.e., 15 types × 7 dimensions × 50 functions). Note that this does not include the approximately 25 MPI functions with two choice buffers. This leads to an additional 6.8M interface declarations (i.e., (15 × 7 × 25)^2). Currently, no Fortran 90 compilers can compile a module with this many interface functions.
> 
> ________________________________________
> From: mpiwg-fortran <mpiwg-fortran-bounces at lists.mpi-forum.org> on behalf of Jeff Hammond via mpiwg-fortran <mpiwg-fortran at lists.mpi-forum.org>
> Sent: Monday, November 29, 2021 9:41 AM
> To: Wesley Bland
> Cc: Jeff Hammond; MPI-WG Fortran working group
> Subject: Re: [MPIWG Fortran] type inference like mpi4py
> 
> I am in that WG but it is quite busy with at least two different topics already.
> 
> The primary challenge here is, as Bill noted, whether the Fortran language mechanisms for type introspection are sufficient to make this portable.
> 
> Jeff
> 
> On Mon, Nov 29, 2021 at 4:16 PM Wesley Bland <work at wesbland.com> wrote:
> I’d suggest that this might be a good discussion for the Languages WG (mpiwg-languages at lists.mpi-forum.org).  They’re working on these sorts of issues across all the language bindings (Python, C++, Fortran, etc.).
> 
> On Nov 29, 2021, at 4:51 AM, Jeff Hammond via mpiwg-fortran <mpiwg-fortran at lists.mpi-forum.org> wrote:
> 
> Recently, I have been writing mpi4py and Fortran 2008 MPI code (https://github.com/ParRes/Kernels/pull/592), which ends up looking quite similar except for 0- versus 1-based indexing and MPI argument deduction.
> 
> NumPy arrays behave a lot like Fortran arrays, including how they carry their own size information.
> 
> I wonder if it is reasonable to add this same argument inference to the MPI Fortran bindings.  If I pass an array argument with no explicit type or size information, both should be inferred.
> 
> The first inference is type.  There is no reason to ask users to specify MPI_DOUBLE_PRECISION when the argument is of type double precision.  Obviously, this only works for built-in types, but as that is the common case, why not do it?
> 
> The second inference is size.  If I pass A(100) to MPI_Bcast, why do I need to say MPI_Bcast(buf=A,count=100,...)?  The dope vector for A contains the 100 already.
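> 
> A two-line illustration that the callee can already recover the extent (plain Fortran, nothing MPI-specific):
> 
>   subroutine f(a)
>     double precision :: a(:)   ! assumed shape: the dope vector travels with the argument
>     print *, size(a)           ! prints 100 when the caller passes A(100)
>   end subroutine f
> 
> A binding with an assumed-shape or assumed-rank buffer could default count to size(buf) the same way.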
> 
> The hard part here seems to be needing 15 dimensions' worth of interfaces (Fortran supports up to rank 15), but those are trivial to generate, as sketched below.
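> 
> The generated specifics would differ only in the declared rank of the buffer, e.g. (an illustrative sketch, assuming an mpi_f08-style module is in scope):
> 
>   subroutine bcast_dp_2(buf, root, comm)
>     double precision, intent(inout) :: buf(:,:)   ! rank 2; sibling routines cover ranks 1..15
>     integer, intent(in) :: root
>     type(MPI_Comm), intent(in) :: comm
>     call MPI_Bcast(buf, size(buf), MPI_DOUBLE_PRECISION, root, comm)
>   end subroutine bcast_dp_2
> 
> which is mechanical enough for a short script or template to emit.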
> 
> Are there any hard problems here that I don't realize?
> 
> Thanks,
> 
> Jeff
> 
> PS: code excerpts from the link above.  Named arguments would make the Fortran even more similar; see the hypothetical call after the excerpts.
> 
> for phase in range(0, np):
>     recv_from = (me + phase) % np
>     send_to = (me - phase + np) % np
>     lo = block_order * send_to
>     hi = block_order * (send_to + 1)
>     comm.Sendrecv(sendbuf=A[lo:hi, :], dest=send_to, sendtag=phase,
>                   recvbuf=T, source=recv_from, recvtag=phase)
>     lo = block_order * recv_from
>     hi = block_order * (recv_from + 1)
>     B[lo:hi, :] += T.T
> 
> do q = 0, np-1
>   recv_from = mod( (me + q     ), np)
>   send_to   = mod( (me - q + np), np)
>   lo = block_order * send_to + 1
>   hi = block_order * (send_to + 1)
>   call MPI_Sendrecv(A(:,lo:hi), block_order*block_order, MPI_DOUBLE_PRECISION, &
>                     send_to, q,                                                &
>                     T, block_order*block_order, MPI_DOUBLE_PRECISION,          &
>                     recv_from, q, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
>   lo = block_order * recv_from + 1
>   hi = block_order * (recv_from + 1)
>   B(:,lo:hi) = B(:,lo:hi) + transpose(T)
> end do
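> 
> For comparison, with the proposed inference plus keyword arguments, the call above could shrink to something like this (hypothetical API, not valid today):
> 
>   call MPI_Sendrecv(sendbuf=A(:,lo:hi), dest=send_to, sendtag=q, &
>                     recvbuf=T, source=recv_from, recvtag=q,      &
>                     comm=MPI_COMM_WORLD)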
> 
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/