[MPIWG Fortran] type inference like mpi4py
William Gropp
wgropp at illinois.edu
Mon Nov 29 08:24:13 CST 2021
The issue in the past was that Fortran did not specify sufficient information to implement this; it required special knowledge of how each Fortran compiler implemented arrays. I admit to being out of date on the current Fortran standard. Is this now possible within standard Fortran? Or would we have to create a new requirement that an MPI implementation's Fortran interface know implementation details of the Fortran compiler(s) used? That might be OK, but it would be a big change and something that we’d need to be explicit about.
Bill
William Gropp
Director, NCSA
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign
IEEE-CS President-Elect
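[Editor's note: for comparison, the inference mpi4py performs relies entirely on NumPy's array metadata (dtype and size), not on compiler internals. A minimal sketch of that idea follows; the dictionary and function names are illustrative, not mpi4py's actual internals.]

```python
import numpy as np

# Illustrative mapping from NumPy dtypes to MPI datatype names.
# mpi4py's real mapping is more complete and yields MPI handle objects.
_NUMPY_TO_MPI = {
    np.dtype(np.float64): "MPI_DOUBLE_PRECISION",
    np.dtype(np.float32): "MPI_REAL",
    np.dtype(np.int32): "MPI_INTEGER",
    np.dtype(np.complex128): "MPI_DOUBLE_COMPLEX",
}

def infer_mpi_args(arr):
    """Return the (buffer, count, datatype) triple an MPI call needs,
    derived entirely from the array's own metadata."""
    return arr, arr.size, _NUMPY_TO_MPI[arr.dtype]

A = np.zeros((10, 10))  # double precision, 100 elements
buf, count, dtype = infer_mpi_args(A)
```

The open question in the Fortran case is whether the standard now exposes equivalent metadata to the MPI library in a portable way, rather than leaving it in compiler-specific dope vectors.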
> On Nov 29, 2021, at 4:51 AM, Jeff Hammond via mpiwg-fortran <mpiwg-fortran at lists.mpi-forum.org> wrote:
>
> Recently, I have been writing mpi4py and Fortran 2008 MPI code (https://github.com/ParRes/Kernels/pull/592), which ends up looking quite similar except for 0- vs 1-based indexing and MPI argument deduction.
>
> Numpy arrays behave a lot like Fortran arrays, including how they store size information in them.
>
> I wonder if it is reasonable to add this same argument inference to MPI Fortran. If I pass an array argument with no type or size information, it should be inferred.
>
> The first inference is type. There is no reason to ask users to specify MPI_DOUBLE_PRECISION when the argument is of type double precision. Obviously, this only works for built-in types, but as that is the common case, why not do it?
>
> The second inference is size. If I pass A(100) to MPI_Bcast, why do I need to say MPI_Bcast(buf=A,count=100,...)? The dope vector for A contains the 100 already.
>
> The hard part here seems to be needing 15 dimensions worth of interfaces, but those are trivial to generate.
>
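[Editor's note: the claim that the per-rank interfaces are trivial to generate can be sketched with a small generator. The subroutine names and the elided body are placeholders, not a real MPI binding; note also that Fortran 2018 assumed-rank `dimension(..)` dummy arguments would avoid per-rank specifics entirely, though type dispatch would still need per-type resolution.]

```python
# Sketch: emit the 15 rank-specific specifics a generic interface for
# a double-precision MPI_Bcast would need (rank 1..15 is the Fortran
# 2008 maximum). Names like mpi_bcast_dp_rN are invented here.
def bcast_interface(rank):
    dims = ",".join([":"] * rank)  # e.g. ":,:" for rank 2
    return (
        f"subroutine mpi_bcast_dp_r{rank}(buf, root, comm, ierror)\n"
        f"    double precision, intent(inout) :: buf({dims})\n"
        f"    ! ... implementation elided ...\n"
        f"end subroutine\n"
    )

specifics = [bcast_interface(r) for r in range(1, 16)]
```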
> Are there any hard problems here that I don't realize?
>
> Thanks,
>
> Jeff
>
> PS code excerpts from the link above. Named arguments would make Fortran even more similar.
>
> for phase in range(0,np):
>     recv_from = (me + phase     ) % np
>     send_to   = (me - phase + np) % np
>     lo = block_order * send_to
>     hi = block_order * (send_to+1)
>     comm.Sendrecv(sendbuf=A[lo:hi,:],dest=send_to,sendtag=phase,recvbuf=T,source=recv_from,recvtag=phase)
>     lo = block_order * recv_from
>     hi = block_order * (recv_from+1)
>     B[lo:hi,:] += T.T
>
> do q=0,np-1
>     recv_from = mod( (me + q     ), np)
>     send_to   = mod( (me - q + np), np)
>     lo = block_order * send_to + 1
>     hi = block_order * (send_to+1)
>     call MPI_Sendrecv(A(:,lo:hi), block_order*block_order, MPI_DOUBLE_PRECISION, &
>                       send_to, q, &
>                       T, block_order*block_order, MPI_DOUBLE_PRECISION, &
>                       recv_from, q, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
>     lo = block_order * recv_from + 1
>     hi = block_order * (recv_from+1)
>     B(:,lo:hi) = B(:,lo:hi) + transpose(T)
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
> _______________________________________________
> mpiwg-fortran mailing list
> mpiwg-fortran at lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-fortran