[MPIWG Fortran] type inference like mpi4py
Jeff Hammond
jeff.science at gmail.com
Mon Nov 29 08:41:06 CST 2021
I am in that WG, but it is already quite busy with at least two other
topics.
The primary challenge here is, as Bill noted, whether the Fortran language
mechanisms for type introspection are sufficient to make this portable.
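
For reference, the introspection route is roughly class(*) plus SELECT
TYPE, which can only enumerate a fixed list of intrinsic types. A
minimal sketch (my names, nothing standard):

  subroutine infer_datatype(buf, dt)
    use mpi_f08
    implicit none
    class(*), intent(in) :: buf(:)
    type(MPI_Datatype), intent(out) :: dt
    select type (buf)
    type is (integer)
      dt = MPI_INTEGER
    type is (double precision)
      dt = MPI_DOUBLE_PRECISION
    type is (complex)
      dt = MPI_COMPLEX
    class default
      dt = MPI_DATATYPE_NULL  ! unknown type: the caller must supply it
    end select
  end subroutine infer_datatype
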
Jeff
On Mon, Nov 29, 2021 at 4:16 PM Wesley Bland <work at wesbland.com> wrote:
> I’d suggest that this might be a good discussion for the Languages WG (
> mpiwg-languages at lists.mpi-forum.org). They’re working on these sorts of
> issues related to any language (Python, C++, Fortran, etc.).
>
> On Nov 29, 2021, at 4:51 AM, Jeff Hammond via mpiwg-fortran <
> mpiwg-fortran at lists.mpi-forum.org> wrote:
>
> Recently, I have been writing mpi4py and Fortran 2008 MPI code (
> https://github.com/ParRes/Kernels/pull/592), which ends up looking quite
> similar except for 0- versus 1-based indexing and mpi4py's deduction of
> the MPI type and count arguments.
>
> NumPy arrays behave a lot like Fortran arrays, including in how they
> carry their own size information.
>
> I wonder if it is reasonable to add this same argument inference to MPI
> Fortran. If I pass an array argument with no type or size information, it
> should be inferred.
>
> The first inference is type. There is no reason to ask users to specify
> MPI_DOUBLE_PRECISION when the argument is of type double precision.
> Obviously, this only works for built-in types, but as that is the common
> case, why not do it?
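>
> A sketch of how a wrapper library could do the type part today with a
> generic interface (MY_Bcast is my hypothetical name, not a proposal for
> the real API):
>
>   module my_mpi
>     use mpi_f08
>     implicit none
>     ! the compiler resolves the specific procedure from the buffer's
>     ! declared type, so the user never passes a datatype handle
>     interface MY_Bcast
>       module procedure MY_Bcast_dp, MY_Bcast_int
>     end interface MY_Bcast
>   contains
>     subroutine MY_Bcast_dp(buf, count, root, comm)
>       double precision, intent(inout) :: buf(*)
>       integer, intent(in) :: count, root
>       type(MPI_Comm), intent(in) :: comm
>       call MPI_Bcast(buf, count, MPI_DOUBLE_PRECISION, root, comm)
>     end subroutine MY_Bcast_dp
>     subroutine MY_Bcast_int(buf, count, root, comm)
>       integer, intent(inout) :: buf(*)
>       integer, intent(in) :: count, root
>       type(MPI_Comm), intent(in) :: comm
>       call MPI_Bcast(buf, count, MPI_INTEGER, root, comm)
>     end subroutine MY_Bcast_int
>   end module my_mpi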
>
> The second inference is size. If I pass A(100) to MPI_Bcast, why do I
> need to say MPI_Bcast(buf=A,count=100,...)? The dope vector for A contains
> the 100 already.
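>
> And for the count: with an assumed-shape dummy, the wrapper sees the
> dope vector, so size() can supply it. Again a hypothetical sketch:
>
>   subroutine MY_Bcast_auto(buf, root, comm)
>     use mpi_f08
>     implicit none
>     double precision, intent(inout) :: buf(:)  ! descriptor carries the extent
>     integer, intent(in) :: root
>     type(MPI_Comm), intent(in) :: comm
>     call MPI_Bcast(buf, size(buf), MPI_DOUBLE_PRECISION, root, comm)
>   end subroutine MY_Bcast_auto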
>
> The hard part here seems to be needing interfaces for all fifteen array
> ranks, but those are trivial to generate, as sketched below.
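>
> For example, the rank-2 specific of the hypothetical wrapper above
> differs from rank-1 only in the dummy's declared shape, so a short
> script can stamp out all fifteen:
>
>   subroutine MY_Bcast_auto_r2(buf, root, comm)
>     use mpi_f08
>     implicit none
>     double precision, intent(inout) :: buf(:,:)
>     integer, intent(in) :: root
>     type(MPI_Comm), intent(in) :: comm
>     call MPI_Bcast(buf, size(buf), MPI_DOUBLE_PRECISION, root, comm)
>   end subroutine MY_Bcast_auto_r2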
>
> Are there any hard problems here that I am not seeing?
>
> Thanks,
>
> Jeff
>
> PS: code excerpts from the link above. Named arguments would make the
> Fortran even more similar (see the keyword sketch after the Fortran
> excerpt).
>
> for phase in range(0,np):
>     recv_from = (me + phase     ) % np
>     send_to   = (me - phase + np) % np
>     lo = block_order * send_to
>     hi = block_order * (send_to+1)
>     comm.Sendrecv(sendbuf=A[lo:hi,:], dest=send_to, sendtag=phase,
>                   recvbuf=T, source=recv_from, recvtag=phase)
>     lo = block_order * recv_from
>     hi = block_order * (recv_from+1)
>     B[lo:hi,:] += T.T
>
> do q=0,np-1
>   recv_from = mod( (me + q     ), np)
>   send_to   = mod( (me - q + np), np)
>   lo = block_order * send_to + 1
>   hi = block_order * (send_to+1)
>   call MPI_Sendrecv(A(:,lo:hi), block_order*block_order, MPI_DOUBLE_PRECISION, &
>                     send_to, q,                                                &
>                     T, block_order*block_order, MPI_DOUBLE_PRECISION,          &
>                     recv_from, q, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
>   lo = block_order * recv_from + 1
>   hi = block_order * (recv_from+1)
>   B(:,lo:hi) = B(:,lo:hi) + transpose(T)
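>
> The keyword sketch mentioned above: the mpi_f08 dummy names are
> specified by the standard, so (assuming the implementation's module
> follows them) the Sendrecv could be written as
>
>   call MPI_Sendrecv(sendbuf=A(:,lo:hi), sendcount=block_order*block_order, &
>                     sendtype=MPI_DOUBLE_PRECISION, dest=send_to, sendtag=q, &
>                     recvbuf=T, recvcount=block_order*block_order,           &
>                     recvtype=MPI_DOUBLE_PRECISION, source=recv_from,        &
>                     recvtag=q, comm=MPI_COMM_WORLD, status=MPI_STATUS_IGNORE)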
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/