I figured out some things today. It doesn't seem so hard now, but Fortran 2018 is required.

https://github.com/jeffhammond/galaxy-brain (look at gb.F90)
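The rough idea, as a simplified sketch (an illustration of the approach, not the actual gb.F90 code; the module and generic names are made up): one specific procedure per built-in type, an assumed-rank dummy so no per-rank interfaces are needed, the count taken from the array descriptor via size(), and the datatype implied by which specific was selected.

! Illustrative sketch only -- not gb.F90 and not part of any MPI implementation.
module infer_bcast
  use mpi_f08
  implicit none
  private
  public :: bcast

  ! One specific per built-in type; assumed rank covers every array shape.
  interface bcast
    module procedure bcast_dp, bcast_int
  end interface bcast

contains

  subroutine bcast_dp(buf, root, comm)
    double precision, intent(inout) :: buf(..)   ! assumed rank (Fortran 2018)
    integer, intent(in)             :: root
    type(MPI_Comm), intent(in)      :: comm
    ! The count comes from the descriptor; the datatype is implied by this specific.
    call MPI_Bcast(buf, size(buf), MPI_DOUBLE_PRECISION, root, comm)
  end subroutine bcast_dp

  subroutine bcast_int(buf, root, comm)
    integer, intent(inout)          :: buf(..)
    integer, intent(in)             :: root
    type(MPI_Comm), intent(in)      :: comm
    call MPI_Bcast(buf, size(buf), MPI_INTEGER, root, comm)
  end subroutine bcast_int

end module infer_bcast

A call is then just "call bcast(A, 0, MPI_COMM_WORLD)" for A of either type and any rank; covering the remaining built-in types and the rest of the API is mechanical code generation.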
Jeff

On Nov 29, 2021, at 5:30 PM, Jeff Squyres (jsquyres) <jsquyres@cisco.com> wrote:

Jeff H: I'm a little confused by your statement about needing to generate "15 dimensions' worth of interfaces" -- but I've been out of Fortran for quite a while now (and I was never an expert to begin with!).

I thought "type(*), dimension(..)" was used to solve exactly this issue in the mpi_f08 module: you only need to define a single interface whose choice-buffer dummy argument is "type(*), dimension(..)", rather than explicit interfaces for every type/dimension combination. That said, as you and Bill have pointed out, portability can be (is?) an issue when it comes to reading the back-end descriptor.

Are you proposing to bring these inference ideas back to the mpi module? If so, then you have a bit of a problem with the two-choice-buffer APIs (e.g., MPI_Sendrecv), which lead to a combinatorial explosion of interfaces.

Back in 2005, Craig R. and I cited this as a problem for the existing mpi module interfaces. I'm not going to cite the paper here, because re-reading it today I see a bunch of cringe-worthy things that Past Jeff S. wrote that Present Jeff S. would never write today (e.g., "Fortran 90 interfaces" instead of "the Fortran mpi module").

Regardless, we wrote this about a problem with the mpi module:

> Not only must interfaces be defined for arrays of each intrinsic data type, but for each array dimension as well. Depending on the compiler, there may be approximately 15 type/size combinations. Each of these combinations can be paired with up to a maximum of seven array dimensions. With approximately 50 MPI functions that have one choice buffer, this means that 5,250 interface declarations must be specified (i.e., 15 types × 7 dimensions × 50 functions). Note that this does not include the approximately 25 MPI functions with two choice buffers. This leads to an additional 6.8M interface declarations (i.e., (15 × 7 × 25)^2). Currently, no Fortran 90 compilers can compile a module with this many interface functions.

________________________________________
From: mpiwg-fortran <mpiwg-fortran-bounces@lists.mpi-forum.org> on behalf of Jeff Hammond via mpiwg-fortran <mpiwg-fortran@lists.mpi-forum.org>
Sent: Monday, November 29, 2021 9:41 AM
To: Wesley Bland
Cc: Jeff Hammond; MPI-WG Fortran working group
Subject: Re: [MPIWG Fortran] type inference like mpi4py

I am in that WG, but it is already quite busy with at least two other topics.

The primary challenge here is, as Bill noted, whether the Fortran language mechanisms for type introspection are sufficient to make this portable.

Jeff

On Mon, Nov 29, 2021 at 4:16 PM Wesley Bland <work@wesbland.com> wrote:

I'd suggest that this might be a good discussion for the Languages WG (mpiwg-languages@lists.mpi-forum.org). They're working on these sorts of issues for all of the languages (Python, C++, Fortran, etc.).

On Nov 29, 2021, at 4:51 AM, Jeff Hammond via mpiwg-fortran <mpiwg-fortran@lists.mpi-forum.org> wrote:

Recently I have been writing mpi4py and Fortran 2008 MPI code side by side (https://github.com/ParRes/Kernels/pull/592), and the two end up looking quite similar except for 0- versus 1-based indexing and mpi4py's MPI argument deduction.

NumPy arrays behave a lot like Fortran arrays, including in how they carry their size information with them.

I wonder whether it is reasonable to add the same argument inference to MPI Fortran: if I pass an array argument without explicit type or size arguments, both should be inferred.

The first inference is type. There is no reason to ask users to specify MPI_DOUBLE_PRECISION when the argument is of type double precision. Obviously this only works for built-in types, but since that is the common case, why not do it?

The second inference is size. If I pass A(100) to MPI_Bcast, why do I need to say MPI_Bcast(buf=A, count=100, ...)? The dope vector for A already contains the 100.
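Concretely, the difference for the caller would be something like the following (the second, inferred call is hypothetical syntax that does not exist today):

program bcast_example
  use mpi_f08
  implicit none
  double precision :: A(100)
  call MPI_Init()
  A = 0.0d0
  ! Today: the caller restates the count and datatype that A's dope vector already knows.
  call MPI_Bcast(A, 100, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD)
  ! With inference (hypothetical): count and datatype would come from A itself.
  ! call MPI_Bcast(A, root=0, comm=MPI_COMM_WORLD)
  call MPI_Finalize()
end program bcast_example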
The hard part here seems to be needing 15 dimensions' worth of interfaces, but those are trivial to generate.

Are there any hard problems here that I don't realize?

Thanks,

Jeff

PS: code excerpts from the link above. Named arguments would make the Fortran look even more similar.

for phase in range(0, np):
    recv_from = (me + phase) % np
    send_to = (me - phase + np) % np
    lo = block_order * send_to
    hi = block_order * (send_to + 1)
    comm.Sendrecv(sendbuf=A[lo:hi, :], dest=send_to, sendtag=phase,
                  recvbuf=T, source=recv_from, recvtag=phase)
    lo = block_order * recv_from
    hi = block_order * (recv_from + 1)
    B[lo:hi, :] += T.T

do q = 0, np-1
  recv_from = mod(me + q, np)
  send_to   = mod(me - q + np, np)
  lo = block_order * send_to + 1
  hi = block_order * (send_to + 1)
  call MPI_Sendrecv(A(:, lo:hi), block_order*block_order, MPI_DOUBLE_PRECISION, &
                    send_to, q,                                                  &
                    T, block_order*block_order, MPI_DOUBLE_PRECISION,            &
                    recv_from, q, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
  lo = block_order * recv_from + 1
  hi = block_order * (recv_from + 1)
  B(:, lo:hi) = B(:, lo:hi) + transpose(T)
end do
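For comparison, here is the Fortran call from the excerpt above rewritten with mpi_f08 keyword arguments (using the dummy names the MPI standard defines for MPI_Sendrecv and the same variables as the excerpt), followed by a hypothetical version with the proposed count/datatype inference:

call MPI_Sendrecv(sendbuf=A(:, lo:hi), sendcount=block_order*block_order,     &
                  sendtype=MPI_DOUBLE_PRECISION, dest=send_to, sendtag=q,     &
                  recvbuf=T, recvcount=block_order*block_order,               &
                  recvtype=MPI_DOUBLE_PRECISION, source=recv_from, recvtag=q, &
                  comm=MPI_COMM_WORLD, status=MPI_STATUS_IGNORE)

! Hypothetical, with inference (not valid MPI today):
! call MPI_Sendrecv(sendbuf=A(:, lo:hi), dest=send_to, sendtag=q, &
!                   recvbuf=T, source=recv_from, recvtag=q, comm=MPI_COMM_WORLD)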