<div dir="ltr"><div>I am in that WG but it is quite busy with at least two different topics already.</div><div><br></div><div>The primary challenge here is, as Bill noted, whether the Fortran language mechanisms for type introspection are sufficient to make this portable.</div><div><br></div><div>Jeff</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Nov 29, 2021 at 4:16 PM Wesley Bland <<a href="mailto:work@wesbland.com">work@wesbland.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;">I’d suggest that this might be a good discussion for the Languages WG (<a href="mailto:mpiwg-languages@lists.mpi-forum.org" target="_blank">mpiwg-languages@lists.mpi-forum.org</a>). They’re working on these sorts of issues related to any language (Python, C++, Fortran, etc.).<br><div><br><blockquote type="cite"><div>On Nov 29, 2021, at 4:51 AM, Jeff Hammond via mpiwg-fortran <<a href="mailto:mpiwg-fortran@lists.mpi-forum.org" target="_blank">mpiwg-fortran@lists.mpi-forum.org</a>> wrote:</div><br><div><div dir="ltr"><font face="monospace">Recently, I have been writing mpi4py and Fortran 2008 MPI code (<a href="https://github.com/ParRes/Kernels/pull/592" target="_blank">https://github.com/ParRes/Kernels/pull/592</a>), which ends up looking quite similar except for 0-1 base indexing and MPI argument deduction.</font><div><font face="monospace"><br></font></div><div><font face="monospace">Numpy arrays behave a lot like Fortran arrays, including how they store size information in them.<br></font><div><font face="monospace"><br></font></div><div><font face="monospace">I wonder if it is reasonable to add this same argument inference to MPI Fortran. 
If I pass an array argument with no type or size information, it should be inferred.</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">The first inference is type. There is no reason to ask users to specify MPI_DOUBLE_PRECISION when the argument is of type double precision. Obviously, this only works for built-in types, but as that is the common case, why not do it?</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">The second inference is size. If I pass A(100) to MPI_Bcast, why do I need to say MPI_Bcast(buf=A,count=100,...)? The dope vector for A contains the 100 already.</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">The hard part here seems to be needing 15 dimensions worth of interfaces, but those are trivial to generate.</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">Are there any hard problems here that I don't realize?</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">Thanks,</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">Jeff</font></div><div><font face="monospace"><br></font></div><div><font face="monospace">PS code excerpts from the link above. Named arguments would make Fortran even more similar.</font></div><div><font face="monospace"><br>for phase in range(0,np):<br> recv_from = (me + phase ) % np<br> send_to = (me - phase + np) % np<br> lo = block_order * send_to<br> hi = block_order * (send_to+1)<br> comm.Sendrecv(sendbuf=A[lo:hi,:],dest=send_to,sendtag=phase,recvbuf=T,source=recv_from,recvtag=phase)<br> lo = block_order * recv_from<br> hi = block_order * (recv_from+1)<br> B[lo:hi,:] += T.T</font><div><font face="monospace"><br></font></div><div><div style="margin:0cm"><font face="monospace">do q=0,np-1</font></div><div style="margin:0cm"><font face="monospace"> recv_from =
mod( (me + q ), np)</font></div><div style="margin:0cm"><font face="monospace"> send_to = mod( (me - q + np), np)</font></div><div style="margin:0cm"><font face="monospace"> lo =
block_order * send_to + 1</font></div><div style="margin:0cm"><font face="monospace"> hi =
block_order * (send_to+1)</font></div><div style="margin:0cm"><font face="monospace"> call
MPI_Sendrecv(A(:,lo:hi), block_order*block_order, MPI_DOUBLE_PRECISION, &</font></div><div style="margin:0cm"><font face="monospace"> send_to,q, &</font></div><div style="margin:0cm"><font face="monospace"> T,block_order*block_order, MPI_DOUBLE_PRECISION, &</font></div><div style="margin:0cm"><font face="monospace"> recv_from, q, MPI_COMM_WORLD, MPI_STATUS_IGNORE)</font></div><div style="margin:0cm"><font face="monospace"> lo =
block_order * recv_from + 1</font></div><div style="margin:0cm"><font face="monospace"> hi =
block_order * (recv_from+1)</font></div><div style="margin:0cm"><font face="monospace"> B(:,lo:hi) =
B(:,lo:hi) + transpose(T)</font></div><div style="margin:0cm"><font face="monospace">end do</font></div><div><font face="monospace"><br></font></div><font face="monospace">-- <br></font><div dir="ltr"><font face="monospace">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></font></div></div></div></div></div>
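[Editor's note: the inference described above can be sketched in Python, since a NumPy array carries the same descriptor information (element type and extents) that a Fortran dope vector does. The mapping table and helper name below are hypothetical, for illustration only; mpi4py performs this kind of inference internally when it is handed a NumPy buffer.]

```python
import numpy as np

# Hypothetical mapping from array element types to MPI datatype names.
# Only built-in types appear here, matching the "common case" argument above.
MPI_TYPE_OF = {
    np.dtype(np.float64): "MPI_DOUBLE_PRECISION",
    np.dtype(np.float32): "MPI_REAL",
    np.dtype(np.int32):   "MPI_INTEGER",
}

def infer_mpi_args(buf):
    # Both pieces of information come from the array descriptor itself:
    # the count from the extents, the datatype from the element type.
    # The caller never has to pass either one explicitly.
    return buf.size, MPI_TYPE_OF[buf.dtype]

A = np.zeros(100)  # like Fortran: double precision :: A(100)
count, datatype = infer_mpi_args(A)
print(count, datatype)  # 100 MPI_DOUBLE_PRECISION
```

A Fortran analogue would dispatch on type via a generic interface and take the count from an assumed-shape (or assumed-rank) dummy with `size(buf)`, which is where the question of whether the language's introspection facilities suffice comes in.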
_______________________________________________<br>mpiwg-fortran mailing list<br><a href="mailto:mpiwg-fortran@lists.mpi-forum.org" target="_blank">mpiwg-fortran@lists.mpi-forum.org</a><br><a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-fortran" target="_blank">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-fortran</a><br></div></blockquote></div><br></div></blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div></div>