<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">I’d suggest that this might be a good discussion for the Languages WG (<a href="mailto:mpiwg-languages@lists.mpi-forum.org" class="">mpiwg-languages@lists.mpi-forum.org</a>). They’re working on these sorts of issues for all languages (Python, C++, Fortran, etc.).<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Nov 29, 2021, at 4:51 AM, Jeff Hammond via mpiwg-fortran <<a href="mailto:mpiwg-fortran@lists.mpi-forum.org" class="">mpiwg-fortran@lists.mpi-forum.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><font face="monospace" class="">Recently, I have been writing mpi4py and Fortran 2008 MPI code (<a href="https://github.com/ParRes/Kernels/pull/592" class="">https://github.com/ParRes/Kernels/pull/592</a>), which ends up looking quite similar except for 0- vs. 1-based indexing and MPI argument deduction.</font><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">Numpy arrays behave a lot like Fortran arrays, including how they carry their own size information.<br class=""></font><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">I wonder if it is reasonable to add this same argument inference to MPI Fortran. If I pass an array argument with no type or size information, it should be inferred.</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">The first inference is type. There is no reason to ask users to specify MPI_DOUBLE_PRECISION when the argument is of type double precision. 
Obviously, this only works for built-in types, but as that is the common case, why not do it?</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">The second inference is size. If I pass A(100) to MPI_Bcast, why do I need to say MPI_Bcast(buf=A,count=100,...)? The dope vector for A contains the 100 already.</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">The hard part here seems to be needing 15 dimensions' worth of interfaces, but those are trivial to generate.</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">Are there any hard problems here that I'm not seeing?</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">Thanks,</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">Jeff</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><font face="monospace" class="">PS: code excerpts from the link above. 
Named arguments would make Fortran even more similar.</font></div><div class=""><font face="monospace" class=""><br class="">for phase in range(0,np):<br class=""> recv_from = (me + phase ) % np<br class=""> send_to = (me - phase + np) % np<br class=""> lo = block_order * send_to<br class=""> hi = block_order * (send_to+1)<br class=""> comm.Sendrecv(sendbuf=A[lo:hi,:],dest=send_to,sendtag=phase,recvbuf=T,source=recv_from,recvtag=phase)<br class=""> lo = block_order * recv_from<br class=""> hi = block_order * (recv_from+1)<br class=""> B[lo:hi,:] += T.T</font><div class=""><font face="monospace" class=""><br class=""></font></div><div class=""><div style="margin: 0cm;" class=""><font face="monospace" class="">do q=0,np-1</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> recv_from =
mod( (me + q ), np)</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> send_to = mod( (me - q + np), np)</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> lo =
block_order * send_to + 1</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> hi =
block_order * (send_to+1)</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> call
MPI_Sendrecv(A(:,lo:hi), block_order*block_order, MPI_DOUBLE_PRECISION, &</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> send_to,q, &</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> T,block_order*block_order, MPI_DOUBLE_PRECISION, &</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> recv_from, q, MPI_COMM_WORLD, MPI_STATUS_IGNORE)</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> lo =
block_order * recv_from + 1</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> hi =
block_order * (recv_from+1)</font></div><div style="margin: 0cm;" class=""><font face="monospace" class=""> B(:,lo:hi) =
B(:,lo:hi) + transpose(T)</font></div><div class=""><font face="monospace" class=""><br class=""></font></div><font face="monospace" class="">-- <br class=""></font><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><font face="monospace" class="">Jeff Hammond<br class=""><a href="mailto:jeff.science@gmail.com" target="_blank" class="">jeff.science@gmail.com</a><br class=""><a href="http://jeffhammond.github.io/" target="_blank" class="">http://jeffhammond.github.io/</a></font></div></div></div></div></div>
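[Editor's note] A minimal sketch of the inference the message proposes, in plain Python so it runs without an MPI installation. All names here (infer_args, the typecode-to-datatype map) are hypothetical illustrations, not an actual MPI or mpi4py API; the stdlib array module stands in for a Fortran dope vector, which likewise records element type and extent. In Fortran itself, the same information would come from the descriptor of an assumed-shape dummy argument, shrinking the call to something like MPI_Bcast(A, root, comm).

```python
# Sketch: deduce the (count, datatype) arguments that an inferring
# MPI_Bcast would no longer require the caller to spell out.
from array import array

# Built-in element types only, mirroring the "common case" restriction
# in the proposal. Datatype names are MPI's Fortran built-ins.
_TYPEMAP = {
    'd': 'MPI_DOUBLE_PRECISION',
    'f': 'MPI_REAL',
    'i': 'MPI_INTEGER',
}

def infer_args(buf):
    """Return the (count, datatype) pair deduced from the buffer's metadata."""
    try:
        datatype = _TYPEMAP[buf.typecode]
    except KeyError:
        raise TypeError(f"no built-in MPI datatype for typecode {buf.typecode!r}")
    return len(buf), datatype

A = array('d', [0.0] * 100)   # like "double precision :: A(100)"
count, datatype = infer_args(A)
print(count, datatype)        # 100 MPI_DOUBLE_PRECISION
```

This is essentially what mpi4py does when handed a numpy array: the buffer's dtype and shape travel with it, so the bindings can fill in count and datatype themselves, with derived types left to the explicit-argument form.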
_______________________________________________<br class="">mpiwg-fortran mailing list<br class=""><a href="mailto:mpiwg-fortran@lists.mpi-forum.org" class="">mpiwg-fortran@lists.mpi-forum.org</a><br class="">https://lists.mpi-forum.org/mailman/listinfo/mpiwg-fortran<br class=""></div></blockquote></div><br class=""></body></html>