<div dir="ltr">I agree this is a good temporary fix, but it means that all the machine learning types who want FP16 support now need to learn what Fortran is and that it uses bytes instead of bits when specifying type width. It also means that implementations don't have to support this type if the platform doesn't have a Fortran compiler, which is a nontrivial risk in the machine learning world. Of course, it's also quite easy for implementations to support FP16 without the standard doing anything, and I think everyone should be working on that now. I will probably implement float16 and float128 in BigMPI in the near future.<div><br></div><div>The other issue to consider here is that FP16 support can be achieved in everything except reductions (or accumulate) with 2x MPI_BYTE, and FP16 in the context of reductions is absolutely horrifying. As far as I know, everyone using FP16 today uses FP32 accumulators, and I'd hope that implementers would make some effort to ensure reductions give something near the right answer, likely by internally using FP32. This is related to my ongoing interest in doing something to enable support for Kahan or otherwise enhanced summation in reductions. 
I've already prototyped this in BigMPI, but relying on user-defined reductions has a performance tax, particularly on machines like Blue Gene.<br><div><div><br></div><div>Jeff<br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 21, 2017 at 11:54 PM, William Gropp <span dir="ltr"><<a href="mailto:wgropp@illinois.edu" target="_blank">wgropp@illinois.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div style="word-wrap:break-word">Yes, but note this language about language interoperability, from page 659, lines 4ff:<div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>If a datatype defined in one language is used for a communication call in another language, then the message sent will be identical to the message that would be sent from the first language: the same communication buffer is accessed, and the same representation conversion is performed, if needed. All predefined datatypes can be used in datatype constructors in any language.</div></blockquote><div><br></div><div>There is more on page 628. Some comments on this page are references to systems where one language used 128 bits for a 16-byte double and a different language used 80 bits for a 16-byte double (wasting 6 bytes, but keeping simple alignment rules). </div><div><br></div><div>This means that MPI_REAL2 can be used *now* by MPI implementations if any of the supported compilers provide a 2-byte real format. 
In the extremely unlikely event that C and Fortran supported different formats of 2-byte floats, MPI_REAL2 would refer to the Fortran version, and a new type would be needed for C.</div><div><br></div><div>Jeff’s idea has merit, particularly if the relevant standard is nearly universally adopted (there used to be many more floating point formats, which is why MPI didn’t require a specific one).</div><div><br></div><div>Bill</div><div><br></div><div><br></div><div><span class=""><br><div>
<div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word"><div style="color:rgb(0,0,0);letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;word-wrap:break-word"><div style="word-wrap:break-word"><div style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px">William Gropp<br>Interim Director and Chief Scientist, NCSA<br>Thomas M. Siebel Chair in Computer Science<br>University of Illinois Urbana-Champaign</div><br class="m_-5458169542250684964Apple-interchange-newline"></div></div><br class="m_-5458169542250684964Apple-interchange-newline"></div><br class="m_-5458169542250684964Apple-interchange-newline"><br class="m_-5458169542250684964Apple-interchange-newline">
</div>
<br></span><div><div class="h5"><div><blockquote type="cite"><div>On Jun 22, 2017, at 12:24 AM, Atsushi HORI <<a href="mailto:ahori@riken.jp" target="_blank">ahori@riken.jp</a>> wrote:</div><br class="m_-5458169542250684964Apple-interchange-newline"><div><div>Hello, Bill,<br><br><blockquote type="cite">2017/06/22 14:03, William Gropp <<a href="mailto:wgropp@illinois.edu" target="_blank">wgropp@illinois.edu</a>> wrote:<br><br>On timing, note that MPI already defined optional types. One of these is MPI_REAL2, which is a 2-byte floating-point type - that is, FP16. (See p. 25 line 36, p. 177 line 2, p. 540 line 10, and p. 674 line 38.) Was MPI_REAL2 discussed?<br></blockquote><br>I did not notice that, and there was no discussion about MPI_REAL2 (unless I missed it).<br><br>I checked the 3.1 standard as you pointed out, but MPI_REAL2 is defined as a Fortran type. I believe that there is no C predefined type for FP16 (or 'MPI_HALF') in MPI 3.1. This is just because ISO C does not define FP16 yet. <br><br><br>— from p. 25, MPI 3.1:<br>MPI requires support of these datatypes, which match the basic datatypes of Fortran and ISO C. 
Additional MPI datatypes should be provided if the host language has additional data types: MPI_DOUBLE_COMPLEX for double precision complex in Fortran declared to be of type DOUBLE COMPLEX; MPI_REAL2, MPI_REAL4, and MPI_REAL8 for Fortran reals, declared to be of type REAL*2, REAL*4 and REAL*8, respectively; MPI_INTEGER1, MPI_INTEGER2, and MPI_INTEGER4 for Fortran integers, declared to be of type INTEGER*1, INTEGER*2, and INTEGER*4, respectively; etc.<br><br>-----<br>Atsushi HORI<br><a href="mailto:ahori@riken.jp" target="_blank">ahori@riken.jp</a><br><a href="http://aics-sys.riken.jp" target="_blank">http://aics-sys.riken.jp</a><br><br><br><br><br>______________________________<wbr>_________________<br>mpiwg-p2p mailing list<br><a href="mailto:mpiwg-p2p@lists.mpi-forum.org" target="_blank">mpiwg-p2p@lists.mpi-forum.org</a><br><a href="https://lists.mpi-forum.org/mailman/listinfo/mpiwg-p2p" target="_blank">https://lists.mpi-forum.org/<wbr>mailman/listinfo/mpiwg-p2p</a></div></div></blockquote></div><br></div></div></div></div></div><br>______________________________<wbr>_________________<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>
</div></div></div></div></div>