[mpiwg-p2p] FP16 Support

Jeff Hammond jeff.science at gmail.com
Thu Jun 22 10:51:15 CDT 2017


I have created https://github.com/mpi-forum/mpi-issues/issues/65.  I will
add a different issue related to IEEE.

Jeff

On Thu, Jun 22, 2017 at 8:18 AM, Jeff Hammond <jeff.science at gmail.com>
wrote:

> I agree this is a good temporary fix, but it means that all the machine
> learning types who want FP16 support now need to learn what Fortran is and
> that it uses bytes instead of bits when specifying type width.  It also
> means that implementations don't have to support this type if the platform
> doesn't have a Fortran compiler, which is a nontrivial risk in the machine
> learning world.  Of course, it's also quite easy for implementations to
> support FP16 without the standard doing anything, and I think everyone
> should be working on that now.  I will probably implement float16 and
> float128 in BigMPI in the near future.
>
> The other issue to consider here is that FP16 support can be achieved in
> everything except reductions (or accumulate) with 2x MPI_BYTE, and FP16 in
> the context of reductions is absolutely horrifying.  As far as I know,
> everyone using FP16 today uses FP32 accumulators, and I'd hope that
> implementers would make some effort to ensure reductions give something
> near the right answer, likely by internally using FP32.  This is related to
> my ongoing interest in doing something to enable support for Kahan or
> otherwise enhanced summation in reductions.  I've already prototyped this
> in BigMPI but relying on user-defined reductions has a performance tax,
> particularly on machines like Blue Gene.
>
> Jeff
>
> On Wed, Jun 21, 2017 at 11:54 PM, William Gropp <wgropp at illinois.edu>
> wrote:
>
>> Yes, but note this language about language interoperability, from page
>> 659, lines 4ff:
>>
>> If a datatype defined in one language is used for a communication call in
>> another language, then the message sent will be identical to the message
>> that would be sent from the first language: the same communication buffer
>> is accessed, and the same representation conversion is performed, if
>> needed. All predefined datatypes can be used in datatype constructors in
>> any language.
>>
>>
>> There is more on page 628.  Some comments on that page reference systems
>> where one language used 128 bits for a 16-byte double and a different
>> language used 80 bits for a 16-byte double (wasting 6 bytes, but keeping
>> simple alignment rules).
>>
>> This means that MPI_REAL2 can be used *now* by MPI implementations if any
>> of the supported compilers provide a 2-byte real format.  In the extremely
>> unlikely event that C and Fortran supported different formats of 2-byte
>> floats, MPI_REAL2 would refer to the Fortran version, and a new type would
>> be needed for C.
>>
>> Jeff’s idea has merit, particularly if the relevant standard is nearly
>> universally adopted (there used to be many more floating point formats,
>> which is why MPI didn’t require a specific one).
>>
>> Bill
>>
>>
>>
>> William Gropp
>> Interim Director and Chief Scientist, NCSA
>> Thomas M. Siebel Chair in Computer Science
>> University of Illinois Urbana-Champaign
>>
>>
>>
>>
>>
>> On Jun 22, 2017, at 12:24 AM, Atsushi HORI <ahori at riken.jp> wrote:
>>
>> Hello, Bill,
>>
>> 2017/06/22 14:03、William Gropp <wgropp at illinois.edu> wrote;
>>
>> On timing, note that MPI already defined optional types.  One of these is
>> MPI_REAL2, which is a 2-byte floating point type - that is, FP16.  (See
>> p. 25 line 36, p. 177 line 2, p. 540 line 10, and p. 674 line 38.)  Was
>> MPI_REAL2 discussed?
>>
>>
>> I did not notice that, and there was no discussion of MPI_REAL2
>> (unless I missed it).
>>
>> I checked the 3.1 standard as you pointed out, but MPI_REAL2 is defined
>> as a Fortran type. I believe there is no C predefined type for FP16 (or
>> 'MPI_HALF') in MPI 3.1, simply because ISO C does not define FP16 yet.
>>
>>
>> — from p25, MPI 3.1,
>> MPI requires support of these datatypes, which match the basic datatypes
>> of Fortran and ISO C. Additional MPI datatypes should be provided if the
>> host language has additional data types: MPI_DOUBLE_COMPLEX for double
>> precision complex in Fortran declared to be of type DOUBLE COMPLEX;
>> MPI_REAL2, MPI_REAL4, and MPI_REAL8 for Fortran reals, declared to be of
>> type REAL*2, REAL*4 and REAL*8, respectively; MPI_INTEGER1, MPI_INTEGER2,
>> and MPI_INTEGER4 for Fortran integers, declared to be of type INTEGER*1,
>> INTEGER*2, and INTEGER*4, respectively; etc.
>>
>> -----
>> Atsushi HORI
>> ahori at riken.jp
>> http://aics-sys.riken.jp
>>
>>
>>
>>
>> _______________________________________________
>> mpiwg-p2p mailing list
>> mpiwg-p2p at lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-p2p
>>
>>
>>
>>
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>


