Description
Problem
There is interest in supporting 16-bit floating point (henceforth FP16) in MPI.
See https://lists.mpi-forum.org/pipermail/mpiwg-p2p/2017-June/thread.html
Proposal
Add a type associated with FP16 that does not depend on the Fortran definition (MPI_REAL2).
See references. Various non-standard names for FP16 exist, including __fp16 and short float. The candidate ISO name is _Float16. It may be prudent for MPI to add its own type (along the lines of MPI_Count and MPI_Aint), since ISO C and C++ have not yet standardized names and the two may not be identical; the typedef would be MPI_Float16, which could be deprecated as soon as there is an ISO C/C++ name.
Changes to the Text
TODO
Impact on Implementations
The implementation of FP16 is straightforward, following whatever code exists for MPI_REAL2 today, or by copying the code for FP32 with s/32/16/g.
A high-quality implementation may need to take special care when implementing reduction operations, which can lose precision in FP16 (for example, by accumulating partial results in a wider format).
Impact on Users
FP16 support will be available independently of anything related to Fortran.
Users working on machine learning do not use Fortran anywhere (except perhaps indirectly in BLAS) and are not likely to be satisfied with MPI_REAL2, particularly since an implementation may omit support for it when a Fortran compiler is not present.
References
- Half-precision floating-point format on Wikipedia
- ISO/IEC JTC1/SC22/WG14 N1945 (ISO C proposal)
- ISO/IEC JTC1/SC22/WG14 N2017 (ISO C++ proposal)
- GCC documentation for Half-Precision Floating Point and Additional Floating Types (e.g. _Float16)
- Clang/LLVM _Float16 support for C/C++ commit
- Intel® Half-Precision Floating-Point Format Conversion Instructions
- Performance Benefits of Half Precision Floats