
16-bit floating-point support for C/C++ #65

Open
@jeffhammond

Description


Problem

There is interest in supporting 16-bit floating point (henceforth FP16) in MPI.

See https://lists.mpi-forum.org/pipermail/mpiwg-p2p/2017-June/thread.html

Proposal

Add a type associated with FP16 that does not depend on the Fortran definition (MPI_REAL2).

See references. There are various non-standard names for FP16, including __fp16 and short float; the candidate ISO name is _Float16. It may be prudent for MPI to add its own type (along the lines of MPI_Count and MPI_Aint), since ISO C and C++ have not yet standardized names and those names may not end up identical. The typedef would be MPI_Float16, which could be deprecated as soon as there is an ISO C/C++ name.

Changes to the Text

TODO

Impact on Implementations

The implementation of FP16 is straightforward: follow whatever code exists for MPI_REAL2 today, or copy the FP32 code with s/32/16/g.

A high-quality implementation may need to take special care when implementing reduction operators, since accumulating in FP16 can lose precision.

Impact on Users

FP16 support will be available independent of anything related to Fortran.

Users working on machine learning generally do not use Fortran (except perhaps indirectly via BLAS) and are unlikely to be satisfied with MPI_REAL2, particularly since an implementation may omit support for it when no Fortran compiler is present.

References

Labels

mpi-6: For inclusion in the MPI 5.1 or 6.0 standard
wg-p2p: Point-to-Point Working Group