This repository was archived by the owner on Dec 22, 2021. It is now read-only.
Float16 and bfloat16 support for loads and stores is missing. I do not expect the SIMD extension to perform arithmetic on these formats natively in hardware (though it would be nice to expose that and emulate it transparently where hardware lacks it), but at least loads and stores should be supported, with loads that widen into 32-bit floats. Maybe not in the initial version, but these formats are very useful in machine learning and video/photo editing applications.
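For reference, here is a minimal sketch of the scalar widening that would otherwise have to be emulated for IEEE 754 binary16; the helper name `f16_to_f32` is only illustrative and not part of any proposal:

```c
#include <stdint.h>
#include <string.h>

/* Widen one IEEE 754 binary16 value to a 32-bit float.
   Sketch of the scalar emulation path; not an engine implementation. */
static float f16_to_f32(uint16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = h & 0x3FFu;
    uint32_t bits;

    if (exp == 0x1F) {
        /* infinity or NaN: keep max exponent and the mantissa payload */
        bits = sign | 0x7F800000u | (mant << 13);
    } else if (exp != 0) {
        /* normal number: rebias exponent from 15 to 127 */
        bits = sign | ((exp + (127 - 15)) << 23) | (mant << 13);
    } else if (mant != 0) {
        /* subnormal: value = mant * 2^-24, normalize the mantissa */
        int shift = 0;
        while ((mant & 0x400u) == 0) { mant <<= 1; shift++; }
        bits = sign | ((uint32_t)(113 - shift) << 23) | ((mant & 0x3FFu) << 13);
    } else {
        bits = sign;  /* signed zero */
    }

    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```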
Loads and stores are provided at the v128 vector size by this proposal. Agreed that there could be a need for float16 operations in the future for these application categories; those would need to be introduced in both scalar and SIMD form.
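To illustrate what the current v128-granularity loads give you, here is a sketch of a "widening f16x8 load" emulated with clang's `wasm_simd128.h` intrinsics. The function name `load_f16x8_as_f32x4x2` is hypothetical, and it reuses the `f16_to_f32` helper sketched above; the point is that the raw `v128.load` exists today, but the widening itself has to be done lane by lane in user code:

```c
#include <stdint.h>
#include <wasm_simd128.h>

/* Scalar widening helper sketched in the earlier comment. */
float f16_to_f32(uint16_t h);

/* Hypothetical emulation of a widening f16x8 load: read 8 binary16
   values with a plain v128.load, then widen each lane into one of
   two f32x4 vectors. */
static void load_f16x8_as_f32x4x2(const uint16_t *src, v128_t *lo, v128_t *hi) {
    v128_t raw = wasm_v128_load(src);  /* 8 x binary16, as raw bits */

    *lo = wasm_f32x4_make(
        f16_to_f32(wasm_u16x8_extract_lane(raw, 0)),
        f16_to_f32(wasm_u16x8_extract_lane(raw, 1)),
        f16_to_f32(wasm_u16x8_extract_lane(raw, 2)),
        f16_to_f32(wasm_u16x8_extract_lane(raw, 3)));
    *hi = wasm_f32x4_make(
        f16_to_f32(wasm_u16x8_extract_lane(raw, 4)),
        f16_to_f32(wasm_u16x8_extract_lane(raw, 5)),
        f16_to_f32(wasm_u16x8_extract_lane(raw, 6)),
        f16_to_f32(wasm_u16x8_extract_lane(raw, 7)));
}
```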