
result_type for explicit promotion #91


Closed
ezyang opened this issue Nov 16, 2020 · 2 comments

ezyang commented Nov 16, 2020

Currently the recommendation for code that needs to promote is:

Therefore we had to make the choice to leave the rules for “mixed kind dtype casting” undefined - when users want to write portable code, they should avoid this situation or use explicit casts to obtain the same results from different array libraries.

In eager mode frameworks, making operations portable in this way can also result in extra memory usage: you have to do the cast first and then the operation, whereas the cast might otherwise have been fused internally by the framework.

However, it's not clear how important this actually is in practice, since at least in PyTorch a lot of promotion is in fact implemented by first doing a cast and then doing a normal operation (instead of implementing quadratically many kernels). I don't know off the top of my head which operations this would be relevant for.
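For concreteness, a minimal sketch of the explicit-cast pattern being recommended (NumPy-flavoured; the `portable_add` helper is hypothetical, and `result_type`/`astype` are used as in the draft spec):

```python
import numpy as np

def portable_add(x, y, xp=np):
    # Hypothetical helper: compute the result dtype explicitly and cast both
    # operands before the operation, so the outcome does not depend on a
    # library's (undefined) mixed-kind promotion rules.
    dtype = xp.result_type(x, y)
    return x.astype(dtype) + y.astype(dtype)

x = np.arange(3, dtype=np.int32)
y = np.ones(3, dtype=np.float64)
print(portable_add(x, y).dtype)  # float64
```

The two `astype` calls are exactly where the extra temporaries (and hence the extra memory in eager mode) come from.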

rgommers added a commit to rgommers/array-api that referenced this issue Dec 16, 2020
@rgommers
Member

Thanks @ezyang, good point about performance in eager mode. I'd be inclined to say the damage is limited enough that it's a cost worth paying (for now at least). And if someone finds it to be a bottleneck, that'd be (a) good data, and (b) likely possible to work around with a try-except (only do the explicit cast if mixed int-float raises).
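A rough sketch of that try-except idea (hypothetical helper name; assumes the library raises `TypeError` when it refuses mixed int-float operations):

```python
def add_with_fallback(x, y, xp):
    # Try the mixed-kind operation directly; only pay for the explicit
    # cast on libraries that raise instead of promoting across kinds.
    try:
        return x + y
    except TypeError:
        dtype = xp.result_type(x, y)
        return x.astype(dtype) + y.astype(dtype)
```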

I just tested TensorFlow 2.4.0 and was surprised that tf.experimental.numpy doesn't raise there (regular TF does); it looks like it does the same aggressive upcasting to float64 as NumPy. CuPy also follows NumPy, while JAX does allow mixed operations but is a bit better about keeping float32. PyTorch is also implementing mixed int-float casting, and IIRC will do the same as JAX (e.g. int32 + float32 -> float32 rather than float64). So it could well be that once the standard is implemented, only subtle differences remain. Those are still hard to reconcile, but may be okay-ish to ignore for portability if the application is fine with float32 precision-wise.
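To illustrate the difference described above (checked informally; exact behaviour may differ between versions):

```python
import numpy as np
import torch

# NumPy upcasts mixed int32/float32 inputs all the way to float64.
print(np.result_type(np.int32, np.float32))  # float64

# PyTorch keeps the float dtype of the floating-point operand.
a = torch.ones(3, dtype=torch.int32)
b = torch.ones(3, dtype=torch.float32)
print(torch.result_type(a, b))  # torch.float32
print((a + b).dtype)            # torch.float32
```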

result_type also came up in gh-14 and gh-43; gh-99 adds it to the API spec.

@rgommers
Member

result_type was added, closing
