
Question: complex number support in Array API? #102


Closed
leofang opened this issue Dec 18, 2020 · 7 comments · Fixed by #105
Labels: topic: Complex Data Types

leofang (Contributor) commented Dec 18, 2020

I am wondering why complex numbers are not considered in the Array API, and whether we could give a second thought to making them native dtypes in the API.

The Dataframe API is not considered in the rest of this issue 🙂

I spent quite some time making sure complex numbers are first-class citizens in CuPy, as many scientific computing applications require them. In quantum mechanics, for example, complex numbers are a cornerstone and we can't live without them. Even in some of the machine learning / deep learning work that we do, whether classical or quantum (yes, for those who don't already know, there is quantum machine learning 😁), we need complex numbers in various places, such as building tensors or communicating with simulations, especially in physics-aware neural networks. So it is a great pain for us not to be able to build and operate on complex arrays natively.

To date, complex numbers are an integral part of mainstream programming languages. C has had them since C99, and so has C++ (std::complex). Our beloved Python has complex too, so it is just weird, IMHO, that they are excluded when we talk about native dtypes.
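
For illustration, this is what that native support looks like in Python and NumPy (complex64/complex128 being exactly the dtypes I would like the standard to adopt):

```python
import numpy as np

# Complex numbers are a built-in Python type...
z = 3 + 4j
print(z.real, z.imag)   # 3.0 4.0
print(abs(z))           # 5.0 (the magnitude)

# ...and native NumPy dtypes.
a = np.array([1 + 2j, 3 - 1j], dtype=np.complex64)
print(a.dtype)          # complex64
print(np.conj(a))       # [1.-2.j 3.+1.j]
```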

As for language extensions supporting GPUs: in CUDA we have thrust::complex (which currently supports complex64/complex128) as a clone of std::complex, and libcu++ will likely take over this role from Thrust; in ROCm there is also a Thrust clone, plus native support in HIP. So at least on NVIDIA/AMD GPUs we are covered.

Turning to library support, as far as I know

The reason I also mention complex32 above is that CUDA now provides complex32 support in some of its libraries, such as cuBLAS and cuFFT. With special hardware acceleration for float16, complex32 is expected to benefit as well; see the preliminary FFT test in cupy/cupy#4407. Hopefully, with complex number support in ML/DL frameworks (complex64 and complex128 are enough to start), many more applications will benefit too.

I am aware that the Array API picks DLPack as the primary protocol for zero-copy data exchange, and that DLPack currently lacks complex number support. This is one of the reasons I do not like DLPack. While I will create a separate issue to discuss alternatives to DLPack, I think revising DLPack's format is fairly straightforward (and should be done ASAP regardless of the Array API standardization, given the needs of ML/DL libraries).

Disclaimer: This issue is merely for my research interests (relevant to my and other colleagues' work) and is not driven by CuPy, one of the Array API stakeholders I will represent.

leofang (Contributor, Author) commented Dec 18, 2020

I should add that if we were to support complex numbers, all sorting and comparison behaviors (e.g. max/min/sort) should follow the ongoing changes in NumPy (e.g. numpy/numpy#15981 and the linked issues/PRs therein) instead of NumPy's current lexicographic order (e.g. numpy/numpy#8151), which is difficult to implement. (For example, in CuPy I had to add extra template specializations for complex numbers to be compatible with NumPy.)
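
For reference, NumPy's current lexicographic behavior (ordering by real part first, with ties broken by the imaginary part) looks like this:

```python
import numpy as np

a = np.array([1 + 2j, 1 + 1j, 0 + 3j])

# Sorting is lexicographic: real part first, then imaginary part.
print(np.sort(a))   # [0.+3.j 1.+1.j 1.+2.j]

# max/min comparisons follow the same ordering.
print(np.max(a))    # (1+2j)
```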

rgommers (Member) commented

Thanks for bringing this up @leofang. You're not the first to ask, so it's good to document this decision and have a summary of the current status.

The main issue is that not all array libraries have good support for complex dtypes yet. Complex numbers are quite important for science, but only of marginal importance to deep learning; TensorFlow, PyTorch, and MXNet all lack great support. When a feature has such partial support, we have in some cases chosen to include it in the current version of the standard if it was not too difficult for those libraries to implement. But complex support is a ton of work, so it would be a feature fully supported by only ~50% of libraries.

Excluding it from the standard doesn't mean CuPy cannot have it - it just means it is not part of the standard, so those dtypes shouldn't be in the separate array-api-supporting namespace. This signals to users that, as of today, one cannot write complex-valued code that is portable between libraries. They can still use complex in CuPy (and NumPy, JAX).

PyTorch's implementation is indeed getting there, but is not yet complete. TensorFlow also has an implementation in progress, but is further behind AFAIK. Twelve months from now, I believe it should be feasible to add complex64 and complex128 to the standard.

We had a similar discussion about bfloat16 - that's a dtype that deep learning libraries consider much more important than complex64/128, but NumPy doesn't have it and is quite reluctant to add it.

> I should add that if we were to support complex numbers, all sorting and comparison behaviors (e.g. max/min/sort) should follow the ongoing changes in NumPy

Yep, that's been a mess for a long time. One principle of this standard is that we try not to innovate too much; if a feature is still under discussion or being changed in a library, in most cases we should wait and see (perhaps ensuring that libraries don't make incompatible choices), and only add it to the standard once things have stabilized. So for an issue like "sorting behaviour for complex numbers" I'd choose to leave it out, since sorting isn't all that important for the physics/engineering applications that need complex numbers.

> The reason I also mention complex32 above is that CUDA now provides complex32 support in some of its libraries, such as cuBLAS and cuFFT. With special hardware acceleration for float16, complex32 is expected to benefit as well

The CuPy implementation seems to use float16, but I would have guessed that the hardware acceleration is for bfloat16. Can it accelerate regular half-precision too?

leofang (Contributor, Author) commented Dec 28, 2020

Thanks @rgommers for the detailed summary (and for offline discussion on this) 🙏

> The main issue is that not all array libraries have good support for complex dtypes yet. Complex numbers are quite important for science, but only of marginal importance to deep learning.

I hope my original post gave enough motivation from the ML/DL perspective. Quantum stuff aside, while it was not the case in the early days of ML/DL, nowadays some (and a growing number of) ML/DL applications do need complex numbers. If necessary I could compile a list of references, but I hope the recent movement in the major ML frameworks, as you mentioned, speaks for itself.

> But complex support is a ton of work, so it would be a feature fully supported by only ~50% of libraries.

I understand the amount of work needed; I've been there. But from the perspective of native type support in the mainstream programming languages (I have C/C++/Fortran/Python/CUDA/HIP in mind), incomplete coverage of types in the specification is just weird, IMHO. We all know we'll get there one day, so why not think harder about it now?

In my experience, any implementation that leaves complex behind pays a significant price later when adding the support, so at the very least code maintainers need a clear expectation that it will come one day (and soon).

Perhaps we can at least specify the type conversion rules involving complex, so that libraries with the resources to support complex numbers (or that already do) can act accordingly, and mark this support as experimental in the v1 API (yes, v1, the current one), giving others time to catch up in v2? I would love to see people committing to a convergence.

> Excluding it from the standard doesn't mean CuPy cannot have it - it just means it is not part of the standard, so those dtypes shouldn't be in the separate array-api-supporting namespace. This signals to users that, as of today, one cannot write complex-valued code that is portable between libraries. They can still use complex in CuPy (and NumPy, JAX).

If anything, this just strengthens my concern. Code that requires complex cannot be migrated to the array-api namespace in the next 1-2 years (or longer), even if the subset of participating libraries it uses (say CuPy + JAX) already supports complex. Doesn't look nice...

> PyTorch's implementation is indeed getting there, but is not yet complete. TensorFlow also has an implementation in progress, but is further behind AFAIK. Twelve months from now, I believe it should be feasible to add complex64 and complex128 to the standard.

As I mentioned, it is nice to have a rough timeline to set expectations 🙂

> We had a similar discussion about bfloat16 - that's a dtype that deep learning libraries consider much more important than complex64/128, but NumPy doesn't have it and is quite reluctant to add it.

I think in a sense this is justifiable. bfloat16 is not widespread after all, and just like float16 it is AFAIK not usable in an end-to-end workflow, only in hotspots where reduced accuracy is known not to hurt. Most importantly, bfloat16 is not a native type in most of the languages I mentioned above, so its absence matters far less than lacking complex, IMHO.

> > The reason I also mention complex32 above is that CUDA now provides complex32 support in some of its libraries, such as cuBLAS and cuFFT. With special hardware acceleration for float16, complex32 is expected to benefit as well
>
> The CuPy implementation seems to use float16, but I would have guessed that the hardware acceleration is for bfloat16. Can it accelerate regular half-precision too?

No, it uses complex32. You may see float16 here and there, but that's because numpy.complex32 does not exist, so we had to work around it in the demo. For complex32 on CUDA CC 6.0 and 7.x we saw the expected 2x speedup in C2C FFT. I haven't tested bfloat16 yet, so I can't comment.
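
To give an idea of the workaround, here is a minimal sketch assuming interleaved (real, imag) float16 storage; this is only an illustration, not the actual demo code:

```python
import numpy as np

# Emulate a length-n complex32 array as 2*n interleaved float16 values,
# since numpy.complex32 does not exist.
n = 4
buf = np.zeros(2 * n, dtype=np.float16)
buf[0::2] = [1, 2, 3, 4]   # real parts
buf[1::2] = [5, 6, 7, 8]   # imaginary parts

# Upcast to complex64 whenever an operation needs a real complex dtype.
z = buf[0::2].astype(np.float32) + 1j * buf[1::2].astype(np.float32)
print(z.dtype, z)          # complex64 [1.+5.j 2.+6.j 3.+7.j 4.+8.j]
```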

rgommers (Member) commented

> I understand the amount of work needed; I've been there. But from the perspective of native type support in the mainstream programming languages (I have C/C++/Fortran/Python/CUDA/HIP in mind), incomplete coverage of types in the specification is just weird, IMHO. We all know we'll get there one day, so why not think harder about it now?
>
> In my experience, any implementation that leaves complex behind pays a significant price later when adding the support, so at the very least code maintainers need a clear expectation that it will come one day (and soon).

This makes sense. How about we add a prominent note on the "Data Types" page saying that we do expect to add complex64 and complex128 once library support has caught up?

> Perhaps we can at least specify the type conversion rules involving complex, so that libraries with the resources to support complex numbers (or that already do) can act accordingly, and mark this support as experimental in the v1 API (yes, v1, the current one), giving others time to catch up in v2? I would love to see people committing to a convergence.

The casting rule will be similar to float (and is unlikely to be able to sensibly deviate anyway): Python complex ..> complex64 --> complex128. I assume you mean mixed float/complex rules, but those are in the same boat as mixed int/float I'd think.
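
For concreteness, NumPy's existing behavior for those mixed rules, which a standard rule would presumably mirror:

```python
import numpy as np

# Mixing float and complex yields complex, with precision
# taken from the widest input.
print(np.result_type(np.float32, np.complex64))     # complex64
print(np.result_type(np.float64, np.complex64))     # complex128
print(np.result_type(np.complex64, np.complex128))  # complex128
```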

> If anything, this just strengthens my concern. Code that requires complex cannot be migrated to the array-api namespace in the next 1-2 years (or longer), even if the subset of participating libraries it uses (say CuPy + JAX) already supports complex. Doesn't look nice...

I suspect that the NumPy/CuPy/JAX implementations of the API in a new namespace are still going to accept complex dtypes, simply because raising errors is significantly more work than aliasing functions (maybe with a signature change if needed) to existing implementations. So in practice, nothing much will change.
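
A hypothetical sketch of such an aliasing shim (the function name follows the standard, but this is not any library's actual code):

```python
import numpy as np

def permute_dims(x, axes):
    # Standard-compliant name aliased to np.transpose; it accepts
    # complex arrays simply because the underlying function does.
    return np.transpose(x, axes)

a = np.ones((2, 3), dtype=np.complex128)
print(permute_dims(a, (1, 0)).shape)   # (3, 2)
```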

> No, it uses complex32. You may see float16 here and there, but that's because numpy.complex32 does not exist, so we had to work around it in the demo. For complex32 on CUDA CC 6.0 and 7.x we saw the expected 2x speedup in C2C FFT. I haven't tested bfloat16 yet, so I can't comment.

Interesting, thanks. I can't remember any requests for NumPy to add complex32. It'd be a bit of work, but would probably be accepted.

leofang (Contributor, Author) commented Dec 29, 2020

> This makes sense. How about we add a prominent note on the "Data Types" page saying that we do expect to add complex64 and complex128 once library support has caught up?

This could be a way to go, certainly! And we could also add the conversion rules that we expect to work.

> > Perhaps we can at least specify the type conversion rules involving complex, so that libraries with the resources to support complex numbers (or that already do) can act accordingly, and mark this support as experimental in the v1 API (yes, v1, the current one), giving others time to catch up in v2? I would love to see people committing to a convergence.
>
> The casting rule will be similar to float (and is unlikely to be able to sensibly deviate anyway): Python complex ..> complex64 --> complex128. I assume you mean mixed float/complex rules, but those are in the same boat as mixed int/float I'd think.

Yes, I think the rules for converting within complex, and between real and complex, should be clear, with one possible exception: we might need to decide what to do with ComplexWarning: Casting complex values to real discards the imaginary part. CuPy currently follows NumPy's behavior whenever possible, unless it doesn't make sense or we mistakenly drop the ball 😂 But personally I find this warning annoying and think it should simply be turned into an error. Just my two cents.
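
For illustration, this is today's NumPy behavior, plus how a user can already opt into the error I'd prefer:

```python
import numpy as np
import warnings

a = np.array([1 + 2j, 3 + 0j])
a.astype(np.float64)   # emits ComplexWarning; imaginary parts are silently dropped

# Promote the warning to an error. (ComplexWarning moved to
# np.exceptions in NumPy 2.0.)
warnings.simplefilter("error", np.ComplexWarning)
try:
    a.astype(np.float64)
except np.ComplexWarning as e:
    print("error:", e)
```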

I guess the point I'm trying to make is simply that complex needs to appear in the Type Promotion Rules section, so that people know it is being considered and is coming, and so that library providers can act accordingly.

> I suspect that the NumPy/CuPy/JAX implementations of the API in a new namespace are still going to accept complex dtypes, simply because raising errors is significantly more work than aliasing functions (maybe with a signature change if needed) to existing implementations. So in practice, nothing much will change.

I think in practice this is likely the case indeed. I do see that the Data Types section of the API standard states:

"A conforming implementation of the array API standard may provide and support additional data types beyond those described in this specification."

so it's probably OK. Thanks.

> Interesting, thanks. I can't remember any requests for NumPy to add complex32. It'd be a bit of work, but would probably be accepted.

The issue has been raised for quite some time: numpy/numpy#14753. It was mentioned that a dtype system update was coming, but it was not clear what specifically that referred to. BTW, I noticed there's torch.complex32 in PyTorch, but I need to figure out how their type system is implemented (certainly not by following NumPy's).

rgommers (Member) commented

> We might need to decide what to do with ComplexWarning: Casting complex values to real discards the imaginary part. CuPy currently follows NumPy's behavior whenever possible, unless it doesn't make sense or we mistakenly drop the ball 😂 But personally I find this warning annoying and think it should simply be turned into an error. Just my two cents.

+1 to that. The fix for the user is identical either way, so better to get an explicit error - it's almost always user error anyway.

> The issue has been raised for quite some time: numpy/numpy#14753. It was mentioned that a dtype system update was coming, but it was not clear what specifically that referred to.

https://numpy.org/neps/nep-0041-improved-dtype-support.html
https://numpy.org/neps/nep-0042-new-dtypes.html
https://numpy.org/neps/nep-0043-extensible-ufuncs.html

Large parts of that are landing in NumPy 1.20.0 next month.

> BTW, I noticed there's torch.complex32 in PyTorch, but I need to figure out how their type system is implemented (certainly not by following NumPy's).

Some PyTorch casting rules are getting closer; for example, the automatic integer-to-float promotion for functions that return floats is almost complete and NumPy-like.

Documentation is a bit limited; the casting rule docs are at https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc. The implementation under the hood is very different; I like http://blog.ezyang.com/2019/05/pytorch-internals/ as a guide.

rgommers added a commit to rgommers/array-api that referenced this issue Dec 30, 2020:

This introduces a new "Future extension" admonition, which uses the MyST "colon fence" (requires a newer version of MyST).

Closes gh-102
rgommers (Member) commented

Added the note in gh-105, which should address this issue.
