development plan of "functional_tensor" #2109
Comments
Hi, Great questions! Some of the ideas behind the existence of `functional_tensor.py` were discussed in #1375. Once the transforms work on Tensor, the plan is to dispatch on the input type inside each transform, roughly:

```python
# in Rotate
if _is_pil_image(input):
    return pil_rotate(input, ...)
return tensor_rotate(input, ...)
# define `pil_rotate` with `torch.jit.unused` so that
# it works in torchscript
```

The plan is that all transforms should support tensors as arguments. This means that they will work on both CPU and CUDA, and be torchscriptable. The current missing functions that we need for parity mostly require changes in PyTorch. Plus, we are also working on adding image decoding operators in torchvision, which will mean that the entire pipeline can be torchscripted and work on torch Tensors. In the long term, the idea would be that, with the help of NestedTensor, we would be able to do the preprocessing of batches on the CPU / GPU in parallel in an efficient way, while using the same abstractions. But that is still far in the future. Let me know if you have questions, or if there are aspects of this that interest you and that you would like to collaborate on.
Hi @fmassa, Thanks very much for your detailed explanation! We are evaluating whether it's the right time to re-implement Tensor-based transforms to be compatible with the existing numpy transforms, or to replace them directly. We just completed the alpha version and don't have backward-compatibility problems yet; do you have any suggestions here? We are facing exactly the same Pros (GPU, tracing, autodiff, etc.) and Cons (NCHW, unsupported cases, etc.) as you shared in #1375. May I know the schedule of your development plan? For example, when do you plan to release the compatible version of torchvision? Thanks.
Hi @Nic-Ma, we are planning to give all transforms in torchvision tensor support for the next release, which should be in the next 3-4 months. This won't include nested tensor support, only regular Tensors for now.
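For readers who land here after that release: a minimal sketch of what the tensor path looks like, assuming torchvision 0.8 or later with the tensor-enabled transforms (the exact set of supported ops may differ by version).

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# A CHW float tensor goes through the transforms directly, with no PIL
# round-trip; the same pipeline also works on CUDA tensors.
img = torch.rand(3, 224, 224)

pipeline = nn.Sequential(
    T.Resize(128),
    T.CenterCrop(112),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
)

out = pipeline(img)
scripted = torch.jit.script(pipeline)  # nn.Sequential (not Compose) is scriptable
print(scripted(img).shape)  # torch.Size([3, 112, 112])
```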
I would say that in the future the tensor APIs might converge and interoperability will be possible. It's already the case for some operations between PyTorch and numpy, where you can pass torch tensors to numpy functions and get back torch tensors. Let me know if you have further questions.
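As a small illustration of the interoperability referred to above (a sketch; which NumPy functions round-trip like this depends on the NumPy and PyTorch versions installed):

```python
import numpy as np
import torch

t = torch.linspace(0, 3, steps=4)

# Many NumPy ufuncs accept a CPU torch Tensor and, via the tensor's
# __array_wrap__ hook, hand back a torch Tensor instead of an ndarray.
out = np.sin(t)
print(type(out))  # typically <class 'torch.Tensor'>
```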
Hi @fmassa, Thanks very much for sharing!
This work has been finished with help from @vfdev-5, and will be present in the next release of torchvision.
❓ Questions and Help
Hi torchvision team,
This is Nic from NVIDIA, thanks for sharing your great work on data processing solutions!
Actually, I found only 2 Tensor-only transforms; the others are for PIL or numpy. May I ask about the development plan for "functional_tensor.py": will all transforms get Tensor implementations, or will the transforms implicitly detect the data type and use "functional.py" or "functional_tensor.py" accordingly?
Thanks in advance.