Moving normalize, erase to functional_tensor.py #1474
cc: @fmassa, what are your thoughts on this?
Thanks for the PR!

I think that the state of functional_tensor.py as it currently stands is temporary. We should only expose the torchvision.transforms.functional namespace to users; functional_tensor.py is an implementation detail.

What I'm thinking is to move the implementations of the PIL-based transforms into a separate file, say functional_pil.py, and have functional.py implement the dispatch to the different functions depending on the input type.
A rough implementation would be:

```python
def resize(...):
    if isinstance(img, torch.Tensor):
        return resize_tensor(img, ...)
    elif isinstance(img, Image.Image):
        return resize_pil(img, ...)
```
and have resize_pil be a @torch.jit.unused function, so that it doesn't get scripted.
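A minimal sketch of how those pieces could fit together (the resize_tensor body, the size parameter, and the bilinear mode are illustrative assumptions, not the actual torchvision code):

```python
import torch
from PIL import Image


@torch.jit.unused
def resize_pil(img, size):
    # PIL-based branch; @torch.jit.unused tells TorchScript not to
    # compile this body, so the PIL dependency never gets scripted.
    # PIL expects size as (width, height), hence the reversal.
    return img.resize(size[::-1], Image.BILINEAR)


def resize_tensor(img, size):
    # Tensor-based branch using pure torch ops, so it is scriptable.
    # Assumes a (C, H, W) float tensor and size as (height, width).
    return torch.nn.functional.interpolate(
        img.unsqueeze(0), size=size, mode="bilinear", align_corners=False
    ).squeeze(0)


def resize(img, size):
    # Public entry point: dispatch on the input type.
    if isinstance(img, torch.Tensor):
        return resize_tensor(img, size)
    return resize_pil(img, size)
```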
Does this make sense?
```diff
@@ -188,36 +188,6 @@ def to_pil_image(pic, mode=None):
     return Image.fromarray(npimg, mode=mode)
 
 
-def normalize(tensor, mean, std, inplace=False):
```
This is a breaking change, because users of torchvision.transforms.functional.normalize won't find it anymore.

At the very least, you should add from .functional_tensor import normalize, erase to functional.py.
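That is, something like the following re-export in functional.py, so the existing public import path keeps working:

```python
# in torchvision/transforms/functional.py
# Re-export so `from torchvision.transforms.functional import normalize`
# (and `erase`) keeps working after the move.
from .functional_tensor import normalize, erase
```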
Yeah, got it.
Given what I mentioned above, I'm not sure there is a need to move normalize and erase to functional_tensor.py, because we don't have any PIL-based implementation for them anyway.

Thoughts?
Yes, but if we want to do something like dispatch from functional -> functional_tensor / functional -> functional_pil, then this might help in ensuring consistency. What do you think?
@fmassa, let me know if I am on the same page as you.
My thinking is that we don't need to move normalize or erase to functional_tensor.py. Users shouldn't need to worry about the existence of functional_pil or functional_tensor, just functional.
Sure, got it! Thanks.

Thanks for the PR anyway!
This is a follow-up to #1466. Since we have decided to shift towards transforms implemented via PyTorch ops, this PR moves the existing such functions from functional.py to functional_tensor.py to keep the setup unified.
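For context, a minimal sketch of what the moved tensor-only functions look like (the normalize signature is taken from the diff above; the erase signature and both bodies are illustrative, not the exact code in this PR):

```python
import torch


def normalize(tensor, mean, std, inplace=False):
    # Normalize a (C, H, W) float tensor channel-wise: (x - mean) / std.
    if not inplace:
        tensor = tensor.clone()
    mean = torch.as_tensor(mean, dtype=tensor.dtype, device=tensor.device)
    std = torch.as_tensor(std, dtype=tensor.dtype, device=tensor.device)
    tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
    return tensor


def erase(img, i, j, h, w, v, inplace=False):
    # Erase the rectangle starting at row i, column j with height h and
    # width w by filling it with the value(s) v.
    if not inplace:
        img = img.clone()
    img[..., i:i + h, j:j + w] = v
    return img
```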