
Minor fix antialias arg #6209


Closed · wants to merge 3 commits

Conversation

@vfdev-5 (Collaborator) commented on Jun 28, 2022

Description:

antialias was defined as None in the public API because we deal with both the Pillow and Tensor backends. Pillow applies anti-aliasing unconditionally and there is no way to disable it, so for Pillow we keep antialias=None. For Tensors, antialias can be True or False, with None mapped to False.

The purpose of this PR is to make the antialias argument's default value and type hint more explicit in the prototype's resize_image_tensor signature and in the private resize methods. Mainly, antialias=None does not make much sense for tensors.
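
As a minimal sketch of the semantics described above (the helper name _resize_sketch and its body are illustrative assumptions, not torchvision's actual implementation):

```python
from typing import List, Optional, Union

import torch
from PIL import Image


def _resize_sketch(img: Union[Image.Image, torch.Tensor], size: List[int],
                   antialias: Optional[bool] = None):
    # Illustrative dispatcher for the two backends discussed above.
    if isinstance(img, Image.Image):
        # Pillow anti-aliases unconditionally; there is no flag to
        # disable it, so antialias=None is simply accepted here.
        return img.resize((size[1], size[0]), resample=Image.BILINEAR)
    # Tensor backend: None has historically meant "no anti-aliasing".
    antialias = bool(antialias)  # None -> False; True/False pass through
    return torch.nn.functional.interpolate(
        img.unsqueeze(0).float(), size=size,
        mode="bilinear", antialias=antialias,
    ).squeeze(0)
```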

@YosuaMichael (Contributor) left a comment

Thanks @vfdev-5, LGTM

Comment on lines -440 to -442
if antialias is None:
    antialias = False

@NicolasHug (Member) commented on Jun 28, 2022

I think this is a BC-breaking change for users who explicitly passed antialias=None, or similar patterns.

Considering the relatively low benefit of removing support for None, we might want to keep it to avoid any potential issues?

@vfdev-5 (Collaborator, Author) replied on Jun 28, 2022

This is a private API which is not intended to be used by users. I keep the public resize the same and only change the private method.

EDIT:
The main purpose is to drop the None option from resize_image_tensor:

- antialias: Optional[bool] = None,
+ antialias: bool = False,

If we want to keep the private resize API unchanged, I think we can still update resize_image_tensor.
@NicolasHug, please let me know what you think.
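
(For context, the resulting prototype signature would read roughly as in the sketch below; the surrounding parameters are assumptions modeled on torchvision's public resize API, not the exact prototype code.)

```python
from typing import List, Optional

import torch
from torchvision.transforms import InterpolationMode


def resize_image_tensor(
    image: torch.Tensor,
    size: List[int],
    interpolation: InterpolationMode = InterpolationMode.BILINEAR,
    max_size: Optional[int] = None,
    antialias: bool = False,  # previously: Optional[bool] = None
) -> torch.Tensor:
    ...  # body elided; only the signature change is illustrated
```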

@NicolasHug (Member) replied:

This function is importable without a leading underscore as

from torchvision.transforms.functional_tensor import resize

and as a result, it is likely that most users will consider it public, as this is a widely used convention throughout the pydata ecosystem.
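
(For illustration only: a hypothetical underscore-prefixed variant of the same import, showing how a conventionally private module path would look; torchvision did not ship such a module at the time.)

```python
# Public-looking path: nothing signals "private" to the reader.
from torchvision.transforms.functional_tensor import resize

# Hypothetical private-by-convention path: the leading underscore
# would warn users that the module is internal and may change.
# from torchvision.transforms._functional_tensor import resize
```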

I think it's fine to modify the prototype, but I'd rather keep the stable area untouched to avoid any potential breakage. Especially considering the low benefit, and the fact that these transforms will eventually be superseded by the prototype ones.

@vfdev-5 (Collaborator, Author) replied:

Yeah, we should have added a leading underscore to them. It's very unfortunate if people today rely on this undocumented API. In PR #3200 we removed the "PRIVATE METHOD" notice from the docstrings.
I previously refactored and removed an arg from the resize method in functional_pil and functional_tensor in #6191.
Should we restore the arg for BC and warn about the new behaviour?
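
(For reference, restoring the argument with a warning could look roughly like the sketch below; the function body and the exact message are assumptions, not code from this repository.)

```python
import warnings
from typing import List, Optional

import torch


def resize(img: torch.Tensor, size: List[int],
           antialias: Optional[bool] = None) -> torch.Tensor:
    # Sketch of a BC shim: keep accepting None, warn about the new
    # behaviour, and normalize to an explicit bool before resizing.
    if antialias is None:
        warnings.warn(
            "antialias=None is deprecated for tensor inputs and is "
            "interpreted as antialias=False; pass an explicit bool.",
            DeprecationWarning,
        )
        antialias = False
    return torch.nn.functional.interpolate(
        img.unsqueeze(0).float(), size=size,
        mode="bilinear", antialias=antialias,
    ).squeeze(0)
```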

@YosuaMichael (Contributor) replied:

Sorry, I just realized this is a public function; thanks @NicolasHug for noticing.
I agree that we shouldn't make BC-breaking changes in this case.

Looking at the update to @vfdev-5's comment, I think resize_image_tensor is under the prototype, which should be okay to change explicitly, but let's keep functional_tensor.resize unchanged.

@NicolasHug (Member) replied:

> I still have them in private space

Sorry, what do you mean? We should treat everything in functional_tensor and functional_pil as public.

@datumbox (Contributor) commented on Jun 28, 2022

@NicolasHug, this API is not public, and I believe the decision to make it so was taken about 1.5 years ago. I am happy to chat more once we are both back.

Cc @fmassa, who could provide more details, as he made the decision.

@datumbox (Contributor) replied:

I found some useful references. The last version that contained warnings that these are private APIs was 0.8: https://github.com/pytorch/vision/blob/release/0.8.0/torchvision/transforms/functional_tensor.py

After that, the decision was made to remove the documentation warning, as that was the deprecation process followed at the time.

@vfdev-5 (Collaborator, Author) replied on Jun 28, 2022

> After that the decision was made to remove the documentation warning

The decision was very quick: see #3069 (review) and #3200, and here is your issue #3071 to do that.

@NicolasHug (Member) replied:

> this API is not public

As mentioned in #6191 (comment), there is absolutely nothing that shows that this API is private, unfortunately.

It is strictly the same as what's in transforms.functional. The only difference is the existence of docstrings, which isn't a reasonable boundary marker, considering how users can discover APIs (through blogs, through auto-completion, etc.).

I do understand that this is unfortunate, but we've had various issues in the past where users assumed that some APIs were public when we meant them to be private. A recent example is #5836, where we eventually had to reinstate BC through deprecation. This causes more frustration for our users, and more work for us.

We should also note that functional_pil in particular may be used by some users as a compatibility layer between torch and PIL; as you know, HF has shown interest in this recently. They might not be the only ones.

Is there a very strong reason to break BC here? It looks like maintaining BC could be done at very little cost both in this PR and in #6191.

@YosuaMichael (Contributor) left a comment

As commented by @NicolasHug, let's keep functional_tensor.resize unchanged to prevent BC-breaking changes.

@datumbox (Contributor) left a comment

Let me add some context for @vfdev-5 and for people reading the thread, to better explain @NicolasHug's concerns at https://github.com/pytorch/vision/pull/6209/files#r908517255.

First of all, we probably all agree that the way the functional_* modules are structured is unfortunate. The APIs were communicated as private (see #2547 and #2664), but after v0.8 there is no clear indication. So, to Nicolas' point, how are users supposed to know? There is definitely room for improvement in the deprecation policies we used previously, and that's exactly what Nicolas is working on now: he is going through all the cases where this is not communicated properly, updating the docs, and adding mechanisms to let users know.

Concerning the changes in this PR, I don't think the modifications are strictly necessary for the Transforms v2 API (perhaps they can be avoided?). Nevertheless, I see a potential issue if we block this change. Whether we agree or disagree with policies introduced in old versions of TorchVision, we must respect them. The specific API was marked private using the deprecation process followed at that time (around version v0.8). I don't think we should go back and undo this now (i.e. consider it public using today's definition). This sets a very bad precedent for other parts of the API, such as the C++ side of things, where everything is considered private, or the Beta APIs, where there are no BC guarantees. So my concern here is that blocking this PR will open the gates for blocking other important work on private or Beta APIs in the future. My worry is that the next PR might require a modification to this private API and will be blocked because we are now reconsidering it to be public. Finally, please note that this is aligned with our previous practices, as several commits have been made over the last 2 years on this specific API which would probably not be considered BC-safe under the proposed definition (see #5677 #3770 #2904 #2952 #2964).

For the above reasons, and because the PR is technically correct and touches a private API, I will approve it. I will leave it to @vfdev-5 to decide whether he wants to merge it or close it, since it's not critical to his work. Again, I agree with @NicolasHug's concerns here; I'm just equally concerned that if we adopt this very strict policy it's going to come back and bite us in the near future, as it will slow down critical planned work. I think more discussions are necessary around these corner cases once we are all back at the office. The good thing is, we just released TorchVision yesterday, which means we have 3+ months until the next release, so we can always revert the decision afterwards.

@YosuaMichael dismissed their stale review on June 29, 2022 13:15

I think there are pros and cons here, and I still don't have enough context.

@vfdev-5 (Collaborator, Author) commented on Jun 14, 2023

Closing as outdated

@vfdev-5 closed this on Jun 14, 2023
@vfdev-5 deleted the minor-fix-antialias-arg branch on June 14, 2023 at 08:52