Refactored and modified private api for resize functional op #6191


Merged — 5 commits merged into pytorch:main from refactor-resize-max-size on Jun 23, 2022

Conversation

@vfdev-5 (Collaborator) commented Jun 22, 2022

Description:

  • Refactored output size computation into a single method _compute_output_size and removed the max_size arg from F_t.resize and F_pil.resize (a sketch of such a helper is shown after this list).
    • We can also reuse _compute_output_size in the prototype API for bounding boxes (right now max_size is not taken into account there).
  • Minor changes and cosmetic fixes in tests.
  • I also have to temporarily modify the prototype functional implementations resize_image_*. This will be fixed in "[proto] Improvements for functional API and tests" #6187.
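[Editor's note] For context, here is a minimal sketch of what such a _compute_output_size helper could look like, assuming the usual smaller-edge/max_size semantics of resize. The function name comes from this PR, but the body below is an illustration, not the PR's actual diff.

```python
from typing import List, Optional, Tuple


def _compute_output_size(
    image_size: Tuple[int, int], size: List[int], max_size: Optional[int] = None
) -> List[int]:
    """Sketch of a shared output-size helper (assumed body, not the PR's diff).

    If ``size`` has a single element, the smaller edge is resized to it while
    keeping the aspect ratio, and ``max_size`` caps the longer edge; otherwise
    ``size`` is taken as the exact (height, width) output.
    """
    if len(size) == 1:
        h, w = image_size
        short, long = (w, h) if w <= h else (h, w)
        new_short = size[0]
        new_long = int(new_short * long / short)

        if max_size is not None:
            if max_size <= new_short:
                raise ValueError(
                    f"max_size = {max_size} must be strictly greater than the "
                    f"requested size for the smaller edge, size = {size}"
                )
            if new_long > max_size:
                new_short, new_long = int(max_size * new_short / new_long), max_size

        new_w, new_h = (new_short, new_long) if w <= h else (new_long, new_short)
        return [new_h, new_w]

    if max_size is not None:
        raise ValueError("max_size should only be passed if size specifies the length of the smaller edge")
    return list(size)
```

With the output size resolved up front in functional.resize, the backend kernels no longer need the max_size argument, which is why it could be dropped from F_t.resize and F_pil.resize.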

@datumbox (Contributor) left a comment

Thanks for the PR @vfdev-5. I have a few questions for you below. Let me know what you think.

@@ -412,19 +410,15 @@ def test_resize_scripted(self, dt, size, max_size, interpolation, device):
     # This is a trivial cast to float of uint8 data to test all cases
     tensor = tensor.to(dt)
     if max_size is not None and len(size) != 1:
-        pytest.xfail("with max_size, size must be a sequence with 2 elements")
+        pytest.skip("Size should be an int or a sequence of length 1 if max_size is specified")
@datumbox (Contributor)

Question: Why skip instead of xfail?

@vfdev-5 (Collaborator, Author)

That's a good question :)

According to https://docs.pytest.org/en/stable/how-to/skipping.html, we skip when, for example, running Windows-only tests on a non-Windows platform, and we xfail when we expect a test to fail for some reason.

In our case, the configuration "max_size is not None and len(size) != 1" cannot be tested, as we explicitly raise an error. The proper solution is to catch the error and check the message text with pytest.raises.

I felt that skipping is more appropriate than xfail, but it is a matter of taste. If you think it is better to revert to xfail, I can do that and only fix the message, which is incorrect.

@datumbox (Contributor)

I was expecting a pytest.raises here, if I'm honest, but no strong opinions.
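[Editor's note] A minimal sketch of the pytest.raises approach discussed above, assuming resize raises a ValueError whose message mentions max_size; the exception type and message are assumptions here, not quoted from the test suite.

```python
import pytest
import torch
from torchvision.transforms import functional as F


def test_resize_rejects_max_size_with_two_element_size():
    # Instead of skipping the invalid configuration, assert that resize
    # refuses max_size when size is a 2-element sequence.
    # The error type and message matched below are assumptions.
    img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)
    with pytest.raises(ValueError, match="max_size"):
        F.resize(img, size=[16, 16], max_size=20)
```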

@datumbox (Contributor) left a comment

LGTM, our offline discussion covers all my points. I'll let @vfdev-5 summarize what we discussed.

@vfdev-5 merged commit aeafa91 into pytorch:main Jun 23, 2022
@vfdev-5 deleted the refactor-resize-max-size branch June 23, 2022 10:13
facebook-github-bot pushed a commit that referenced this pull request Jun 27, 2022
…#6191)

Summary:
* Refactored and modified private api for resize functional op

* Fixed failures

* More updates

* Fixed flake8

Reviewed By: NicolasHug

Differential Revision: D37450358

fbshipit-source-id: 173d75ecb293eb418e9e7df9ff9eb152dd17f4ac
@vfdev-5 mentioned this pull request Jun 28, 2022
@NicolasHug (Member) commented Jun 28, 2022

@vfdev-5 @datumbox

@vfdev-5 pointed me to this PR from our discussions in #6209 (comment)

As discussed over there, functional_pil and functional_tensor are technically public, unfortunately. I understand some of these changes are useful for the prototype area (where breakages are OK), but would it be possible to preserve BC here? It looks like the removal of the parameters isn't strictly necessary.
For example, perhaps it's possible to call _compute_output_size within functional_pil.resize and functional_tensor.resize instead of calling it in functional.resize?

@vfdev-5 (Collaborator, Author) commented Jun 28, 2022

For example, perhaps it's possible to call _compute_output_size within functional_pil.resize and functional_tensor.resize instead of calling it in functional.resize?

It's possible if we introduce an additional _utils.py as a common place shared by functional_pil.py and functional_tensor.py.

@NicolasHug (Member)

I guess we could just define it in functional_tensor and have functional_pil rely on it?
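[Editor's note] For illustration, a rough single-file sketch of this layout, with the PIL-side module shown in comments to keep the snippet self-contained. File paths, signatures, and bodies are assumptions rather than the actual torchvision sources.

```python
# --- torchvision/transforms/functional_tensor.py (hypothetical layout) ------
def _compute_output_size(image_size, size, max_size=None):
    # The shared output-size logic would live here, defined once on the tensor
    # side (see the earlier sketch in this thread for a possible body).
    raise NotImplementedError


def resize(img, size, interpolation="bilinear", max_size=None):
    # Resolve the output size internally so the public signature keeps max_size.
    output_size = _compute_output_size((img.shape[-2], img.shape[-1]), size, max_size)
    ...  # interpolate img to output_size


# --- torchvision/transforms/functional_pil.py (hypothetical layout) ---------
# from . import functional_tensor as F_t
#
# def resize(img, size, interpolation=Image.BILINEAR, max_size=None):
#     # PIL.Image.size is (width, height), so reverse it to (height, width).
#     output_size = F_t._compute_output_size(img.size[::-1], size, max_size)
#     ...  # img.resize(output_size[::-1], interpolation)
```

This would keep max_size in both backend signatures, addressing the BC concern above, while still avoiding duplicated output-size logic.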

@datumbox (Contributor)

@NicolasHug as far as I know those APIs are private and this change has been done many versions ago. I don't think we should worry about BC here.

@NicolasHug (Member)

this change has been done many versions ago

I'm not sure I understand. Wasn't this PR merged 5 days ago?

those APIs are private

Unfortunately there is absolutely nothing that indicates that they are private.

@datumbox (Contributor)

Sorry, I was referring to when the API became private, not to this PR. Let's continue our discussion at #6209 (comment) to avoid further confusion.
