Updated fill arg typehint for affine, perspective and elastic ops #6595

Merged
vfdev-5 merged 6 commits into pytorch:main from enh-fill-typehint-2 on Sep 19, 2022

Conversation

@vfdev-5 vfdev-5 (Collaborator) commented Sep 16, 2022

Follow-up to #6594

@vfdev-5 vfdev-5 force-pushed the enh-fill-typehint-2 branch from e54babc to a36da7d on September 16, 2022 11:26
@vfdev-5 vfdev-5 marked this pull request as ready for review September 16, 2022 11:27
@datumbox datumbox (Contributor) left a comment

Overall looks good. A few questions/suggestions but nothing major. Let me know your thoughts.

@@ -591,7 +590,23 @@ def rotate(
 def pad_image_tensor(
     img: torch.Tensor,
     padding: Union[int, List[int]],
-    fill: Optional[Union[int, float]] = 0,
+    fill: Optional[Union[int, float, List[float]]] = None,
@datumbox datumbox (Contributor):

I know this kernel is private on stable, and I agree with the change to ensure consistency. Are there any BC breakages that could creep upwards due to the change of the default value?

@vfdev-5 vfdev-5 (Collaborator, Author):
pad_image_tensor is a low-level, new API, so I think there is no BC to keep. In any case, fill=None will be transformed to fill=0 inside:

if fill is None:
    fill = 0
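
As a concrete illustration of the point above, here is a minimal sketch (a hypothetical pad_constant helper, not the torchvision kernel) of the None-to-zero normalization: with that coercion in place, fill=None and fill=0 produce identical results, so the new default should not change behaviour for existing callers.

from typing import Optional

import torch


def pad_constant(img: torch.Tensor, padding: int, fill: Optional[float] = None) -> torch.Tensor:
    # Hypothetical sketch of the None -> 0 normalization discussed above.
    if fill is None:
        fill = 0.0
    return torch.nn.functional.pad(img, [padding] * 4, mode="constant", value=fill)


img = torch.ones(1, 3, 4, 4)
assert torch.equal(pad_constant(img, 1, fill=None), pad_constant(img, 1, fill=0.0))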

Comment on lines +596 to +601
    if fill is None:
        # This is a JIT workaround
        return _pad_with_scalar_fill(img, padding, fill=None, padding_mode=padding_mode)
    elif isinstance(fill, (int, float)) or len(fill) == 1:
        fill_number = fill[0] if isinstance(fill, list) else fill
        return _pad_with_scalar_fill(img, padding, fill=fill_number, padding_mode=padding_mode)
@datumbox datumbox (Contributor):
I just wonder if we could do something like:

Suggested change
-    if fill is None:
-        # This is a JIT workaround
-        return _pad_with_scalar_fill(img, padding, fill=None, padding_mode=padding_mode)
-    elif isinstance(fill, (int, float)) or len(fill) == 1:
-        fill_number = fill[0] if isinstance(fill, list) else fill
-        return _pad_with_scalar_fill(img, padding, fill=fill_number, padding_mode=padding_mode)
+    if fill is None or isinstance(fill, (int, float)) or len(fill) == 1:
+        if fill is None:
+            fill = 0
+        fill_number = fill[0] if isinstance(fill, list) else fill
+        return _pad_with_scalar_fill(img, padding, fill=fill_number, padding_mode=padding_mode)

To avoid the duplicate dispatch to scalar. I assume that fill=None ends up becoming 0?

@vfdev-5 vfdev-5 (Collaborator, Author) commented Sep 16, 2022:

Functional tensor is already doing:

if fill is None:
    fill = 0

@vfdev-5 vfdev-5 (Collaborator, Author) commented Sep 16, 2022:

I was thinking that the code below is valid, but JIT does not agree:

    if fill is None or isinstance(fill, (int, float)) or len(fill) == 1:
        fill_number = fill[0] if isinstance(fill, list) else fill
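
For readers following the thread, here is a minimal, self-contained sketch of the branching shape the merged code keeps: the fill is None case gets a dedicated branch instead of being folded into the combined or-condition that, per the comment above, TorchScript rejected. The normalize_fill helper below is hypothetical and only illustrates the dispatch pattern; it is not the torchvision kernel.

from typing import List, Optional, Union


def normalize_fill(fill: Optional[Union[int, float, List[float]]]) -> List[float]:
    # Hypothetical helper illustrating the dispatch shape, not torchvision code.
    if fill is None:
        # Dedicated branch (the "JIT workaround"): None is handled on its own
        # rather than inside the combined `or` condition.
        return [0.0]
    elif isinstance(fill, (int, float)) or (isinstance(fill, list) and len(fill) == 1):
        # Scalar fill, possibly wrapped in a one-element list.
        fill_number = fill[0] if isinstance(fill, list) else fill
        return [float(fill_number)]
    else:
        # Per-channel fill values.
        return [float(v) for v in fill]


print(normalize_fill(None))        # [0.0]
print(normalize_fill(2))           # [2.0]
print(normalize_fill([0.5]))       # [0.5]
print(normalize_fill([0.1, 0.2]))  # [0.1, 0.2]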

def _pad_with_scalar_fill(
    img: torch.Tensor,
    padding: Union[int, List[int]],
    fill: Optional[Union[int, float]] = None,
@datumbox datumbox (Contributor):

If what I proposed above makes sense, you can avoid the optional here:

Suggested change
-    fill: Optional[Union[int, float]] = None,
+    fill: Union[int, float],

In any case, I don't think we should pass a default value of None here. We should always receive it as a positional arg, similar to what we do for _pad_with_vector_fill.

@vfdev-5 vfdev-5 (Collaborator, Author):

I can change it to fill: Union[int, float, None] and remove the default value.
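
Under that resolution, the private helper's signature would look roughly as follows (a sketch of the agreed shape rather than the exact merged diff; the padding_mode default is an assumption): fill stays nullable but becomes a required positional argument, mirroring _pad_with_vector_fill.

from typing import List, Union

import torch


def _pad_with_scalar_fill(
    img: torch.Tensor,
    padding: Union[int, List[int]],
    fill: Union[int, float, None],    # nullable, but no default value
    padding_mode: str = "constant",   # assumed default, for illustration only
) -> torch.Tensor:
    ...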

@datumbox datumbox (Contributor) commented:
@vfdev-5 The work of making F JIT-scriptable (see #6584) depends on this PR. We should consider replacing Sequences with Lists and updating _convert_fill_arg() to be JIT-scriptable.
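
On the Sequence-versus-List point: TorchScript annotations work with concrete container types such as List[float] rather than typing.Sequence, which is why the conversion helper needs to be expressed in those terms. The convert_fill_arg sketch below is hypothetical and is not the actual _convert_fill_arg implementation; it only shows what a List-based normalization of the fill argument could look like.

from typing import List, Optional, Union


def convert_fill_arg(fill: Optional[Union[int, float, List[float]]]) -> Optional[List[float]]:
    # Hypothetical sketch: normalize a user-facing fill value to Optional[List[float]],
    # using List (not Sequence) in the annotations so the helper stays within the
    # container types that TorchScript understands.
    if fill is None:
        return None
    if isinstance(fill, (int, float)):
        return [float(fill)]
    return [float(v) for v in fill]


print(convert_fill_arg(None))        # None
print(convert_fill_arg(3))           # [3.0]
print(convert_fill_arg([0.1, 0.2]))  # [0.1, 0.2]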

@pmeier pmeier (Collaborator) left a comment

Nothing stands out, so LGTM if CI is green.

@vfdev-5 vfdev-5 merged commit 2718f73 into pytorch:main Sep 19, 2022
@vfdev-5 vfdev-5 deleted the enh-fill-typehint-2 branch September 19, 2022 09:40
facebook-github-bot pushed a commit that referenced this pull request Sep 23, 2022
Updated fill arg typehint for affine, perspective and elastic ops (#6595)

Summary:
* Updated fill arg typehint for affine, perspective and elastic ops

* Updated pad op on prototype side

* Code updates

* Few other minor updates

Reviewed By: NicolasHug

Differential Revision: D39765316

fbshipit-source-id: 926c111135588bbe2193626c0a951b3de127dbfc