Make prototype F JIT-scriptable #6584
Conversation
Some early notes:
@@ -32,6 +32,31 @@ def from_pil_mode(cls, mode: str) -> ColorSpace:
        else:
            return cls.OTHER

    @staticmethod
    def from_tensor_shape(shape: List[int]) -> ColorSpace:
I think this makes more sense on the Enum than as a separate method. In any case, it was moved because JIT didn't like it as a classmethod. We don't have to keep it in the enum; we just need to have it as a standalone method.
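For illustration, a standalone version (outside the enum) might look like the following sketch. The function name and the shape convention (channels in the third-from-last dimension) are assumptions for this example, not the actual torchvision code:

```python
from enum import Enum
from typing import List


class ColorSpace(Enum):
    OTHER = 0
    GRAYSCALE = 1
    RGB = 3
    RGBA = 4


def guess_color_space_from_shape(shape: List[int]) -> ColorSpace:
    # Hypothetical standalone helper: JIT can script a plain function
    # even when it cannot script a classmethod on the enum.
    if len(shape) < 3:
        return ColorSpace.OTHER
    channels = shape[-3]
    if channels == 1:
        return ColorSpace.GRAYSCALE
    elif channels == 3:
        return ColorSpace.RGB
    elif channels == 4:
        # Only a guess: CMYK also has 4 channels but would land here.
        return ColorSpace.RGBA
    return ColorSpace.OTHER
```

The channel count alone cannot distinguish, e.g., RGBA from CMYK, hence the "guess" in the name.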
So we can't even have classmethods on objects that are not covered by JIT? 🤯
Plus, I intentionally put the term "guess" in the name, since the number of channels is not sufficient to pick the right color space. For example, CMYK also has 4 channels but would be classified as RGBA. However, this is not a problem now since we don't support it yet and maybe never will.
I think the issue is that this specific class is definitely not JIT-scriptable. JIT complains about the tensor_contents: Any = None parameter on __repr__. Perhaps we can remove this?
I'm happy to make any changes on the name, or try a different approach with JIT. Let me wrap up the rest of the kernels to see where we are, and then we can try a couple of options.
If it works as is, I wouldn't mess with this class any further. If needed, we can remove the : Any annotation though.
@pmeier I'm also really flexible. I don't have context on why tensor_contents was introduced and what it's supposed to be. Would you like to send a PR once this is merged to see if you can make it work in the original location?
A few comments inline. Otherwise LGTM if CI is green.
(branch updated from ef7130c to 60000fa)
(branch updated from aafb356 to 1e301b1)
@datumbox I'll let you review your PR once you are back :)
@vfdev-5 I might not be the best person to "review" this PR, but I can confirm that the modifications done on your side look great. It was a good idea to do most of the work needed on fill in separate PRs. I also had a look at your PRs #6595 and #6599 and the direction looks great. Keeping F JIT-scriptable is a major win; thanks a lot for your work on making it possible.
@pmeier Could you please give us an independent review on this PR?
FYI: follow-up work we should do is ensuring the signatures of F and _Feature are aligned. After merging this PR, the signatures of the dispatchers on the base class don't match (one uses Sequence and the other uses List).
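As a minimal sketch of the mismatch (function names hypothetical): TorchScript understands List[...] annotations but not Sequence[...], which is one way the two sets of signatures can drift apart.

```python
from typing import List, Optional, Sequence


def pad_kernel(fill: Optional[List[float]] = None) -> str:
    # JIT-friendly signature: TorchScript understands Optional[List[float]].
    return "fill={}".format(fill)


def pad_dispatcher(fill: Optional[Sequence[float]] = None) -> str:
    # Eager-only signature: accepts any Sequence for user convenience,
    # then converts to the List the kernel is annotated with.
    fill_ = list(fill) if fill is not None else None
    return pad_kernel(fill_)
```

Aligning the signatures would mean picking one of the two annotations for both layers.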
@@ -72,11 +72,10 @@ def _apply_image_transform(

        # Fill = 0 is not equivalent to None, https://github.com/pytorch/vision/issues/6517
        # So, we have to put fill as None if fill == 0
        fill_: Optional[Union[int, float, Sequence[int], Sequence[float]]]
        # This is due to BC with stable API which has fill = None by default
        fill_ = F._geometry._convert_fill_arg(fill)
We might want to consider making _convert_fill_arg() public in the future, as it is regularly used in Transforms as a utility method.
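If it were made public, such a helper might behave like this sketch. The name and exact semantics here are assumptions; the real _convert_fill_arg lives in F._geometry:

```python
from typing import List, Optional, Sequence, Union

FillType = Union[int, float, Sequence[int], Sequence[float], None]


def convert_fill_arg(fill: FillType) -> Optional[List[float]]:
    # Hypothetical normalization: keep None as-is (BC with the stable
    # API's fill=None default) and turn scalars/sequences into the
    # List[float] form that JIT-scriptable kernels expect.
    if fill is None:
        return None
    if isinstance(fill, (int, float)):
        return [float(fill)]
    return [float(v) for v in fill]
```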
@datumbox should they match? E.g.

Image.pad(..., fill=(1, 2, 3))
# instead of only List[float]
Image.pad(..., fill=[1.0, 2.0, 3.0])

Do we want that? If yes, I can update this PR and put all [...]

@vfdev-5 One of the original proposals was that the [...]
# Enum<__torch__.torchvision.transforms.functional.InterpolationMode> interpolation=Enum<InterpolationMode.BILINEAR>,
# Union(float[], float, int, NoneType) fill=None) -> Tensor
#
# This is probably due to the fact that F.perspective does not have the same signature as F.perspective_image_tensor
Yes, if the signatures actually diverge, then this is the issue. I'm OK disabling this here for now and looking at it later.
Still, is this something we want? Shouldn't the public kernels be in sync with the dispatcher?
I believe @vfdev-5 made this change. As discussed previously, there are other places where we need to align the signatures, and this can happen in a follow-up PR to avoid making this one too long.
Personally I think it's worth aligning the signatures unless there is a good reason not to (perhaps the interpolation default value is one exception?). I'm open to discussing this and I think we should agree on the policy soon.
@@ -9,7 +9,6 @@

class TestCommon:
    @pytest.mark.xfail(reason="dispatchers are currently not scriptable")
🎉
def test_scriptable_midlevel(midlevel):
    jit.script(midlevel)  # TODO: pass data through it
@vfdev-5 Isn't that superseded by the tests I've added?
Maybe? I noticed that at one point early in the development one of the kernels had a return type of Any and this test didn't complain. This is why, to be safe, we usually take the strategy of:
- JIT-scripting
- Passing data through the kernel and comparing it with the non-JIT version
- Serializing/deserializing the method and confirming it still returns the right value

See _check_jit_scriptable() from test_models.py for more info.
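A pure-Python analogue of the data-passing and serialization steps might look like this sketch. The real check in test_models.py uses torch.jit.script and TorchScript serialization; pickle stands in here so the sketch stays dependency-free, and the function name is hypothetical:

```python
import io
import pickle


def check_serializable_roundtrip(fn, *inputs):
    # Pass data through the op and record the expected result.
    expected = fn(*inputs)
    # Serialize/deserialize the op and confirm the restored version
    # still returns the same value.
    buffer = io.BytesIO()
    pickle.dump(fn, buffer)
    buffer.seek(0)
    restored = pickle.load(buffer)
    assert restored(*inputs) == expected
    return expected
```

Merely checking that scripting succeeds would miss issues like the Any return type mentioned above, which is why the extra steps matter.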
@datumbox in test/test_prototype_transforms_dispatchers.py the test should script and execute the midlevel op, so I think we can remove this test_scriptable_midlevel.
I'll cleanup.
@@ -252,7 +252,14 @@ def __init__(

    def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
        fill = self.fill[type(inpt)]
        return F.pad(inpt, padding=self.padding, fill=fill, padding_mode=self.padding_mode)

    # This cast does Sequence[int] -> List[int] and is required to make mypy happy
Well, mypy is not at fault here. The kernel could support Sequence[int], but we annotated List[int]. Since that is more specific, mypy is right to complain. We can always silence mypy with a # type: ignore[...] comment or use cast(...), since both have no runtime implications. Actually, casting to a list when it is not needed hurts performance, although probably not by a significant amount.
At this point, where we try to improve speed by any margin, I would prefer using ignore statements rather than casting. I don't know how negligible the introduction of cast() is, but if there is no measurable slow-down I don't mind switching.
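The trade-off under discussion, in a minimal sketch (function names hypothetical): typing.cast is a no-op at runtime, whereas actually building a list copies the data.

```python
from typing import List, Sequence, cast


def pad_kernel(padding: List[int]) -> List[int]:
    # Kernel annotated with List[int] for JIT-scriptability.
    return padding


def caller(padding: Sequence[int]) -> List[int]:
    # typing.cast only informs the type checker; at runtime it returns
    # its argument unchanged, so there is no copy and no slow-down.
    return pad_kernel(cast(List[int], padding))
```

An equivalent alternative is calling pad_kernel(padding) with a # type: ignore[arg-type] comment; both silence mypy without runtime cost, while list(padding) would allocate a new list.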
Summary:
* Improve existing low-level kernel test.
* Add new midlevel JIT-scriptability test (failing).
* Remove duplicate aliases from kernel tests.
* Fix colour kernels.
* Fix deprecated kernels.
* Fix mypy.
* Silence mypy instead of fixing to avoid a performance penalty.
* Fix augment kernels.
* Fix augment meta.
* Remove is_tracing calls.
* Add fake ImageType and DType.
* Fix type conversion kernels.
* Fix misc kernels.
* Partially fix geometry.
* Remove mutable default from `_pad_with_vector_fill()` and all other unnecessary defaults.
* Fix geometry ops.
* Fix tests.
* Remove xfail for JIT tests on midlevel ops.

Reviewed By: NicolasHug

Differential Revision: D39765297

fbshipit-source-id: 50ec9dc9d9e2f9c8dab6ab01337e01643dc0ab64

Co-authored-by: vfdev-5 <[email protected]>
Fixes #6553
Co-authored with @vfdev-5.