
Add typing annotations to models/quantization #4232


Merged: 28 commits into pytorch:main on Aug 31, 2021

Conversation

@oke-aditya (Contributor)

Following up on #2025, this PR adds missing typing annotations in models/quantization.

Any feedback is welcome!

@oke-aditya (Contributor, Author) left a comment

I have a few doubts, which I have posted inline.
Let me know and I will change accordingly.

This needs a review @pmeier @frgfm

Edit: CI is all green 🤞

@frgfm (Contributor) left a comment

Thanks for the PR!
I have to ask, though: have you run mypy over this to check that it works? Perhaps focus on the parts where you are almost certain about the typing; otherwise it takes reviewers a really long time to go over your work 😅

My review isn't finished, but I'm already getting the feeling that this should be split into different PRs, given the volume of modifications. But @pmeier is the authority on this matter :)

@oke-aditya (Contributor, Author) left a comment

Can someone have another look? cc @frgfm @pmeier

@frgfm mentioned this pull request on Jul 31, 2021
@oke-aditya (Contributor, Author) left a comment

It would be nice if someone could review! @frgfm or @fmassa.

I'm not sure what to do about the __init__ error log that mypy posts in the CI build.
At the moment I think we either need to rewrite the __init__ or ignore the error.

Let me know your thoughts 😃

@@ -1,11 +1,15 @@
import torch
import torch.nn as nn
from torch import Tensor
from typing import Any

@oke-aditya (Contributor, Author) commented on this diff

I had to do a slight rewrite to make mypy happy. Let me know if this is fine.

@oke-aditya (Contributor, Author) commented on Aug 5, 2021

Here are the errors and which ones are being suppressed.

torchvision/models/quantization/utils.py:31: error: Incompatible types in assignment (expression has type "QConfig", variable has type "Union[Tensor, Module]")  [assignment]
            model.qconfig = torch.quantization.QConfig(
                            ^
torchvision/models/quantization/utils.py:35: error: "Tensor" not callable  [operator]
model.fuse_model()
        ^

The cat.cat errors

torchvision/models/quantization/shufflenetv2.py:33: error: Argument 1 to "cat" of "FloatFunctional" has incompatible type "Tuple[Tensor, Any]"; expected "List[Tensor]"  [arg-type]
                out = self.cat.cat((x1, self.branch2(x2)), dim=1)
                                    ^
torchvision/models/quantization/shufflenetv2.py:35: error: Argument 1 to "cat" of "FloatFunctional" has incompatible type "Tuple[Any, Any]"; expected "List[Tensor]"  [arg-type]
                out = self.cat.cat((self.branch1(x), self.branch2(x)), dim=1)
                                    ^

The __init__ errors; these probably have a nice workaround

torchvision/models/quantization/inception.py:111: error: "__init__" of "InceptionA" gets multiple values for keyword argument "conv_block"  [misc]
            super(QuantizableInceptionA, self).__init__(conv_block=QuantizableBasicConv2d, *args, **kwargs)
            ^

torchvision/models/quantization/inception.py:184: error: "__init__" of "InceptionAux" gets multiple values for keyword argument "conv_block"  [misc]
            super(QuantizableInceptionAux, self).__init__(conv_block=QuantizableBasicConv2d, *args, **kwargs)
            ^

The error from assigning a model layer to None

torchvision/models/quantization/googlenet.py:81: error: Incompatible types in assignment (expression has type "None", variable has type Module)  [assignment]
                model.aux1 = None
                             ^

For now I have skipped these errors (except the __init__ errors) by specifying the return type.

@frgfm can you have a look?
Once we solve the above issues (either by telling mypy to ignore them or by fixing them), the PR should be good to go.

@pmeier (Collaborator) left a comment

Hey @oke-aditya, and sorry for the delay. Apart from your questions, I've added two more comments inline.

torchvision/models/quantization/utils.py:31: error: Incompatible types in assignment (expression has type "QConfig", variable has type "Union[Tensor, Module]")  [assignment]
            model.qconfig = torch.quantization.QConfig(
                            ^

The ignore is justified, because we are adding an attribute to a live object and mypy can't deal with that.
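For reference, a minimal sketch of what such an ignore looks like at that call site (quantize_model_sketch is a hypothetical stand-in, and the QConfig arguments here are placeholders rather than necessarily the exact ones used in utils.py):

import torch
import torch.nn as nn


def quantize_model_sketch(model: nn.Module) -> None:
    # qconfig is an attribute added to the live module; mypy only knows
    # Union[Tensor, Module] for nn.Module attributes, hence the ignore.
    model.qconfig = torch.quantization.QConfig(  # type: ignore[assignment]
        activation=torch.quantization.default_observer,
        weight=torch.quantization.default_weight_observer,
    )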

torchvision/models/quantization/utils.py:35: error: "Tensor" not callable  [operator]
model.fuse_model()
        ^

mypy complains because it correctly can't find a callable fuse_model method on nn.Module. Thus, we need to make the input type annotation more precise. I'm guessing we only accept the quantized models that we have built in? If so, we could create a custom mixin class (for example QuantizedModuleMixin) that also inherits from nn.Module and adds fuse_model as an abstract method. All our quantizable models could inherit from it, making fuse_model also available for type checking.
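A minimal sketch of that mixin idea (QuantizedModuleMixin is only a name proposed in this comment, not an existing torchvision class, and the helper below is likewise hypothetical):

import torch.nn as nn


class QuantizedModuleMixin(nn.Module):
    # Stand-in for an abstract method: every built-in quantizable model
    # would override this with its actual fusing logic.
    def fuse_model(self) -> None:
        raise NotImplementedError


def fuse_sketch(model: QuantizedModuleMixin) -> None:
    # With the narrower annotation, mypy can verify that fuse_model exists
    # and is callable.
    model.fuse_model()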

The cat.cat errors

torchvision/models/quantization/shufflenetv2.py:33: error: Argument 1 to "cat" of "FloatFunctional" has incompatible type "Tuple[Tensor, Any]"; expected "List[Tensor]"  [arg-type]
                out = self.cat.cat((x1, self.branch2(x2)), dim=1)
                                    ^
torchvision/models/quantization/shufflenetv2.py:35: error: Argument 1 to "cat" of "FloatFunctional" has incompatible type "Tuple[Any, Any]"; expected "List[Tensor]"  [arg-type]
                out = self.cat.cat((self.branch1(x), self.branch2(x)), dim=1)
                                    ^

There are actually two errors here:

  1. We are passing a tuple when a list is expected. We should be able to replace the parentheses with brackets: (x1, self.branch2(x2)) becomes [x1, self.branch2(x2)].
  2. self.branch*() has no proper return type, so we need to cast ourselves: self.branch2(x2) becomes cast(torch.Tensor, self.branch2(x2)). See the sketch after this list.
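Putting both fixes together, a sketch of the corrected call site (the module below is a simplified stand-in, not the real QuantizableInvertedResidual):

from typing import cast

import torch
import torch.nn as nn
from torch import Tensor


class CatExample(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.branch2 = nn.Identity()
        self.cat = torch.nn.quantized.FloatFunctional()

    def forward(self, x1: Tensor, x2: Tensor) -> Tensor:
        # A list instead of a tuple, plus a cast because calling a submodule
        # is typed as returning Any.
        return self.cat.cat([x1, cast(Tensor, self.branch2(x2))], dim=1)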

The __init__ errors; these probably have a nice workaround

torchvision/models/quantization/inception.py:111: error: "__init__" of "InceptionA" gets multiple values for keyword argument "conv_block"  [misc]
            super(QuantizableInceptionA, self).__init__(conv_block=QuantizableBasicConv2d, *args, **kwargs)
            ^

torchvision/models/quantization/inception.py:184: error: "__init__" of "InceptionAux" gets multiple values for keyword argument "conv_block"  [misc]
            super(QuantizableInceptionAux, self).__init__(conv_block=QuantizableBasicConv2d, *args, **kwargs)
            ^

I don't think so, because we currently make the hard assumption that no one passes conv_block as a keyword argument. If someone did, we would get an error at runtime, because we also pass it.

If we insist on using **kwargs here, I'm afraid there is no clean way to do it. One workaround would be to add a conv_block: None = None before the **kwargs and bail out if we encounter anything other than None; see the sketch below. cc @fmassa @datumbox
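A rough sketch of that workaround for one of the affected classes (the class name QuantizableInceptionASketch and the error message are illustrative; the real signatures follow the error output above, and this is not the change made in this PR):

from typing import Any

from torchvision.models.inception import InceptionA
from torchvision.models.quantization.inception import QuantizableBasicConv2d


class QuantizableInceptionASketch(InceptionA):
    def __init__(self, *args: Any, conv_block: None = None, **kwargs: Any) -> None:
        # Bail out if a caller tries to override conv_block; it is set internally.
        if conv_block is not None:
            raise ValueError("conv_block cannot be overridden for the quantizable variant")
        super().__init__(*args, conv_block=QuantizableBasicConv2d, **kwargs)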

Note that there are a lot more of these in the CI failure.

The error from assigning a model layer to None

torchvision/models/quantization/googlenet.py:81: error: Incompatible types in assignment (expression has type "None", variable has type Module)  [assignment]
                model.aux1 = None
                             ^

Probably fine, because we do the same thing in the regular GoogLeNet.

model.aux1 = None # type: ignore[assignment]
model.aux2 = None # type: ignore[assignment]

Not sure if this is a jit issue, but maybe we can type them as Optional[nn.Module] in the first place?
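For illustration, a sketch of that alternative (GoogLeNetSketch is a simplified stand-in for the real GoogLeNet; only the aux attributes matter here):

from typing import Optional

import torch.nn as nn


class GoogLeNetSketch(nn.Module):
    # Declaring the aux heads as Optional lets mypy accept a later None assignment.
    aux1: Optional[nn.Module]
    aux2: Optional[nn.Module]

    def __init__(self, aux_logits: bool = True) -> None:
        super().__init__()
        self.aux1 = nn.Linear(768, 10) if aux_logits else None
        self.aux2 = nn.Linear(768, 10) if aux_logits else None


model = GoogLeNetSketch(aux_logits=True)
# Disabling the aux heads no longer needs a type: ignore comment:
model.aux1 = None
model.aux2 = None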

@oke-aditya (Contributor, Author) commented

Hi Philip!

Thanks a lot for the detailed analysis! Such work teaches a newbie developer like me a lot. 🙏
For now I'm fixing up the annotations by adding quotes.

As you said, there isn't a very neat workaround for the __init__ errors, so we can probably take that up in a new PR after some discussion.

Hence, I think we can suppress the batch of __init__ errors we are getting in CI.

Let me know the next steps 😄

@pmeier (Collaborator) commented on Aug 17, 2021

As you said, there isn't a very neat workaround for the __init__ errors, so we can probably take that up in a new PR after some discussion.

Hence, I think we can suppress the batch of __init__ errors we are getting in CI.

That is fair in the scope of this PR. Can you open an issue afterwards so we can track this? Maybe also add a quick # TODO comment in the code linking to my comment above.

The same (ignore + issue) holds for the fuse_model() part detailed above.

Let me know the next steps 😄

The cat error seems to be fixable without a larger refactoring. Could you try my suggestion and see if it passes CI? After that the PR is good to go!

@oke-aditya requested a review from @pmeier on August 17, 2021
@oke-aditya (Contributor, Author) commented on Aug 17, 2021

Hi @pmeier and @datumbox,
The cat.cat issue is fixed by using a list instead of a tuple, and the remaining __init__ errors are silenced with a # TODO.

So I think this is good to go! I will open the issue per Philip's comments once this is merged.

@pmeier (Collaborator) left a comment

LGTM, thanks a lot @oke-aditya! I've cleaned up the # TODO comments to always sit directly above the def __init__ line. Plus, I've added one to the .fuse_model() call.

@pmeier requested a review from @datumbox on August 18, 2021
@oke-aditya (Contributor, Author) commented

Just pinging for visibility @datumbox (he was probably busy with other PRs 😃)

@pmeier (Collaborator) commented on Aug 20, 2021

Hey @oke-aditya, Vasilis' review will be delayed until next week. Unfortunately, no one besides me has time to review right now, so we'll have to wait. Sorry for that.

@oke-aditya (Contributor, Author) commented

No problem, Philip! 🙂

@oke-aditya (Contributor, Author) commented on Aug 28, 2021

Hmm, unsure if this is appropriate, but just re-tagging @datumbox.

Edit: Sorry, I pinged on the weekend. :(

@datumbox (Contributor) left a comment

@oke-aditya Apologies for the late response. The addition of EfficientNets took most of my time last week. I think we are good to merge this one. I'll wait for the tests to go green.

Concerning the valid remarks that @pmeier raised for conv_block, I believe we can consider this internal-only for now. There are some new developments on quantization, so we might revisit how we do it.

@datumbox merged commit 0725ccc into pytorch:main on Aug 31, 2021
@oke-aditya deleted the add_typing3 branch on Aug 31, 2021
facebook-github-bot pushed a commit that referenced this pull request Sep 9, 2021
Summary:
* fix

* add typings

* fixup some more types

* Type more

* remove mypy ignore

* add missing typings

* fix a few mypy errors

* fix mypy errors

* fix mypy

* ignore types

* fixup annotation

* fix remaining types

* cleanup #TODO comments

Reviewed By: fmassa

Differential Revision: D30793343

fbshipit-source-id: 0448f6f24f406827abc9e1825489c786b6f0eb11

Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
@NicolasHug added the code quality and module: models.quantization labels on Sep 29, 2021
Labels: cla signed, code quality, module: models.quantization