
Add a test that compares the output of our quantized models against expected cached values #4502


Closed
datumbox opened this issue Sep 29, 2021 · 2 comments · Fixed by #4597
Labels: good first issue · module: models.quantization (Issues related to the quantizable/quantized models) · module: tests

Comments


datumbox commented Sep 29, 2021

🚀 The feature

Unlike our test_classification_model tests, test_quantized_classification_model doesn't check the model output against an expected value. This means that if we break a quantized model, we won't be able to detect it:

vision/test/test_models.py

Lines 676 to 717 in 2e0949e

@pytest.mark.skipif(not ('fbgemm' in torch.backends.quantized.supported_engines and
                         'qnnpack' in torch.backends.quantized.supported_engines),
                    reason="This Pytorch Build has not been built with fbgemm and qnnpack")
@pytest.mark.parametrize('model_name', get_available_quantizable_models())
def test_quantized_classification_model(model_name):
    defaults = {
        'input_shape': (1, 3, 224, 224),
        'pretrained': False,
        'quantize': True,
    }
    kwargs = {**defaults, **_model_params.get(model_name, {})}
    input_shape = kwargs.pop('input_shape')

    # First check if quantize=True provides models that can run with input data
    model = torchvision.models.quantization.__dict__[model_name](**kwargs)
    x = torch.rand(input_shape)
    model(x)

    kwargs['quantize'] = False
    for eval_mode in [True, False]:
        model = torchvision.models.quantization.__dict__[model_name](**kwargs)
        if eval_mode:
            model.eval()
            model.qconfig = torch.quantization.default_qconfig
        else:
            model.train()
            model.qconfig = torch.quantization.default_qat_qconfig

        model.fuse_model()
        if eval_mode:
            torch.quantization.prepare(model, inplace=True)
        else:
            torch.quantization.prepare_qat(model, inplace=True)
            model.eval()

        torch.quantization.convert(model, inplace=True)

    try:
        torch.jit.script(model)
    except Exception as e:
        tb = traceback.format_exc()
        raise AssertionError(f"model cannot be scripted. Traceback = {str(tb)}") from e

We should adapt the tests (add new ones, or modify/reuse existing ones) to cover this case.
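A minimal sketch of one possible approach, assuming we can reuse the _assert_expected helper that test_classification_model already relies on to compare a tensor against cached values on disk; the helper signature, the seeding, and the precision below are illustrative assumptions, not a final design:

# Illustrative sketch only: it assumes the existing helpers in test/test_models.py
# (_model_params, get_available_quantizable_models, _assert_expected) and the
# usual imports of that file (pytest, torch, torchvision).
@pytest.mark.parametrize('model_name', get_available_quantizable_models())
def test_quantized_classification_model(model_name):
    defaults = {
        'input_shape': (1, 3, 224, 224),
        'pretrained': False,
        'quantize': True,
    }
    kwargs = {**defaults, **_model_params.get(model_name, {})}
    input_shape = kwargs.pop('input_shape')

    # Seed everything so that both the randomly initialized weights and the
    # random input are reproducible across runs.
    torch.manual_seed(0)
    model = torchvision.models.quantization.__dict__[model_name](**kwargs)
    model.eval()
    x = torch.rand(input_shape)
    out = model(x)

    # Compare the output against cached expected values so that behavioural
    # regressions in the quantized modules (e.g. the Hardsigmoid/Hardswish
    # swap described below) are caught. A relatively loose precision absorbs
    # quantization noise across backends.
    _assert_expected(out, model_name, prec=0.1)
    assert out.shape[-1] == 1000

The key point of the sketch is that the comparison runs on the quantize=True path, so any change in the quantized modules (fusions, observers, activations) alters the cached output and fails the test.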

Motivation, pitch

In mobilenet_v3_large, switch the following activation from Hardsigmoid to Hardswish and run the tests.

kwargs["scale_activation"] = nn.Hardsigmoid

None of the tests will fail, but the model will be completely broken. This shows we have a massive hole in our quantization tests.

cc @pmeier

datumbox added the good first issue, module: models.quantization, and module: tests labels on Sep 29, 2021
datumbox changed the title from "Update our test_quantized_classification_model() test to compare against expected cached values" to "Add a test that compares the output of our quantized models against expected cached values" on Sep 29, 2021

jdsgomes commented Oct 5, 2021

Hi @datumbox, I am interested in taking this issue.


datumbox commented Oct 5, 2021

@jdsgomes Awesome, I assigned it to you to ensure nobody else picks it up. Happy to chat offline about the details.
