Add a test that compares the output of our quantized models against expected cached values #4502
Labels
good first issue
module: models.quantization
module: tests
🚀 The feature
Unlike our `test_classification_model` tests, `test_quantized_classification_model` doesn't check the model output against an expected value. This means that if we break a quantized model, we won't be able to detect it:

vision/test/test_models.py, lines 676 to 717 in 2e0949e
We should adapt the tests (add new ones, or modify/reuse existing ones) to cover this case.
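A minimal sketch of what such a check could look like. The `assert_expected` helper below is hypothetical (the real fix should probably reuse the caching mechanism that `test_classification_model` already relies on), and the model name, seed, precision, and cache path are all illustrative:

```python
import os
import torch
from torchvision import models


def assert_expected(output, name, prec=1e-3, expect_dir="test/expect"):
    # Illustrative helper: compare `output` against a tensor cached on disk,
    # creating the cache on the first run. Not torchvision's actual API.
    path = os.path.join(expect_dir, f"{name}_expect.pt")
    if not os.path.exists(path):
        os.makedirs(expect_dir, exist_ok=True)
        torch.save(output, path)
        raise AssertionError(f"No cached value found; wrote {path}. Re-run the test.")
    expected = torch.load(path)
    torch.testing.assert_close(output, expected, rtol=prec, atol=prec)


def test_quantized_classification_model(model_name="mobilenet_v3_large"):
    torch.manual_seed(0)  # fixed seed so the input (and thus the output) is reproducible
    model = models.quantization.__dict__[model_name](pretrained=False, quantize=True)
    model.eval()
    x = torch.rand(1, 3, 224, 224)
    out = model(x)
    # A silent change inside the model (e.g. a swapped activation) would
    # change `out` and make this comparison fail.
    assert_expected(out, f"quantized_{model_name}", prec=0.1)
```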
Motivation, pitch
Switch the following activation in `mobilenet_v3_large` from `Hardsigmoid` to `Hardswish` and run the tests:

vision/torchvision/models/quantization/mobilenetv3.py, line 24 in ff126ae
None of the tests will fail, but the model will be completely broken. This shows we have a massive hole in our quantization tests.
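To see why this swap breaks the outputs without tripping any existing shape or smoke check, note that the two activations disagree on almost every input (a quick demonstration, for illustration):

```python
import torch
from torch import nn

# Hardsigmoid(x) = clamp(x / 6 + 0.5, 0, 1); Hardswish(x) = x * Hardsigmoid(x).
# They disagree on almost every input, so swapping them changes the model's
# outputs everywhere -- exactly what a cached-expected-value check would catch.
x = torch.linspace(-3, 3, 7)
print(nn.Hardsigmoid()(x))  # tensor([0.0000, 0.1667, 0.3333, 0.5000, 0.6667, 0.8333, 1.0000])
print(nn.Hardswish()(x))    # tensor([-0.0000, -0.3333, -0.3333, 0.0000, 0.6667, 1.6667, 3.0000])
```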
cc @pmeier