
Commit 0a9cd6d

datumbox authored and facebook-github-bot committed
Removes unnecessary no_pretrained_model from test_quantize_fx.py (#67836)
Summary: TorchVision accidentally included model builders for quantized models that ship without weights; this was an old bug. The builders were largely unusable and caused issues for users, so they were commonly filtered out to avoid problems. We recently fixed this (pytorch/vision#4854) by either removing the unnecessary builders or providing quantized weights. This PR removes the no-longer-necessary filtering of those methods. **It should be merged after TorchVision is synced on FBCode.**

Pull Request resolved: #67836

Reviewed By: jerryzh168

Differential Revision: D32230658

Pulled By: datumbox

fbshipit-source-id: 01cd425b1bda3b4591a25840593b3b5dde3a0f12
1 parent f9422e1 commit 0a9cd6d
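
As a rough illustration of the issue described in the summary (this snippet is not part of the PR; it assumes the torchvision.models.quantization API of that era, with pretrained and quantize keyword arguments):

import torchvision.models.quantization as quantized_models

# A builder with published quantized weights works end to end.
model = quantized_models.shufflenet_v2_x1_0(pretrained=True, quantize=True)

# Builders that were accidentally exposed without quantized weights
# (for example shufflenet_v2_x1_5 before pytorch/vision#4854) could not
# provide pretrained quantized weights, which is why tests such as
# test_quantize_fx.py filtered them out explicitly.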

File tree

1 file changed: +1, -3 lines


test/quantization/fx/test_quantize_fx.py

Lines changed: 1 addition & 3 deletions
@@ -5605,11 +5605,9 @@ def get_available_classification_models(models):
         model_list = get_available_classification_models(models)
         quantized_model_list = get_available_classification_models(quantized_models)
 
-        no_pretrained_model = set(['shufflenet_v2_x0_5', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0'])
-        quantized_model_list = set(quantized_model_list) - no_pretrained_model
+        quantized_model_list = set(quantized_model_list)
         # test eager and graph consistency
         model_list = quantized_model_list
-        model_list = set(model_list)
         # mobilenet/inception_v3/googlenet qat is not working due to AdaptiveAveragePool qat
         # we might observe the output of AdaptiveAveragePool in the future
         # and re-enable the test
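
For context, the hunk header above references get_available_classification_models. A minimal sketch of what such a helper can look like, assuming it simply enumerates the public lowercase callable builders of a torchvision models namespace (the real helper is defined earlier in test_quantize_fx.py and may differ in detail):

def get_available_classification_models(models):
    # Collect builder functions such as resnet18 or mobilenet_v2 from a
    # torchvision models namespace, skipping private and CamelCase names.
    return [
        name
        for name, obj in models.__dict__.items()
        if callable(obj) and not name.startswith("_") and name[0].islower()
    ]

With the TorchVision fix in place, the list returned for torchvision.models.quantization no longer contains weight-less builders, so the extra filtering removed in this diff is redundant.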
