[quant] fix int16 quantization scale in conv weight #74665


Closed
wants to merge 11 commits into from

Conversation

terrychenism
Contributor

@terrychenism terrychenism commented Mar 24, 2022

Stack from ghstack (oldest at bottom):

Summary:
Fix the int16 quantization scale in conv weights.
Before this PR, ref_module.conv.get_quantized_weight() with the qint32 dtype would return scale = 1.0.
Fixed by adding qint32 support to the set/get weight functions of the reference modules.

Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_int16_reference_module
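The failure mode can be illustrated with a small standalone sketch (a hypothetical repro, not the PR's actual test): a weight quantized per-tensor-affine at torch.qint32 should carry the scale it was quantized with.

```python
import torch

# Hypothetical repro sketch (not the PR's test): a qint32 per-tensor
# quantized weight should round-trip its scale. The bug was that the
# reference module's get/set-weight path returned scale = 1.0 for qint32
# because the dtype was missing from its supported list.
w = torch.randn(8, 4, 3, 3)
scale, zero_point = 0.05, 0
qw = torch.quantize_per_tensor(w, scale, zero_point, torch.qint32)
print(qw.q_scale())  # the observed scale should round-trip, not reset to 1.0
```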

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D35106497

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 24, 2022


💊 CI failures summary and remediations

As of commit b3a341c (more details on the Dr. CI page):


  • 4/4 failures introduced in this PR

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / build (1/3)

Step: "Build"

fi
# Covers the case where a previous tag doesn't exist for the tree
# this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
  echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
  exit 1
fi
PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
# If no image exists but the hash is the same as the previous hash then we should error out here
if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
  echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
  echo "       contact the PyTorch team to restore the original images"
  exit 1
fi
echo ::set-output name=rebuild::yes
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
  IN_CI: 1
  IS_GHA: 1
  BASE_REVISION: c9b4a1edd9c241f0f9594cd5031f03c03f1d4d8c
  DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7:a2c09c6009bb8a10cbb45a8c5f7c573593289be0

See GitHub Actions build pull / deploy-linux-xenial-cuda11.3-py3.7-gcc7 / build (2/3)

Step: "Build"

(Same failure output and env as build 1/3 above.)

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7-bazel-test / build-and-test (3/3)

Step: "Build"

(Same failure output as build 1/3 above; the env additionally sets GIT_DEFAULT_BRANCH: master and omits DOCKER_IMAGE.)

1 failure not recognized by patterns:

Job: GitHub Actions pull / pytorch-xla-linux-bionic-py3.7-clang8 / test (xla, 1, 1, linux.2xlarge); Step: Test

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


terrychenism added a commit that referenced this pull request Mar 24, 2022
ghstack-source-id: e45e5cf
Pull Request resolved: #74665
@terrychenism terrychenism requested review from jerryzh168 and vkuzo and removed request for jbschlosser and albanD March 24, 2022 05:54
@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@vkuzo
Contributor

vkuzo commented Mar 24, 2022

fix int16 quantization scale in conv weight

In the PR summary, could we provide some context on what specifically was broken, and how you are fixing it?

data = torch.randn(*input_size, dtype=torch.float)

original_ref_m = RefM()
torch.quantization.engine = "qnnpack"
Contributor

note: not all test environments will support qnnpack. You can check for this with "qnnpack" in torch.testing._internal.common_quantized.supported_qengines, and skip the test if this is false

@@ -98,11 +98,11 @@ def _quantize_weight(
         return weight

     if weight_qscheme == torch.per_tensor_affine:
-        if weight_dtype in [torch.quint8, torch.qint8, torch.qint32]:
+        if weight_dtype in [torch.quint8, torch.qint8, torch.qint32, torch.qint32]:
Contributor

nit: torch.qint32 was already included in this list, so the new entry is a duplicate

@@ -16,7 +16,7 @@ def _init_weight_qparams(self, weight_qparams, device):
         None, torch.per_tensor_affine, torch.per_channel_affine,
         torch.per_channel_affine_float_qparams], \
         Exception(f"qscheme: {self.weight_qscheme} is not support in reference quantized {self._get_name()}")
-    if self.weight_dtype in [torch.quint8, torch.qint8, torch.quint4x2]:
+    if self.weight_dtype in [torch.quint8, torch.qint8, torch.quint4x2, torch.qint32]:
Contributor

can we add a warning on L37 in the else branch so the user can find this more easily next time?

Contributor

@jerryzh168 jerryzh168 Mar 24, 2022

I think we can add an assert; the else branch should not be executed, since it was added only for TorchScript

Contributor Author

I added the if/else back because one of the FX test cases needs torch.float, so it takes the else branch:

Contributor Author

TestQuantizeFx.test_dynamic_with_fusion_multiple_uses
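The resolution discussed in this thread can be sketched as follows (names are illustrative, not the PR's exact code): qint32 joins the quantized-dtype list, and the retained else branch covers the torch.float case exercised by the FX test above.

```python
import warnings

import torch

# Illustrative sketch, not the PR's exact code: qint32 joins the list of
# quantized weight dtypes, while the else branch is kept for plain float
# weights (hit by one of the FX test cases) and warns so the fall-through
# is easier to discover next time.
_QUANT_WEIGHT_DTYPES = (torch.quint8, torch.qint8, torch.quint4x2, torch.qint32)

def weight_is_quantized(weight_dtype):
    if weight_dtype in _QUANT_WEIGHT_DTYPES:
        return True
    else:
        warnings.warn(f"weight dtype {weight_dtype} is treated as an unquantized (float) weight")
        return False
```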

terrychenism added a commit that referenced this pull request Mar 24, 2022
ghstack-source-id: 79b8048
Pull Request resolved: #74665
@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor

@jerryzh168 jerryzh168 left a comment

looks good overall, had two inline comments, please address them before landing

@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

terrychenism added a commit that referenced this pull request Mar 25, 2022
ghstack-source-id: 8466547
Pull Request resolved: #74665
@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

terrychenism added a commit that referenced this pull request Mar 29, 2022
ghstack-source-id: 865027e
Pull Request resolved: #74665
@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

terrychenism added a commit that referenced this pull request Mar 30, 2022
ghstack-source-id: c915dd4
Pull Request resolved: #74665
@terrychenism
Contributor Author

@terrychenism has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Mar 31, 2022
Summary:
Pull Request resolved: #74665

fix int16 quantization scale in conv weight

Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_int16_reference_module

Imported from OSS

Reviewed By: mrshenli

Differential Revision: D35106497

fbshipit-source-id: 61030786d20d845ef36ea40cdacdd7dcccf12ae9
@github-actions
Contributor

Hey @terrychenism.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@facebook-github-bot facebook-github-bot deleted the gh/terrychenism/24/head branch April 3, 2022 14:16
4 participants