[reland] Unify Quantization APIs for add, pool and relu #26586


Closed
wants to merge 4 commits

Conversation

supriyar (Contributor) commented Sep 21, 2019

Stack from ghstack:

Summary:

Use the backend engine flag to call QNNPACK for quantized ops.

Test Plan:
python test/test_quantized.py TestQNNPACKOps

Differential Revision: D17515129
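
For context, here is a minimal sketch (not code from this PR) of how the backend engine flag and the unified quantized add/pool/relu ops might be exercised; the shapes and quantization parameters are made up for illustration, and the snippet uses present-day op names:

```python
import torch

# Select the QNNPACK backend if this build provides it (assumption: otherwise
# keep whatever default engine the build ships with).
if 'qnnpack' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'qnnpack'

scale, zero_point = 0.05, 128  # illustrative quantization parameters
x = torch.randn(1, 2, 4, 4)
y = torch.randn(1, 2, 4, 4)
qx = torch.quantize_per_tensor(x, scale, zero_point, torch.quint8)
qy = torch.quantize_per_tensor(y, scale, zero_point, torch.quint8)

# The same op names dispatch to whichever engine is selected above.
q_add  = torch.ops.quantized.add(qx, qy, scale, zero_point)            # quantized add
q_relu = torch.relu(qx)                                                 # quantized relu
q_pool = torch.nn.quantized.functional.max_pool2d(qx, kernel_size=2)   # quantized pooling

print(q_add.dtype, q_relu.dtype, q_pool.dtype)  # all torch.quint8
```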

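The test plan invokes TestQNNPACKOps from test/test_quantized.py. Purely as a hypothetical illustration of the kind of check such a test performs (this is not the actual test code), a comparison of the QNNPACK path against a dequantized reference could look like:

```python
import unittest
import torch

class QNNPACKAddSketch(unittest.TestCase):
    @unittest.skipUnless('qnnpack' in torch.backends.quantized.supported_engines,
                         "QNNPACK backend not available in this build")
    def test_qadd_matches_reference(self):
        previous = torch.backends.quantized.engine
        torch.backends.quantized.engine = 'qnnpack'
        try:
            scale, zp = 0.1, 0  # illustrative quantization parameters
            a, b = torch.rand(16, 16), torch.rand(16, 16)
            qa = torch.quantize_per_tensor(a, scale, zp, torch.quint8)
            qb = torch.quantize_per_tensor(b, scale, zp, torch.quint8)
            qc = torch.ops.quantized.add(qa, qb, scale, zp)
            # Reference: add in float on the dequantized values, then requantize.
            ref = torch.quantize_per_tensor(qa.dequantize() + qb.dequantize(),
                                            scale, zp, torch.quint8)
            # Allow off-by-one differences from backend rounding.
            diff = (qc.int_repr().int() - ref.int_repr().int()).abs().max()
            self.assertLessEqual(diff.item(), 1)
        finally:
            torch.backends.quantized.engine = previous

if __name__ == "__main__":
    unittest.main()
```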
@pytorchbot added the module: internals, module: operators, and oncall: quantization labels Sep 21, 2019
supriyar requested a review from jerryzh168 September 21, 2019 01:21
jerryzh168 (Contributor) left a comment


LGTM

supriyar added a commit that referenced this pull request Sep 21, 2019
ghstack-source-id: 1a51a86
Pull Request resolved: #26586
supriyar added a commit that referenced this pull request Sep 21, 2019
ghstack-source-id: aa6929a
Pull Request resolved: #26586
supriyar (Contributor, Author)

@pytorchbot retest this please

1 similar comment

supriyar added a commit that referenced this pull request Sep 21, 2019
ghstack-source-id: 763b69d
Pull Request resolved: #26586
zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 21, 2019
Summary:
Pull Request resolved: pytorch/pytorch#26586

Use the backend engine flag to call QNNPACK for quantized ops.

Test Plan: python test/test_quantized.py TestQNNPACKOps

Differential Revision: D17515129

Pulled By: supriyar

fbshipit-source-id: 951e90205aa19581ea006a91d9514fc7a94409ef
facebook-github-bot (Contributor)

@supriyar merged this pull request in 99226cd.

mingbowan pushed a commit to mingbowan/pytorch that referenced this pull request Sep 23, 2019
Summary:
Pull Request resolved: pytorch#26586

Use the backend engine flag to call QNNPACK for quantized ops.

Test Plan: python test/test_quantized.py TestQNNPACKOps

Differential Revision: D17515129

Pulled By: supriyar

fbshipit-source-id: 951e90205aa19581ea006a91d9514fc7a94409ef
facebook-github-bot deleted the gh/supriyar/19/head branch October 28, 2019 22:20
Labels: Merged · module: internals (Related to internal abstractions in c10 and ATen) · oncall: quantization (Quantization support in PyTorch)
5 participants