[quant][graphmode] Fix quantized::conv2d patterns in QuantFusion #26515


Closed
jerryzh168 wants to merge 1 commit

Conversation

@jerryzh168 (Contributor) commented on Sep 20, 2019

Stack from ghstack:

Summary:
Fix the `prepack` and `permute` patterns after recent changes
to `quantized::conv2d` and `quantized::conv2d_prepack`.

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Reviewers:
pt1quant

Differential Revision: D17502573
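
For context, QuantFusion is pattern-based: each pattern is a JIT IR string that the SubgraphRewriter matches against the graph and replaces with a fused equivalent, which is why a signature change in `quantized::conv2d`/`quantized::conv2d_prepack` breaks the match until the strings are updated. Below is a minimal sketch of that mechanism using a toy relu pattern (not the real conv2d one); the private binding `torch._C._jit_pass_custom_pattern_based_rewrite_graph` and its argument order are assumptions based on how the JIT tests drive the rewriter:

```python
import torch

def f(x):
    return torch.relu(x)

traced = torch.jit.trace(f, torch.randn(2, 2))

# Pattern and replacement are plain IR strings, like the conv2d patterns
# this PR fixes. Toy rewrite: replace relu with its in-place variant.
pattern = """
graph(%x):
    %r = aten::relu(%x)
    return (%r)"""

replacement = """
graph(%x):
    %r = aten::relu_(%x)
    return (%r)"""

# Private API; the name and the (pattern, replacement, graph) argument
# order may differ across PyTorch versions.
torch._C._jit_pass_custom_pattern_based_rewrite_graph(
    pattern, replacement, traced.graph)
print(traced.graph)  # the aten::relu node is now aten::relu_
```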

@jerryzh168 jerryzh168 requested a review from apaszke as a code owner September 20, 2019 00:42
@pytorchbot pytorchbot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Sep 20, 2019
jerryzh168 added a commit that referenced this pull request Sep 20, 2019
ghstack-source-id: b11d6b7
Pull Request resolved: #26515
@ZolotukhinM left a comment:

Looks good but please include a description of the recent changes (or a link to the corresponding PR) you are referring to in the commit message.

Inline, on this hunk of the updated pattern:

%a_perm : Tensor = aten::permute(%a_quant, %in_param)
%w_perm : Tensor = aten::permute(%w_quant, %in_param)
%w_packed = quantized::conv_prepack(%w_perm, %stride, %padding, %dilation, %groups)
%r = quantized::conv2d(%a_perm, %w_packed, %b_quant, %stride, %padding, %dilation, %groups, %r_scale, %r_zero_point)
%out_param : int[] = prim::ListConstruct(%0, %3, %1, %2)
%r_perm = aten::permute(%r, %out_param)

Do we still need a permute for results?

@jerryzh168 (Contributor, Author) replied:

I think so; it's a TODO in qconv.cpp: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/qconv.cpp#L375
We need to update the pattern after that permute is removed.
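
To make the permute round trip concrete: the pattern moves the conv input and weight to NHWC before `quantized::conv2d` and moves the result back to NCHW afterwards. A minimal, version-independent sketch follows; the index list `[0, 2, 3, 1]` for `%in_param` is an assumption (its construction is not shown in the quoted hunk), while `[0, 3, 1, 2]` for `%out_param` follows from the `ListConstruct(%0, %3, %1, %2)` above:

```python
import torch

x = torch.randn(1, 3, 8, 8)      # NCHW, the eager-mode default layout

# aten::permute(%a_quant, %in_param): NCHW -> NHWC
# (assumed %in_param = [0, 2, 3, 1]; not spelled out in the quoted hunk)
x_nhwc = x.permute(0, 2, 3, 1)

# aten::permute(%r, %out_param): NHWC -> NCHW, with %out_param built by
# ListConstruct(%0, %3, %1, %2), i.e. [0, 3, 1, 2], the inverse order.
assert torch.equal(x_nhwc.permute(0, 3, 1, 2), x)
```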

@facebook-github-bot (Contributor) commented:

This pull request has been merged in 4444b91.

mingbowan pushed a commit to mingbowan/pytorch that referenced this pull request Sep 23, 2019
Summary:
Pull Request resolved: pytorch#26515

Imported from OSS

Differential Revision: D17502573

fbshipit-source-id: 1a719fd610e8ea9dc16075abaa042556e1edbceb
@facebook-github-bot facebook-github-bot deleted the gh/jerryzh168/79/head branch October 28, 2019 22:15