NXP backend: Add support for depthwise and separable convolution. #11215
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11215
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 2 Cancelled Jobs as of commit be19da6 with merge base 75e4044.
NEW FAILURES - The following jobs have failed:
CANCELLED JOBS - The following jobs were cancelled. Please retry:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "module: nxp" "release notes: nxp"
assert len(nodes) == 11
assert (
    nodes[7].target.__name__ == "aten.convolution.default"
)  # Convolution not delegated.
Love these tests.
Just skimmed, let me take a close look in a day or so, esp
8e64288 to cb91593 (Compare)
Ready for review. Most of the tests are passing; the failures seem unrelated to the changes:
Digant is out for a while; I will try to find another person to review these.
Converting to draft until the NXP Backend CI is back (#11756).
I am back, I can review this coming week.
This is still a draft, as we had problems with the eIQ PyPI repository. For some reason the eIQ PyPI repository and the executorch self-hosted runners do not talk to each other [https://github.com//pull/11612]. Let's first resolve that one in #11756. Then we will mark the PR as ready for review and add you (and Digant for reference) as reviewers.
07f79c2 to 774b8ef (Compare)
@robert-kalmar is this ready now?
It is ready for review and merging now.
    return False  # Unable to split group Conv into separated convolutions because out_channels % group != 0.

# 10 is an empirical value. The `group` directly dictates how many branches will be created.
return 2 <= group <= 10
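To make the branch count concrete, here is a hypothetical sketch (not code from this PR) showing that a grouped convolution is numerically equivalent to `group` independent regular convolutions over channel slices, concatenated along the channel axis; `group` is exactly the number of branches the check above caps at 10:

```python
import torch

# Assumed example shapes: 8 input channels, 12 output channels, 4 groups.
x = torch.randn(1, 8, 16, 16)
conv = torch.nn.Conv2d(8, 12, kernel_size=3, groups=4, bias=False)

# Reference result of the grouped convolution.
ref = conv(x)

# Split input channels and weights per group and run regular convolutions.
in_per_group = 8 // 4    # input channels per group
out_per_group = 12 // 4  # output channels per group (requires out_channels % group == 0)
parts = []
for g in range(4):
    x_g = x[:, g * in_per_group:(g + 1) * in_per_group]
    w_g = conv.weight[g * out_per_group:(g + 1) * out_per_group]
    parts.append(torch.nn.functional.conv2d(x_g, w_g))
out = torch.cat(parts, dim=1)

assert torch.allclose(ref, out, atol=1e-5)
```

The `out_channels % group != 0` early return above corresponds to the case where `out_per_group` would not be an integer, so this split is impossible.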
Why limit the number of groups? Is this a HW limit?
Is there any logging in these node converters? This will be relatively opaque if it fails, since we return False for multiple different reasons.
Currently the Neutron supports only regular convolution (groups == 1) and depthwise convolution (groups == # channels).
The general group convolution can be computed by splitting it into multiple regular convolutions, based on groups. However, in the current code base we decided not to support the generic group convolution at all - https://github.com/pytorch/executorch/pull/11215/files#diff-a5d5ec3b84b15dbe487d027b8a993cc6bb5795aef06e3589150bb2195c35b1b1R73 ; only the regular and depthwise cases are allowed for delegation.
AI: I will add a more descriptive comment, instead of just "not supported".
The limit here is rather an empirical value to not allow more than 10 convolutions, as that is harder to debug. It is more of a provision for future enablement.
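The delegation rule described above can be sketched as a small predicate. This is a hypothetical illustration (the function name and signature are not from the PR), covering only the two cases the comment says Neutron supports:

```python
# Sketch of the delegation rule: only regular (groups == 1) and depthwise
# (groups == in_channels) convolutions are accepted; generic group
# convolutions are rejected. Names here are illustrative, not the PR's API.
def conv_is_delegatable(groups: int, in_channels: int, out_channels: int) -> bool:
    if groups == 1:
        return True  # regular convolution, supported natively
    if groups == in_channels and out_channels % in_channels == 0:
        return True  # depthwise convolution (integer channel multiplier)
    return False  # generic group convolution: not delegated


print(conv_is_delegatable(1, 8, 16))  # regular conv
print(conv_is_delegatable(8, 8, 8))   # depthwise conv
print(conv_is_delegatable(4, 8, 12))  # generic group conv
```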
774b8ef to 9812a45 (Compare)
The `group` attribute of the `aten.convolution` has an effect on how the weights are used in the computation of the convolution. To properly utilize Neutron, the convolution is sometimes converted to `DepthwiseConv2D`.
Neutron only supports batch == 1.
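As an illustration of the depthwise case mentioned above (a sketch, not code from this PR): with `groups == in_channels`, each input channel is filtered independently, which is exactly the computation a `DepthwiseConv2D` kernel performs. The example keeps batch == 1, matching the Neutron constraint:

```python
import torch

# Depthwise convolution: groups == in_channels, batch == 1.
x = torch.randn(1, 4, 8, 8)
dw = torch.nn.Conv2d(4, 4, kernel_size=3, groups=4, bias=False, padding=1)
y = dw(x)

# Equivalent per-channel computation with regular 1-channel convolutions:
# each input channel is convolved with its own (1, 1, 3, 3) filter.
parts = [
    torch.nn.functional.conv2d(x[:, c:c + 1], dw.weight[c:c + 1], padding=1)
    for c in range(4)
]
assert torch.allclose(y, torch.cat(parts, dim=1), atol=1e-5)
```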
9812a45 to be19da6 (Compare)
# Not supported natively by the eIQ Neutron, so the Group Separable Convolution
# is in practice computed by splitting it into multiple Pointwise
# Convolutions, using the Split and Concat operations. The Concat operation in Neutron Converter
# SDK 25.03 requires the # of channels to be a multiple of the # of MAC units in the eIQ Neutron.
Suggested change:
- # SDK 25.03 requires the # of channels to be multipy of # of MAC units in the eIQ Neutron.
+ # SDK 25.03 requires the # of channels to be multiple of # of MAC units in the eIQ Neutron.
Summary
Adding depthwise and separable convolution support for the NXP backend.
Test plan
Functionality tested by Python unit tests:
cc @digantdesai @JakeStevens , @Pop-korn