🐛 Describe the bug
The vit_b_16 model from torchvision fails to lower on the QNN backend. Graph finalization fails with the error: "aten_view_copy_default_2" generated: could not create op.
Output excerpt:
INFO:executorch.backends.qualcomm.qnn_preprocess:Visiting: aten_linear_default_48, aten.linear.default
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:219::ERROR:could not create op: q::ConvLayer.fp16.s1.tcm
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:221::ERROR:Op creation failure, op id=0x40f3900000036 (q::ConvLayer.fp16.s1.tcm) total_inputs=5
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:207: Input 0: id=[0x3c8f00000002f] op=[[email protected]] output0=[14ConcreteTensorIN5Tdefs14F16Crouton_TCMEE]
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:207: Input 1: id=[0x4137900000036] op=[[email protected].] output0=[14ConcreteTensorIN5Tdefs16pkWeightsF16_TCMEE]
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:207: Input 2: id=[0x4137a00000036] op=[[email protected].] output0=[14ConcreteTensorIN5Tdefs9Int32_TCMEE]
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:207: Input 3: id=[0x100200000000] op=[Const] output0=[12TensorSclrDTIL5DType5EE]
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:207: Input 4: id=[0x416c80000001a] op=[Const] output0=[14ConcreteTensorIN5Tdefs5Int32EE]
[ERROR] [Qnn ExecuTorch]: graph_prepare.cc:1573::ERROR:Op 0x40f3900000036 preparation failed with err:-1
[ERROR] [Qnn ExecuTorch]: <E> "aten_view_copy_default_2" generated: could not create op
[ERROR] [Qnn ExecuTorch]: <E> RouterX86 graph prepare failed 12
[ERROR] [Qnn ExecuTorch]: <E> Failed to finalize graph (id: 1) with err 1002
[ERROR] [Qnn ExecuTorch]: Failed to finalize Qnn Graph with error: 1002
[ERROR] [Qnn ExecuTorch]: Fail to compile QNN graph
This can be reproduced with the following test case command or standalone script.
python -m executorch.backends.test.suite.runner models --flow qnn --filter "test_vit_b_16_qnn_float32$"
Standalone repro:
import torch
import torchvision

from executorch.backends.qualcomm.utils.utils import (
    QcomChipset,
    generate_htp_compiler_spec,
    generate_qnn_executorch_compiler_spec,
    to_edge_transform_and_lower_to_qnn,
)

inputs = (torch.randn(1, 3, 224, 224),)
model = torchvision.models.vit_b_16().eval()

# Export itself succeeds; the failure occurs during QNN lowering below.
ep = torch.export.export(model, inputs)

# HTP backend in fp16 mode, targeting SM8650.
backend_options = generate_htp_compiler_spec(
    use_fp16=True,
)
compile_spec = generate_qnn_executorch_compiler_spec(
    soc_model=QcomChipset.SM8650,
    backend_options=backend_options,
)

# Fails to finalize the QNN graph (see log excerpt above).
model = to_edge_transform_and_lower_to_qnn(
    model,
    inputs,
    compile_spec,
).to_executorch()
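If it helps triage, the view-style ops in the exported graph can be listed with a rough sketch like the one below. Note the aten_view_copy_default_* names in the QNN log are assigned later during to_edge/decomposition, so the mapping is only approximate:

import torch
import torchvision

model = torchvision.models.vit_b_16().eval()
inputs = (torch.randn(1, 3, 224, 224),)
ep = torch.export.export(model, inputs)

# Print each view-like node and its output shape from the exported graph.
for node in ep.graph.nodes:
    if node.op == "call_function" and "view" in str(node.target):
        val = node.meta.get("val")
        print(node.name, node.target, tuple(val.shape) if hasattr(val, "shape") else None)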
Note that running the backend test case requires ExecuTorch's Python bindings to be built with the QNN backend. An example build command is below; the QNN library paths still need to be set up as described in the ExecuTorch QNN docs.
CMAKE_ARGS="-DEXECUTORCH_BUILD_QNN=ON -DQNN_SDK_ROOT=$QNN_SDK_ROOT" ./install_executorch.sh --editable
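For reference, the host-side library setup roughly amounts to pointing LD_LIBRARY_PATH at the QNN SDK's x86-64 libraries; the paths below are illustrative only (adjust to the local QNN SDK install per the ExecuTorch QNN docs):

# Illustrative paths only; see the ExecuTorch QNN docs for the exact setup.
export QNN_SDK_ROOT=/path/to/qnn-sdk
export LD_LIBRARY_PATH=$QNN_SDK_ROOT/lib/x86_64-linux-clang:$LD_LIBRARY_PATH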
Versions
Commit fbda3a9, x86-64 simulator, WSL