[EXIR] Register _clone_dim_order op and map aten.clone #12971

Open · wants to merge 16 commits into base: main

Conversation

keyprocedure
Contributor

@keyprocedure keyprocedure commented Jul 29, 2025

Summary

This is PR 2 of 3 implementing a dim order aware clone op.

This PR registers the new _clone_dim_order op and maps aten.clone ops to dim_order_ops._clone_dim_order during export to preserve memory layout changes (contiguous/channels_last).

Related PRs:

  • PR 1: #12974 - Add _clone_dim_order portable kernel
  • PR 3: #12976 - Update RemoveCloneOpsTransform to be dim order aware

Fixes #12645

Test plan

  • Operator-level tests to verify kernel behavior for layout preservation and changes.
  • Graph-level checks to confirm that the clone mapping occurs (a sketch follows below).
  • End-to-end tests to validate that functional clone behavior is unchanged.

All tests pass via:
python -m unittest exir.tests.test_memory_format_ops_pass
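For illustration, a minimal graph-level check might look like the sketch below. This is not the PR's test code; the module, shapes, and string matching are assumptions, and the real tests go through memory_format_test_runner.

```python
# Minimal sketch, assuming the standard torch.export + to_edge flow.
import torch
from executorch.exir import to_edge


class CloneToChannelsLast(torch.nn.Module):
    def forward(self, x):
        # A layout-changing clone, which the pass should map to
        # dim_order_ops._clone_dim_order instead of dropping.
        return x.clone(memory_format=torch.channels_last)


ep = torch.export.export(CloneToChannelsLast(), (torch.randn(1, 3, 8, 8),))
edge = to_edge(ep)

targets = [
    str(node.target)
    for node in edge.exported_program().graph.nodes
    if node.op == "call_function"
]
assert not any("aten.clone" in t for t in targets)
assert any("_clone_dim_order" in t for t in targets)
```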

pytorch-bot bot commented Jul 29, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12971

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures, 1 Unrelated Failure

As of commit fe7dd11 with merge base 85e5b6e:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Jul 29, 2025
Contributor

@Gasoonjia Gasoonjia left a comment

LGTM! Left minor feedback about test coverage.

@@ -389,3 +390,17 @@ def test_mobilenet_v3_xnnpack(self) -> None:
rtol=1e-3,
),
)

def test_op_clone_dim_order_registration(self):
Contributor

Can you follow the approach of the other test cases (e.g., test_op_dim_order_propagation) and reuse the memory_format_test_runner function to test at least two different scenarios: one that maintains the dim order and one that converts it? (A rough sketch of the two scenarios follows below.)
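To spell out the two scenarios in plain PyTorch terms (a sketch only; the actual tests would drive modules like these through memory_format_test_runner and to_edge rather than eager calls):

```python
import torch

x = torch.randn(2, 3, 4, 5)  # contiguous NCHW input

# Scenario 1: clone that maintains dim order.
same = x.clone()
assert same.is_contiguous()

# Scenario 2: clone that converts dim order (contiguous -> channels_last).
converted = x.clone(memory_format=torch.channels_last)
assert converted.is_contiguous(memory_format=torch.channels_last)
```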

Contributor Author

I can definitely add more tests; the reason I didn't here is that the kernel implementation isn't in place for this PR. I'll work on adding additional coverage.

@keyprocedure
Contributor Author

@pytorchbot label "release notes: none"

@pytorch-bot pytorch-bot bot added the release notes: none label Jul 29, 2025
@keyprocedure
Contributor Author

@Gasoonjia splitting the clone_dim_order op implementation into 3 PRs was a great suggestion; it makes it much easier to isolate and debug issues.

The CI failures are mainly due to two issues:

  1. clone is now mapped to clone_dim_order, so tests expecting executorch_exir_dialects_edge__ops_aten_clone_default are failing because they now see executorch_exir_dialects_edge__ops_dim_order_ops__clone_dim_order_default.
  2. Because of the first issue, the lack of a kernel implementation for clone_dim_order in this PR is also triggering failures.

Do you have any suggestions for how we should handle the current approach of mapping clone to clone_dim_order?

@Gasoonjia
Contributor

Hi @keyprocedure, thanks for your update!

For the places that check executorch_exir_dialects_edge__ops_aten_clone_default, please update them to check executorch_exir_dialects_edge__ops_dim_order_ops__clone_dim_order_default, as long as compile_config.skip_dim_order == False in their pipeline. (A small sketch follows after this comment.)

> the lack of a kernel implementation for clone_dim_order

Can you share more insight here? I thought you had created the clone_dim_order kernel on the AOT side by reusing the ATen clone op.
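A minimal sketch of how a test could pick the expected edge-op string from that flag (the helper name is hypothetical and not part of the PR):

```python
# Hypothetical helper: choose the expected clone edge-op string based on
# whether dim order ops are skipped in the edge compile config.
def expected_clone_edge_op(skip_dim_order: bool) -> str:
    if skip_dim_order:
        return "executorch_exir_dialects_edge__ops_aten_clone_default"
    return "executorch_exir_dialects_edge__ops_dim_order_ops__clone_dim_order_default"
```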

@keyprocedure
Contributor Author

keyprocedure commented Jul 31, 2025

@Gasoonjia during export, _clone_dim_order does use ATen's clone implementation via the Python fallback, but at runtime the graph contains _clone_dim_order, and since no kernel is registered for it, execution fails with:
[operator_registry.cpp:256] kernel 'dim_order_ops::_clone_dim_order.out' not found.

This is also why I placed the tests using memory_format_test_runner in the kernel PR instead of this one.

What are your thoughts?

@Gasoonjia
Contributor

@keyprocedure thanks for the detailed reply!

Let's swap the order of the PRs: first finish the runtime op, then the AOT replacement, and finally RemoveCloneOpsTransform. That would solve the runtime operator requirement issue.

@keyprocedure
Contributor Author

> @keyprocedure thanks for the detailed reply!
>
> Let's swap the order of the PRs: first finish the runtime op, then the AOT replacement, and finally RemoveCloneOpsTransform. That would solve the runtime operator requirement issue.

That's a great idea! Should I move the tests from the runtime/kernel PR (since the new op won't be registered yet) to the AOT replacement PR?

@Gasoonjia
Contributor

Gasoonjia commented Jul 31, 2025

> @keyprocedure thanks for the detailed reply!
> Let's swap the order of the PRs: first finish the runtime op, then the AOT replacement, and finally RemoveCloneOpsTransform. That would solve the runtime operator requirement issue.
>
> That's a great idea! Should I move the tests from the runtime/kernel PR (since the new op won't be registered yet) to the AOT replacement PR?

Yes please! More specifically:

The runtime PR should only contain:

  1. the portable clone_dim_order op
  2. the operator tests themselves

The AOT PR should contain both the graph-level checks and the end-to-end test using pybindings. (A rough sketch of such an end-to-end test follows below.)
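For illustration, an end-to-end test via the Python bindings could look roughly like this. The entry points shown (_load_for_executorch_from_buffer from executorch.extension.pybindings.portable_lib) and the example module are assumptions for the sketch, not the PR's test code:

```python
import torch
from executorch.exir import to_edge
from executorch.extension.pybindings.portable_lib import (
    _load_for_executorch_from_buffer,
)


class CloneToChannelsLast(torch.nn.Module):
    def forward(self, x):
        return x.clone(memory_format=torch.channels_last)


module = CloneToChannelsLast()
sample = (torch.randn(1, 3, 8, 8),)

# Export, lower to ExecuTorch, and load the flatbuffer through the pybindings.
executorch_program = to_edge(torch.export.export(module, sample)).to_executorch()
loaded = _load_for_executorch_from_buffer(executorch_program.buffer)

# Functional clone behavior should match eager execution.
(runtime_out,) = loaded.forward(sample)
torch.testing.assert_close(runtime_out, module(*sample))
```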

@keyprocedure keyprocedure changed the title [EXIR] Add dim order aware clone operator [EXIR] Map clone to _clone_dim_order Aug 3, 2025
@keyprocedure
Contributor Author

keyprocedure commented Aug 4, 2025

Hi @Gasoonjia

I’ve updated the PRs; the kernel PR is ready for review.

The CI failures from this PR that weren’t due to the missing _clone_dim_order kernel are all ARM-related runtime failures:

  • Expected to not find "torch.ops.higher_order.executorch_call_delegate" but found it
  • Expected to find "torch.ops.higher_order.executorch_call_delegate" but did not find it
  • Expected out tensor to have dtype signed char, but got float instead
  • Expected to find "executorch_exir_dialects_edge__ops_aten_clone_default" but did not find it

It seems like these failures share the same root cause: aten.clone being replaced by _clone_dim_order. But since the failures are spread across quite a few tests, I'm not sure which strategy we should use to address this.
Does it make sense to try to correct each failing test individually, or is there a way to gate the ARM backend so it still uses the aten.clone op?

Any insights or guidance would be helpful.

@Gasoonjia
Contributor

Gasoonjia commented Aug 4, 2025

@keyprocedure
Thanks for sharing the error messages!
I've left some comments on the kernel PR, please take a look.

For the issue in this PR, can you share a code pointer to the problem? Which lines are raising these errors?

@keyprocedure
Contributor Author

There were a total of 49 failing tests for the initial CI run, so I listed out the test and line number of each failure:

unittest-arm-backend-with-no-fvp (test_pytest_ops) / linux-job

19 failures

backends/arm/test/models/stable_diffusion/test_T5EncoderModel.py:100:
    TestT5EncoderModel.test_T5EncoderModel_tosa_BI

"AssertionError: Invalid partition, found dependency cycles"
backends/arm/test/models/stable_diffusion/test_T5EncoderModel.py:80:
    TestT5EncoderModel.test_T5EncoderModel_tosa_MI

"IndexError: list index out of range"
backends/arm/test/models/stable_diffusion/test_CLIPTextModelWithProjection.py:76:
    TestCLIPTextModelWithProjection.test_CLIPTextModelWithProjection_tosa_MI

backends/arm/test/models/stable_diffusion/test_SD3Transformer2DModel.py:128:
    TestSD3Transformer2DModel.test_SD3Transformer2DModel_tosa_BI

backends/arm/test/models/stable_diffusion/test_SD3Transformer2DModel.py:105:
    TestSD3Transformer2DModel.test_SD3Transformer2DModel_tosa_MI

backends/arm/test/models/stable_diffusion/test_vae_AutoencoderKL.py:74:
    TestAutoencoderKL.test_AutoencoderKL_tosa_BI

backends/arm/test/models/stable_diffusion/test_vae_AutoencoderKL.py:55:
    TestAutoencoderKL.test_AutoencoderKL_tosa_MI

backends/arm/test/models/test_conformer.py:60:
    test_conformer_tosa_MI

backends/arm/test/models/test_deit_tiny_arm.py:45:
    test_deit_tiny_tosa_MI

backends/arm/test/models/test_deit_tiny_arm.py:58:
    test_deit_tiny_tosa_BI

backends/arm/test/models/test_dl3_arm.py:44:
    test_dl3_tosa_MI

backends/arm/test/models/test_dl3_arm.py:57:
    test_dl3_tosa_BI

backends/arm/test/models/test_mobilenet_v2_arm.py:45:
    test_mv2_tosa_MI

backends/arm/test/models/test_mobilenet_v2_arm.py:60:
    test_mv2_tosa_BI[per_channel_quantization=true]
    test_mv2_tosa_BI[per_channel_quantization=false]

backends/arm/test/models/test_mobilenet_v2_arm.py:78:
    test_mv2_u55_BI[per_channel_quantization=true]
    test_mv2_u55_BI[per_channel_quantization=false]

backends/arm/test/models/test_mobilenet_v2_arm.py:96:
    test_mv2_u85_BI[per_channel_quantization=true]
    test_mv2_u85_BI[per_channel_quantization=false]

"RuntimeError: Expected to not find "torch.ops.higher_order.executorch_call_delegate" but found it"

The CI log also shows multiple TOSA partitioning rejections due to _clone_dim_order not being in BaseTOSASupportList.


unittest-arm-backend-with-no-fvp (test_pytest_models) / linux-job

30 failures

backends/arm/test/ops/test_alias_copy.py:63:
    test_alias_tosa_BI[1d_ramp]
    test_alias_tosa_BI[2d_ones]
    test_alias_tosa_BI[3d_rand]
    test_alias_tosa_BI[4d_zeros]

"RuntimeError: Expected out tensor to have dtype signed char, but got float instead"
backends/arm/test/ops/test_clone.py:58:
    test_clone_tosa_MI[ones_1D_10]
    test_clone_tosa_MI[ones_1D_50]
    test_clone_tosa_MI[rand_1D_20]
    test_clone_tosa_MI[rand_2D_10x10]
    test_clone_tosa_MI[rand_3D_5x5x5]
    test_clone_tosa_MI[rand_4D_2x3x4x5]
    test_clone_tosa_MI[large_tensor]

backends/arm/test/ops/test_clone.py:69:
    test_clone_tosa_BI[ones_1D_10]
    test_clone_tosa_BI[ones_1D_50]
    test_clone_tosa_BI[rand_1D_20]
    test_clone_tosa_BI[rand_2D_10x10]
    test_clone_tosa_BI[rand_3D_5x5x5]
    test_clone_tosa_BI[rand_4D_2x3x4x5]
    test_clone_tosa_BI[large_tensor]

backends/arm/test/passes/test_remove_clone_pass.py:43:
    test_remove_clone_tosa_BI

"RuntimeError: Expected to find "torch.ops.higher_order.executorch_call_delegate" but did not find it"
backends/arm/test/ops/test_multihead_attention.py:48:
    test_multihead_attention_tosa_MI[rand_2d]
    test_multihead_attention_tosa_MI[randn_2d]
    test_multihead_attention_tosa_MI[randn_3d]

backends/arm/test/ops/test_multihead_attention.py:65:
    test_multihead_attention_tosa_BI[rand_2d]
    test_multihead_attention_tosa_BI[randn_2d]
    test_multihead_attention_tosa_BI[randn_3d]

backends/arm/test/ops/test_multihead_attention.py:108:
    test_multihead_attention_u85_BI[rand_2d]
    test_multihead_attention_u85_BI[randn_2d]
    test_multihead_attention_u85_BI[randn_3d]

backends/arm/test/ops/test_repeat.py:74:
    test_repeat_tosa_MI[interleave_int_3_x_1]

"RuntimeError: Expected to not find "torch.ops.higher_order.executorch_call_delegate" but found it"
backends/arm/test/ops/test_repeat.py:86:
    test_repeat_tosa_BI[interleave_int_3_x_1]

"RuntimeError: Expected out tensor to have dtype signed char, but got float instead"

@keyprocedure
Contributor Author

I think we should land the kernel PR before addressing any new failures from the latest CI run in this PR, since this PR depends on the kernel existing for _clone_dim_order.

@Gasoonjia
Contributor

Sure, let's land the PRs one by one.

@keyprocedure keyprocedure changed the title [EXIR] Map clone to _clone_dim_order [EXIR] Register _clone_dim_order op and map aten.clone to it Aug 6, 2025
@keyprocedure keyprocedure changed the title [EXIR] Register _clone_dim_order op and map aten.clone to it [EXIR] Register _clone_dim_order op and map aten.clone Aug 6, 2025
Gasoonjia added a commit that referenced this pull request Aug 11, 2025
### Summary
This is PR 1 of 3 implementing a dim order aware clone op. 

Currently, clone ops are removed during export as no-ops, causing memory
layout (dim order) changes to be lost. This can cause backend failures,
incorrect outputs when ops expect specific layouts, and performance
degradation. This set of PRs introduces a dim order aware clone op,
`_clone_dim_order`, which preserves memory layout changes by explicitly
storing dim order information. This is implemented by replacing standard
clone ops with this variant during export and updating the clone removal
transform to preserve clones that change layout.

This PR adds the portable CPU kernel for the `_clone_dim_order` op,
implementing a clone variant that preserves dim order at runtime. The
portable kernel validates dtype and layout compatibility, resizes the
output tensor if needed, and performs an element-wise clone of the
tensor.

Note: A future PR will add the ATen kernel for `_clone_dim_order`.

Related PRs:
- PR 2: [#12971](#12971) -
Register `_clone_dim_order` op and map `aten.clone`
- PR 3: [#12976](#12976) -
Update RemoveCloneOpsTransform to be dim_order aware

Fixes #12645 

### Test plan
Added kernel runtime tests to verify:
- Tensors of all real dtypes are cloned correctly.
- Failure when input and output tensor shapes mismatch.
- Failure with unsupported memory formats.
- Failure when `non_blocking=true` since the portable kernel only
supports blocking data transfer.
- Dynamic shape outputs are cloned with correct values.
- Layout conversions are handled correctly: `contiguous` to `channels_last`,
`channels_last` to `contiguous`, and `channels_last` preserved.

All runtime tests pass via:
`build-ninja/kernels/test/portable_kernels_test`

---------

Co-authored-by: Gasoonjia <[email protected]>
@keyprocedure keyprocedure marked this pull request as ready for review August 11, 2025 20:33
Contributor

@Gasoonjia Gasoonjia left a comment


LGTM, thanks for your wonderful work!
Will stamp the PR after CI passes!

@@ -153,6 +205,29 @@ def test_op_dim_order_propagation_ambiguous(self) -> None:
        except AmbiguousDimOrderError:
            pass  # Expected error

    def test_op_clone_dim_order_graph_replacement(self):
Contributor

Thanks for adding this test! It shows a strong understanding of what we are doing in MemoryFormatPass!
Just want to share that this check happens automatically inside the to_edge function, as long as EdgeCompileConfig._check_ir_validity == True. See https://github.com/pytorch/executorch/blob/main/exir/verification/verifier.py#L313 for more info.

Contributor Author

Oh okay, that makes sense. I'll remove this test, since the other tests I added use memory_format_test_runner, which calls to_edge, and _check_ir_validity defaults to True.

@keyprocedure
Contributor Author

keyprocedure commented Aug 13, 2025

@Gasoonjia
Could you take a look at the changes I made to fix the ARM failures from the last CI run when you have a chance? I added _clone_dim_order to the TOSA support check registry and updated the tests. Is TOSA available publicly? I couldn’t run the tests locally because I couldn’t source the package.

@@ -14,7 +14,11 @@ class RemoveClonePass(ExportPass):
"""Remove all clones from graph_module"""

def call_operator(self, op, args, kwargs, meta):
if op != exir_ops.edge.aten.clone.default:
clone_ops = (
exir_ops.edge.aten.clone.default,
Contributor

Why do we need aten.clone here? We should be removing it when producing the edge dialect graph, no?

Contributor Author

Isn't aten.clone only mapped to _clone_dim_order when _skip_dim_order=False?

Contributor Author

But that's a good point. Can we assume _skip_dim_order is always False for ARM? If so, we can replace all instances of aten.clone with _clone_dim_order. If not, we'd need to make the clone tests check for the correct clone op based on the flag's value. I also assumed that all clones are removed in ARM regardless of whether the dim order changes, so I didn't add a dim_order check in the removal pass. (A sketch of the updated removal pass follows below.)
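For context, a minimal sketch of how the updated removal pass might treat both clone variants. This is illustrative only: the PR's actual diff is truncated in the hunk above, and the edge target name for the new op is an assumption.

```python
# Illustrative sketch only; the real pass lives in backends/arm/_passes and
# its full diff is truncated in the hunk above.
from executorch.exir.dialects._ops import ops as exir_ops
from executorch.exir.pass_base import ExportPass


class RemoveClonePassSketch(ExportPass):
    """Remove clones (both variants) from the graph module."""

    # Assumed edge targets: aten.clone and the new dim-order-aware variant.
    clone_ops = (
        exir_ops.edge.aten.clone.default,
        exir_ops.edge.dim_order_ops._clone_dim_order.default,
    )

    def call_operator(self, op, args, kwargs, meta):
        if op not in self.clone_ops:
            return super().call_operator(op, args, kwargs, meta)
        # Elide the clone by forwarding its input unchanged.
        return args[0]
```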

Contributor

> we should be removing this when producing the edge dialect graph, no?

aten.clone will be replaced only when _skip_dim_order = False. If we do not force ARM to turn on dim order ops, I would love to keep @keyprocedure's work.

Contributor

@digantdesai digantdesai Aug 20, 2025

Who else is using _skip_dim_order = True? If no one, then we can remove it. ARM isn't using it, but supports it, I guess.

@Gasoonjia - we should remove this flag altogether :|



@register_tosa_support_check
class CloneDimOrderSupport(SupportedTOSAOperatorCheck):
Contributor

Did we remove support for partitioning the aten.clone op?

Contributor Author

We didn't; aten.clone is still in the BaseTOSASupportList and supported. CloneDimOrderSupport only applies when cloning happens with _skip_dim_order=False; otherwise aten.clone would still exist.

@@ -67,6 +68,28 @@ def _to_dim_order_copy(context, node):
to(context, node)


@register_torch_op(
Contributor

Did we remove support for partitioning the aten.clone op?

Contributor Author

I didn't explicitly remove support for aten.clone. From my understanding, this change only adds support for _clone_dim_order; aten.clone is still supported and handled by the noop handler in coremltools.

Contributor

@digantdesai digantdesai left a comment

Looks good for the most part. Left some comments.

@keyprocedure
Contributor Author

keyprocedure commented Aug 16, 2025

@digantdesai it looks like most of the CI failures were due to _clone_dim_order not being fully registered in TOSA. I had missed including clone_dim_order_support in operator_support/__init__.py. I've added that registration and also registered a node visitor for _clone_dim_order. Does it make sense to lower it as an identity, since only contiguous dim orders are supported for now?

Contributor

@Gasoonjia Gasoonjia left a comment

LGTM, though there are some ARM questions; we may need to wait for @digantdesai for a final decision.

@@ -14,7 +14,11 @@ class RemoveClonePass(ExportPass):
"""Remove all clones from graph_module"""

def call_operator(self, op, args, kwargs, meta):
if op != exir_ops.edge.aten.clone.default:
clone_ops = (
exir_ops.edge.aten.clone.default,
Contributor

> we should be removing this when producing the edge dialect graph, no?

aten.clone will be replaced only when _skip_dim_order = False. If we do not force ARM to turn on dim order ops, I would love to keep @keyprocedure's work.

@@ -38,7 +38,7 @@
]
linear_residual_exir_op: list[str] = [
"executorch_exir_dialects_edge__ops_aten_gelu_default",
"executorch_exir_dialects_edge__ops_aten_clone_default",
"executorch_exir_dialects_edge__ops_dim_order_ops__clone_dim_order_default",
Contributor

Removing aten.clone here because the corresponding test is dim-order only?
@digantdesai will ARM use dim order mandatorily? If we want to continue supporting both aten.clone and the dim_order clone variant, let's keep the original test and add a new test for the dim_order case.
Same for the other tests.

Contributor Author

@keyprocedure keyprocedure Aug 19, 2025

Since ARM's pass manager doesn't seem to pass the _skip_dim_order flag, it defaults to False, so my strategy was to replace all instances of aten.clone with _clone_dim_order in the tests. Although it might make more sense to explicitly set _skip_dim_order=True in the EdgeCompileConfig calls in arm_tester and undo my changes to the ARM test files. Then I can either continue with adding _clone_dim_order support and tests (following Gasoonjia’s suggestion) in TOSA for future use or leave it out for now.

Contributor

Yeah, my point is we should make our update consistent with the ARM backend's target.

That is, if we expect dim-order-only in the ARM backend, there should be no aten.clone in our PR and we should remove aten.clone support.

However, if we need to support dim_order both on and off, we should test both cases for the clone operator, though I'm OK with splitting that into several PRs for better structure.

Contributor Author

That makes sense, thanks for breaking down the options.

Contributor

@digantdesai digantdesai Aug 20, 2025

FYI, see 135e875.

Also cc @oscarandersson8218 regarding what happens if there is no aten.clone support.

Contributor Author

@digantdesai thanks for looking into this and sharing the enable dim_order PR. It looks like the best strategy is to replace support for aten.clone with _clone_dim_order in ARM since the _skip_dim_order flag will always be False, following @Gasoonjia's suggestion.

@@ -67,6 +68,28 @@ def _to_dim_order_copy(context, node):
to(context, node)


@register_torch_op(

Collaborator

@oscarandersson8218 oscarandersson8218 left a comment

@keyprocedure Nice change and thanks for updating our backend!



@register_node_visitor
class CloneDimOrderVisitor(NodeVisitor):
Collaborator

If RemoveClonePass works as it should, there should be no need for a NodeVisitor, as all clones should be removed by the time we end up here. Also, this node_visitor is not imported in __init__.py and is therefore not registered.

Contributor Author

Thanks for pointing this out; I'll remove this NodeVisitor. Just to confirm: if a clone changes dim_order, should RemoveClonePass still be expected to remove it?

Collaborator

I think the check in clone_dim_order_support.py will make sure that we don't partition a clone that changes dim_order, so it should be OK to me.
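To make the distinction concrete, a hypothetical predicate for "does this clone preserve dim order" could compare stride-derived dim orders of the node's input and output. The names and the meta["val"] access pattern are assumptions for illustration, not the code in clone_dim_order_support.py:

```python
# Hypothetical sketch: decide whether a clone node preserves its input's dim
# order using the FX "val" metadata attached during export.
import torch
from torch.fx import Node


def _dim_order(t: torch.Tensor) -> tuple[int, ...]:
    # Dim order lists dimensions from outermost to innermost stride.
    # Ties from size-1 dims make this ambiguous (cf. AmbiguousDimOrderError above).
    return tuple(sorted(range(t.dim()), key=lambda d: t.stride(d), reverse=True))


def clone_preserves_dim_order(node: Node) -> bool:
    out_val = node.meta["val"]
    in_val = node.args[0].meta["val"]
    return _dim_order(out_val) == _dim_order(in_val)
```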

Collaborator

To expand on this, the plan for dim_order handling in the ARM backend is to ignore all delegated dim_order ops (whether or not they change dim_order) and just make sure that the input/output of the delegated sub-graph is correct, since we do our own dim_order handling in https://github.com/pytorch/executorch/blob/main/backends/arm/_passes/annotate_channels_last_dim_order_pass.py. The dtype casting of to_dim_order_copy is kept, though.

We will not delegate partitions containing single dim_order ops, to allow the use of dim_order ops in non-delegated sub-graphs, similar to this PR: #12995

Contributor Author

I really appreciate the context and resources on how dim_order ops are handled. This also helps with diagnosing the graph delegation CI failures.

@Gasoonjia
Contributor

Hi @keyprocedure, it looks like there are still several issues in CI. Do you mind taking a look?

@keyprocedure
Contributor Author

> Hi @keyprocedure, it looks like there are still several issues in CI. Do you mind taking a look?

Hi, yeah, I'm currently working on them. Appreciate the check-in :)

Labels
  • CLA Signed (This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.)
  • release notes: none (Do not include this in the release notes.)
Development

Successfully merging this pull request may close these issues.

Add dim order variant clone operator
5 participants