@YifanShenSZ YifanShenSZ commented Apr 5, 2024

Add support for `torch.select_scatter` and `torch.slice_scatter`, and polish the existing support for `torch.index_put_`.

Polish the existing support for `torch.copy_` and `torch.view` along the way.

Testing:

  1. GitLab CI
  2. ✅ Locally verified with ExecuTorch tests + torch op unit tests with frontend=TorchFrontend.EXIR
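For context, the two newly supported ops can be illustrated with a short sketch. This example is not from the PR itself; it just exercises the public PyTorch signatures `torch.select_scatter(input, src, dim, index)` and `torch.slice_scatter(input, src, dim, start, end)`:

```python
import torch

base = torch.zeros(2, 3)

# select_scatter: out-of-place write of `row` into base[1, :]
row = torch.tensor([1.0, 2.0, 3.0])
out_select = torch.select_scatter(base, row, dim=0, index=1)

# slice_scatter: out-of-place write of `cols` into base[:, 0:2]
cols = torch.ones(2, 2)
out_slice = torch.slice_scatter(base, cols, dim=1, start=0, end=2)
```

Both ops return a new tensor rather than mutating `base`, which is why they appear in functionalized EXIR graphs in place of in-place indexing assignments.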

yifan_shen3 added 2 commits April 5, 2024 11:15
…improve index_put; polish copy and reshape ops along the way
TobyRoseman previously approved these changes Apr 8, 2024
yifan_shen3 added 2 commits April 8, 2024 17:28
…rror messages and docs, mostly by adding examples
jakesabathia2 previously approved these changes Apr 9, 2024
@jakesabathia2 left a comment

Great PR. I have two remaining comments; otherwise it LGTM.

@YifanShenSZ YifanShenSZ merged commit 1317cdb into apple:main Apr 10, 2024
facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request Apr 11, 2024
Summary:
Skipping the `aten.index_put` op in Core ML delegation was a workaround, at the cost of partitioning the Llama model into 13 pieces.

For better performance, we prefer to delegate the whole model to Core ML. Since Core ML has added the [necessary support](apple/coremltools#2190), it is time to revert this workaround.

Pull Request resolved: #2975

Reviewed By: kirklandsign

Differential Revision: D56002979

Pulled By: cccclai

fbshipit-source-id: e7a7c8c43706cb57eba3e6f720b3d713bec5065b
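The op being un-skipped here, `aten.index_put`, is the functional form of `Tensor.index_put_`. A minimal sketch of its semantics (not taken from either repository, just the public PyTorch API):

```python
import torch

# index_put_: scatter `values` into `x` at the positions given by
# the tuple of index tensors, i.e. x[idx0[i], idx1[i]] = values[i]
x = torch.zeros(3, 3)
indices = (torch.tensor([0, 2]), torch.tensor([1, 1]))
x.index_put_(indices, torch.tensor([5.0, 7.0]))
# x[0, 1] == 5.0 and x[2, 1] == 7.0
```

Advanced-indexing assignments such as `x[mask] = v` lower to this op in exported graphs, which is why a model like Llama hits it frequently.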
pytorchbot pushed a commit to pytorch/executorch that referenced this pull request Apr 19, 2024
(cherry picked from commit 7d4bafc)
guangy10 pushed a commit to pytorch/executorch that referenced this pull request Apr 19, 2024
Co-authored-by: yifan_shen3 <[email protected]>