
Conversation

@miladm (Collaborator) commented Oct 6, 2022

This PR tests dynamic shape functionality using a simple model. It is meant to identify the gaps in dynamic shape op support; we will keep track of the bugs we address as part of this investigation in this PR.
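As a rough sketch of the kind of exercise meant here (illustrative only; the toy torch.nn.Linear module and the use of torch.nonzero as the source of a data-dependent shape are assumptions, not the exact test in this PR):

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(1, 1).to(device)

x = torch.tensor([0.0, 1.0, 0.0, 2.0, 3.0], device=device)
idx = torch.nonzero(x)        # output shape depends on the values in x
out = model(idx.float())      # a downstream op consumes the dynamic shape
xm.mark_step()                # materialize the lazily traced XLA graph
print(out.shape)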


Blockers

import torch
import torch_xla.core.xla_model as xm
import numpy

# Guard object: while `pd` is alive, CompositeImplicitAutograd ops are
# dispatched to their Python decompositions (see the discussion below).
pd = torch._C._EnablePythonDispatcher()
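As a side note, recent PyTorch builds also expose a context-manager wrapper around this guard; the import path below is an assumption and may vary by version, and it is not part of this PR's code.

from torch._dispatch.python import enable_python_dispatcher

# Sketch: scope the Python dispatcher to a block instead of holding a bare
# guard object for the rest of the script.
with enable_python_dispatcher():
    pass  # run the model / ops that need Python decompositions here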
Collaborator commented on the pd = torch._C._EnablePythonDispatcher() line:
I wonder what this line does

Contributor replied:
It enables us to run Python implementations of CompositeImplicitAutograd ops.
CompositeImplicitAutograd means an op has no explicit backward formula of its own; instead, the op is composed of ops that do have backward formulas, and differentiating through that composition is equivalent to differentiating the op explicitly.
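A minimal, self-contained sketch of that idea in plain PyTorch (my_softplus is only an illustration, not an op from this PR): the composite defines no backward of its own, yet autograd differentiates it by chaining the backwards of log1p and exp.

import torch

def my_softplus(x):
    # Composed only of primitives (exp, log1p) that already have backward formulas.
    return torch.log1p(torch.exp(x))

x = torch.randn(4, dtype=torch.double, requires_grad=True)
my_softplus(x).sum().backward()

# Matches the analytic derivative d/dx log(1 + e^x) = sigmoid(x), even though
# my_softplus never registered an explicit backward.
print(torch.allclose(x.grad, torch.sigmoid(x.detach())))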

@miladm added this to the Dynamic Shape milestone Oct 6, 2022
@ysiraichi added the DO_NOT_MERGE (Not for merging) label and removed the DO_NOT_MERGE_YET label Mar 5, 2025