This repository was archived by the owner on Aug 1, 2025. It is now read-only.

Force RNN modules to be inlined #975

Merged: 1 commit into main on Aug 23, 2022

Conversation

ezyang (Contributor) commented on Aug 23, 2022

RNN modules call Tensor.set_ internally with a Storage, which is a no-go for AOTAutograd.
Inline into them so that we can graph break at that call.
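
For context, here is a minimal sketch of the pattern at issue (illustrative only, not code from this PR): Tensor.set_ re-points a tensor at a different Storage in place, mutating its identity and metadata, which AOTAutograd's functionalization cannot model. cuDNN RNNs do this kind of rebinding when packing their weights into one contiguous buffer.

```python
import torch

# Illustrative sketch (assumption, not code from this PR): Tensor.set_
# re-points a tensor at a different Storage, mutating its identity and
# metadata in place. AOTAutograd's functionalization cannot model this.
src = torch.randn(8)
t = torch.empty(0)
t.set_(src.storage(), 0, (8,))   # t now aliases src's storage directly
assert t.data_ptr() == src.data_ptr()
```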

Fixes pytorch/functorch#586
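
Roughly, the fix means Dynamo should stop tracing RNN modules as single opaque module calls and instead inline into their forward(), so the internal Tensor.set_ is reached by the inliner and can trigger a graph break there. A hypothetical sketch of that kind of check (the helper name and its placement are assumptions, not the actual diff):

```python
import torch

# Hypothetical sketch (names assumed; not the actual diff): force Dynamo
# to inline into RNN modules rather than trace them as one opaque call.
def must_inline(mod: torch.nn.Module) -> bool:
    # torch.nn.RNNBase is the shared base class of RNN, LSTM, and GRU
    return isinstance(mod, torch.nn.RNNBase)
```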

Test strategy:

```
./benchmarks/torchbench.py --inductor -dcuda --no-skip -k tts_angular
```

Note that inductor is still failing after this PR, but with a different error. My devfair is too wimpy for inductor lol

Signed-off-by: Edward Z. Yang <[email protected]>

ezyang merged commit ab81771 into main on Aug 23, 2022

Successfully merging this pull request may close this issue:

AOT Autograd - LSTM - grads not generated (model tts_angular)