RuntimeError: Trying to backward through the graph a second time #8549
Asked by Keiku in code help: CV · Answered by ethanwharris
I'm migrating my repository to pytorch-lightning and I get the following error: RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time. The CNNLSTM model seems to be the problem. What should I do? [My repository]
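For context, the error itself is easy to reproduce outside the repo. This is a minimal sketch (hypothetical tensors, not the author's CNNLSTM): after the first backward pass, autograd frees the graph's intermediate buffers, so a second backward on the same graph raises exactly this RuntimeError.

```python
import torch

# Build a tiny graph and backward through it once.
x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()
y.backward()  # first backward: the graph's buffers are freed here

# A second backward on the same graph fails, because the saved
# intermediate results needed for the gradient are already gone.
try:
    y.backward()
except RuntimeError as e:
    print("RuntimeError:", e)
```

Passing `retain_graph=True` to the first `backward()` suppresses the error, but in training loops that is usually a workaround for a deeper bug rather than the fix.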
Answered by ethanwharris on Jul 26, 2021
Replies: 1 comment, 12 replies
Hi @Keiku This error happens if you try to call backward on something twice in a row without calling optimizer.step in between. Are you able to share your LightningModule code? It looks like the code in your repo just uses vanilla pytorch. Thanks 😃
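One common way this double-backward happens in recurrent models is a hidden state carried across batches without being detached, so the second batch's backward reaches back into the first batch's already-freed graph. A minimal sketch of that pattern and its fix (a toy `nn.LSTM`, an assumption about the cause, since the actual CNNLSTM code is not shown here):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=8)
opt = torch.optim.SGD(lstm.parameters(), lr=0.1)

state = None  # (h, c) hidden state carried across batches
for step in range(2):
    x = torch.randn(5, 1, 4)  # (seq_len, batch, features)
    out, state = lstm(x, state)
    loss = out.sum()

    opt.zero_grad()
    loss.backward()
    opt.step()

    # Without this detach, the second iteration's backward would try to
    # traverse the first iteration's freed graph and raise the
    # "backward through the graph a second time" RuntimeError.
    state = tuple(s.detach() for s in state)
```

Detaching truncates backpropagation at the batch boundary while still letting the numeric hidden state flow forward.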
Answer selected by Keiku