When I ran the language model example, `generate.py` blew up GPU memory as it generated sentences (growing from ~500 MB to ~4 GB). In the end I got an out-of-memory error: `RuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.5_1479441063232/work/torch/lib/THC/generic/THCStorage.cu:65`.
If you're doing inference only, you might need to set `volatile=True`. Otherwise you often end up with reference cycles that aren't collected immediately. (This is something we should fix in PyTorch.)
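The "reference cycles which aren't collected immediately" point can be illustrated in plain CPython, independent of PyTorch: objects that refer to each other in a cycle are never freed by reference counting alone, so their memory lingers until the cyclic garbage collector runs. A minimal sketch (the `Node` class is a hypothetical stand-in for any object holding tensors):

```python
import gc

class Node:
    """Hypothetical object that could be holding a large tensor."""
    def __init__(self):
        self.ref = None

# Simulate the window of time before the cyclic collector kicks in.
gc.disable()
for _ in range(100):
    a, b = Node(), Node()
    a.ref, b.ref = b, a  # cycle: refcounts never drop to zero on their own
del a, b

# An explicit collection is what finally finds and frees the cycles.
collected = gc.collect()
gc.enable()
print(collected)  # > 0: the cyclic objects lingered until collection ran
```

If each such object pinned GPU memory, that memory would stay allocated for the same window, which matches the steadily growing usage described above; marking inference-only Variables as volatile in that era of PyTorch avoided building the graph references in the first place.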
cc: @adamlerer