torchvision.io.read_video() giving memory error #1446
@anandijain you are trying to load a big video into memory, and it doesn't fit in your CPU memory. You can try using `VideoClips` instead (vision/torchvision/datasets/video_utils.py, lines 45 to 69 in ed5b2dc):

```python
from torchvision.datasets.video_utils import VideoClips

video_clips = VideoClips([video_path], clip_length_in_frames=32, frames_between_clips=32)
```

This is what is used internally in the video datasets to return a `Dataset` compatible with `DataLoader`; see vision/torchvision/datasets/kinetics.py, lines 50 to 78 in ed5b2dc (don't bother about the arguments starting with a `_`). So this should be fairly easy to do once you use `VideoClips`.
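A minimal sketch of what such a wrapper could look like, following the pattern in kinetics.py; the `VideoClipDataset` name and the sample path are illustrative, not part of torchvision:

```python
from torch.utils.data import Dataset, DataLoader
from torchvision.datasets.video_utils import VideoClips


class VideoClipDataset(Dataset):
    """Hypothetical wrapper: one dataset item per fixed-length clip."""

    def __init__(self, video_paths, clip_length_in_frames=32, frames_between_clips=32):
        self.video_clips = VideoClips(
            video_paths,
            clip_length_in_frames=clip_length_in_frames,
            frames_between_clips=frames_between_clips,
        )

    def __len__(self):
        return self.video_clips.num_clips()

    def __getitem__(self, idx):
        # get_clip decodes only the frames belonging to clip `idx`, so memory
        # use is bounded by the clip length, not by the whole video.
        video, audio, info, video_idx = self.video_clips.get_clip(idx)
        return video


dataset = VideoClipDataset(["video.mp4"])   # placeholder path
loader = DataLoader(dataset, batch_size=2)  # yields batches of (B, T, H, W, C) clips
```

With `frames_between_clips` equal to `clip_length_in_frames`, the clips tile the video without overlap, which matches iterating through the file chunk by chunk.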
I have a video file that is 480 x 640, 20 fps, and 20400 frames.
I get a memory error when trying to read the video with no start/end points.

I was wondering if there is a way of using `io.read_video()` that returns a `DataLoader`, or that can work without loading everything into tensors at once. Ideally, I could load in as much as possible each time I call `read_video()`. It seems a little cumbersome to have to find a span of the video that doesn't throw a memory error, and then iterate over that to get each section of video that I want.
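For concreteness, a sketch of that chunked pattern, assuming second-based offsets (`pts_unit="sec"` needs a recent torchvision; the path and chunk size are placeholders):

```python
import torchvision

path = "big_video.mp4"  # placeholder
chunk_seconds = 5.0     # small enough to fit in memory
duration = 20400 / 20   # 1020 s, from the frame count and fps above

start = 0.0
while start < duration:
    # Decode only the frames between start and start + chunk_seconds.
    vframes, _, _ = torchvision.io.read_video(
        path,
        start_pts=start,
        end_pts=start + chunk_seconds,
        pts_unit="sec",
    )
    # ... process vframes (shape (T, H, W, C)) ...
    start += chunk_seconds
```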
Code and error:
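A minimal sketch of the failing call, assuming the whole file is decoded with the default (open-ended) endpoints; the path is a placeholder:

```python
import torchvision

# With no start/end points, every frame is decoded into a single uint8 tensor:
# 20400 frames x 480 x 640 x 3 bytes ≈ 18.8 GB, which exhausts CPU RAM.
vframes, aframes, info = torchvision.io.read_video("big_video.mp4")
print(vframes.shape)  # would be (20400, 480, 640, 3) if it fit in memory
```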
Does anyone else have a solution to this?
Thanks!