
[video] exposing _backend in datasets #1387


Closed
wants to merge 1 commit into from

Conversation

@bjuncek (Contributor) commented Sep 30, 2019

A simple PR that exposes the new experimental backend for training purposes. Using it allows for an overall speedup in multi-GPU setups.

Specifically, I found the new backend to be:

  1. on average 0.2s faster per iteration for an equivalent model
  2. on average 7 minutes (stdev: 3 minutes) faster per training epoch

The tests were done on a 64-GPU setup across 8 machines, for 4 different models.
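
To make the usage concrete, here is a rough sketch of how the exposed backend switch might be used when building a video dataset. The `_backend` keyword and the exact Kinetics400 signature shown here are assumptions based on this PR's title, not a confirmed API.

```python
# Sketch only: assumes this PR adds a `_backend` keyword to the video
# datasets (the exact signature is not confirmed here).
import torch
from torchvision.datasets import Kinetics400

dataset = Kinetics400(
    root="/path/to/kinetics/train",   # hypothetical dataset location
    frames_per_clip=16,
    step_between_clips=8,
    _backend="video_reader",          # hypothetical keyword exposed by this PR
)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=8,
    num_workers=16,  # decoding throughput depends heavily on num_workers
)
```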

@fmassa (Member) commented Sep 30, 2019

Thanks for the PR @bjuncek !

Can you mention how much relative speed-up we got from changing backends? Like, training went from 1.2s / iter to 1.0s / iter?

Also, I'd prefer to avoid having to pass a _backend to each function that uses a video_reader; my preference would be to just let VideoClips use the backend set via torchvision.set_video_backend internally.

Thoughts?
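
For illustration, a minimal sketch of that globally-set-backend approach, assuming the existing torchvision.set_video_backend / get_video_backend API; the VideoClips internals shown are hypothetical, not the actual implementation.

```python
import torchvision

# Select the experimental backend once, globally.
torchvision.set_video_backend("video_reader")

# Hypothetically, VideoClips would then query the backend itself instead
# of receiving a _backend argument:
backend = torchvision.get_video_backend()
if backend == "video_reader":
    ...  # decode clips via the new video_reader path
else:
    ...  # decode clips via the pyav path
```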

@bjuncek (Contributor, Author) commented Sep 30, 2019

> Can you mention how much relative speed-up we got from changing backends? Like, training went from 1.2s / iter to 1.0s / iter?

For the smallest difference, training on average went from 1.17s/iter to 0.94s/iter; for the largest, it went from 1.20s/iter to 0.88s/iter.

The differences depend on (a) the GPU/CPU ratio, (b) the model, and (c) the num_workers hyperparameter.


> my preference would be to just let VideoClips use the backend set via torchvision.set_video_backend internally

I agree, but wouldn't that cause a rather big overhaul?

@fmassa (Member) commented Oct 3, 2019

Subsumed by #1376

> I agree, but wouldn't that cause a rather big overhaul?

I don't think this is going to require that many changes, and IMO it seems better. Let me know if you disagree.

@fmassa closed this Oct 3, 2019