Commit 1b3ad8a

fix grammar
Signed-off-by: Chris Abraham <[email protected]>
1 parent 4e16cb1 commit 1b3ad8a

1 file changed: +5 −5 lines changed


_posts/2024-12-10-torchcodec.md (+5 −5)
@@ -9,10 +9,10 @@ Highlights of torchcodec include:
-* An intuitive decoding API that treats a video file as a Python sequence of frames. We support both index-based and presentation-time based frame retrieval.
+* An intuitive decoding API that treats a video file as a Python sequence of frames. We support both index-based and presentation-time-based frame retrieval.
 * An emphasis on accuracy: we ensure you get the frames you requested, even if your video has variable frame rates.
 * A rich sampling API that makes it easy and efficient to retrieve batches of frames.
-* Best in class CPU decoding performance.
+* Best-in-class CPU decoding performance.
 * CUDA accelerated decoding that enables high throughput when decoding many videos at once.
 * Support for all codecs available in your installed version of FFmpeg.
 * Simple binary installs for Linux and Mac.
@@ -90,7 +90,7 @@ Using the following three videos:
 2. Same as above, except the video is 120 seconds long.
 3. A [promotional video from NASA](https://download.pytorch.org/torchaudio/tutorial-assets/stream-api/NASAs_Most_Scientifically_Complex_Space_Observatory_Requires_Precision-MP4_small.mp4) that is 206 seconds long, 29.7 frames per second and 960x540.

-The [experimental script](https://github.com/pytorch/torchcodec/blob/b0de66677bac322e628f04ec90ddeeb0304c6abb/benchmarks/decoders/generate_readme_data.py) is in our repo. Our experiments run on a Linux system with an Intel processor that has 22 available cores and an Nvidia GPU. For CPU decoding, all libraries were instructed to automatically determine the best number of threads to use.
+The [experimental script](https://github.com/pytorch/torchcodec/blob/b0de66677bac322e628f04ec90ddeeb0304c6abb/benchmarks/decoders/generate_readme_data.py) is in our repo. Our experiments run on a Linux system with an Intel processor that has 22 available cores and an NVIDIA GPU. For CPU decoding, all libraries were instructed to automatically determine the best number of threads to use.

 ![Benchmark chart](/assets/images/benchmark_readme_chart.png){:style="width:100%"}
@@ -99,8 +99,8 @@ From our experiments, we draw several conclusions:
-* Torchcodec is consistently the best-performing library for the primary use-case we designed it for: decoding many videos at once as a part of a training data loading pipeline. In particular, high-resolution videos see great gains with CUDA where decoding and transforms both happen on the GPU.
-* Torchcodec is competitive on the CPU with seek-heavy use-cases such as random and uniform sampling. Currently, torchcodec’s performance is better with shorter videos that have a smaller file size. This performance is due to torchcodec’s emphasis on seek-accuracy, which involves an initial linear scan.
+* Torchcodec is consistently the best-performing library for the primary use case we designed it for: decoding many videos at once as a part of a training data loading pipeline. In particular, high-resolution videos see great gains with CUDA where decoding and transforms both happen on the GPU.
+* Torchcodec is competitive on the CPU with seek-heavy use cases such as random and uniform sampling. Currently, torchcodec’s performance is better with shorter videos that have a smaller file size. This performance is due to torchcodec’s emphasis on seek-accuracy, which involves an initial linear scan.
 * Torchcodec is not as competitive when there is no seeking; that is, opening a video file and decoding from the beginning. This is again due to our emphasis on seek-accuracy and the initial linear scan.

 Implementing an [approximate seeking mode](https://github.com/pytorch/torchcodec/issues/427) in torchcodec should resolve these performance gaps, and it’s our highest priority feature for video decoding.
