`torchvision` is currently building (`vision/setup.py`, line 321 in cac4e22; `vision/packaging/torchvision/meta.yaml`, line 13 in cac4e22) and testing (`vision/.circleci/unittest/linux/scripts/environment.yml`, line 10 in cac4e22; `vision/.circleci/unittest/windows/scripts/environment.yml`, line 10 in cac4e22) against `libjpeg`.

`Pillow` has been building against `libjpeg-turbo` on Windows for some time now, and since `Pillow=9` (Jan 2022) on all platforms. This has two downsides for us:

- We use `Pillow` as the reference for our own decoding and encoding ops. See "`test_encode|write_jpeg_reference` tests" #5910.
- As the name implies, `libjpeg-turbo` is a faster implementation of the JPEG standard. Thus, our I/O ops are simply slower than using `Pillow`, which hinders adoption.

Recently, @NicolasHug led a push to also use `libjpeg-turbo`, but hit a few blockers:

- We install `libjpeg` from the `defaults` channel from `conda`. Unfortunately, on `defaults`, `libjpeg-turbo` is only available for Windows and macOS.
- Adding `conda-forge` to the channels for Linux leads to extremely long environment solve times (10+ minutes), which ultimately time out the CI. In general, this change should be possible if `conda-forge` has a lower priority than `defaults`.
- Depending on the experimental `libmamba` solver does speed up the solve enough for the CI not to time out (it is still a little slower than before). Unfortunately, our CI setup does not work properly with it, since a CUDA 11.6 workflow still pulls a PyTorch version built against CUDA 11.3.

From here I currently see four options:

1. Only build and test the Windows and macOS binaries against `libjpeg-turbo`. This would mean that arguably most of our users won't see that speed-up.
2. Find a way to stop the CI from timing out when using `conda-forge` as an extra channel. This can probably be done through the configuration or by emitting more output during the solve.
3. Fix our CI setup to work with the `libmamba` solver.
4. Package `libjpeg-turbo` for Linux ourselves. We already use the `pytorch` and `pytorch-nightly` channels; if it were available there, we wouldn't need to pull it from `conda-forge`. In "Use `libjpeg-turbo` in CI instead of `libjpeg`" #5941 (comment), @malfet only talks about testing against it, but maybe we can also build against it.

cc @seemethere
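To illustrate the channel-priority idea mentioned in the blockers, here is a sketch of what a `.condarc` for the Linux CI jobs could look like. This is an assumption, not a verified fix: the channel list is illustrative, and whether strict priority actually avoids the long solve times would need to be measured.

```yaml
# Hypothetical .condarc sketch for the Linux CI jobs.
# Under strict channel priority, any package that exists on a
# higher-priority channel is always taken from there; conda-forge is
# only consulted for packages missing from the channels above it,
# e.g. libjpeg-turbo on Linux.
channel_priority: strict
channels:
  - pytorch-nightly
  - defaults
  - conda-forge
```

With this ordering, adding `conda-forge` should not silently replace packages that `defaults` already provides, which is the precondition stated above for the change to be acceptable.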
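Since the comparison against `Pillow` only makes sense once we know which JPEG library the installed `Pillow` build uses, a small runtime check can make benchmarks honest. This is a sketch using `PIL.features.check_feature`, which reports the `libjpeg_turbo` build flag on Pillow >= 9.0; the helper function name is ours, not part of any library.

```python
# Sketch: report whether the installed Pillow build uses libjpeg-turbo.
from PIL import features


def pillow_uses_libjpeg_turbo():
    """Return True/False on Pillow >= 9.0, or None if the flag is unknown."""
    try:
        return bool(features.check_feature("libjpeg_turbo"))
    except ValueError:  # older Pillow without the "libjpeg_turbo" feature flag
        return None


print(pillow_uses_libjpeg_turbo())
```

A benchmark that times our decoding ops against `Pillow` could log this value so that numbers from libjpeg-turbo and plain libjpeg builds are never compared directly.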