Feature request
Modify the build process for Linux to set an RPATH on the shared libraries to help locate the CUDA libraries in use by PyTorch. The goal is to make installation less painful.
Currently, the binaries carry a RUNPATH that reflects the build system's configuration, where the CUDA libraries are located at `/usr/local/cuda/lib64`.
Users may not have the CUDA Toolkit installed in this location, or at all. Sometimes things work; other times our docs and error messages instruct users to set `LD_LIBRARY_PATH` to try to work around it. On top of that, `CUDASetup` does extra work to consider additional paths from the environment, e.g. `CONDA_PREFIX`, `CUDA_PATH`, and others.
Instead, we should consider an RPATH using `$ORIGIN`, similar to what PyTorch does.
In the case of bitsandbytes, the RPATH should include:
- `$ORIGIN/../../nvidia/cuda_runtime/lib` - for `libcudart.so`
- `$ORIGIN/../../nvidia/cublas/lib` - for `libcublas.so`, `libcublasLt.so`
- `$ORIGIN/../../nvidia/cusparse/lib` - for `libcusparse.so`
Note: PyTorch wheels have been installed with the `nvidia` packages this way since ~1.13 (TODO: confirm). It might be reasonable to set that as the minimum requirement if necessary. I'm not sure yet whether the typical layout from conda is different; that needs to be determined as well.
Motivation
The motivation is to make the library more accessible and easier to install on a wider range of system configurations.
Testing across scenarios: after implementing the RPATH change, test bitsandbytes in at least:

1. a fresh machine with no CUDA Toolkit but with PyTorch installed via pip;
2. a conda environment with only conda-installed PyTorch;
3. a system with an outdated CUDA Toolkit installed, verifying that bitsandbytes still loads the intended libraries (from pip or conda, not the old system ones).

Also test that if the `nvidia` folders are missing (simulating PyTorch < 1.13, or a user who didn't get the dependencies), bitsandbytes fails gracefully with an informative error (perhaps instructing the user to upgrade PyTorch or install CUDA). Ideally, the Python code can detect this scenario and fall back to the old behavior: e.g., if `libbitsandbytes.so` fails to load, try loading via `ctypes.CDLL` with a manual search in `CUDA_PATH`, or raise a clear message. This would cover corner cases without leaving the user confused.
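The fallback described above could be sketched roughly like this. The function name and search order here are hypothetical, not the actual bitsandbytes API; it only illustrates "try the RPATH-based load first, then a manual search, then a clear error":

```python
import ctypes
import os
from pathlib import Path


def _candidate_dirs():
    # Hypothetical fallback search order, loosely mirroring the old
    # CUDASetup logic: env-provided locations are only consulted if the
    # default (RPATH-based) load already failed.
    dirs = []
    for var in ("CUDA_PATH", "CONDA_PREFIX"):
        root = os.environ.get(var)
        if root:
            dirs.append(Path(root) / "lib64")
            dirs.append(Path(root) / "lib")
    return [d for d in dirs if d.is_dir()]


def load_native_library(soname: str) -> ctypes.CDLL:
    """Load soname via the default search (RPATH first), then fall back to a
    manual search of env-derived directories; raise an informative error."""
    try:
        return ctypes.CDLL(soname)
    except OSError:
        pass
    for d in _candidate_dirs():
        path = d / soname
        if path.exists():
            return ctypes.CDLL(str(path))
    raise RuntimeError(
        f"Could not load {soname}. If your PyTorch is older than 1.13 (no "
        "bundled nvidia/* wheel libraries), upgrade PyTorch, or install the "
        "CUDA Toolkit and point CUDA_PATH at it."
    )
```

A call like `load_native_library("libbitsandbytes_cuda121.so")` (file name hypothetical) would then either succeed via the `$ORIGIN` RPATH, succeed via `CUDA_PATH`/`CONDA_PREFIX`, or fail with a message the user can act on.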
Possibly related issues:
#1073 and several others on this repo
pytorch/pytorch#101314
unslothai/unsloth#200
unslothai/unsloth#221
Your contribution
I plan to submit a PR for this. There's some potential overlap with #1041 as well.