
CUDA: compress-mode size #12029


Merged
merged 1 commit into ggml-org:master on Mar 1, 2025

Conversation

@Green-Sky (Collaborator) commented Feb 22, 2025

This patch sets the CUDA compression mode to "size" for CUDA >= 12.8.

CUDA 12.8 added the option to specify stronger compression for binaries.

I ran some tests in CI with the new CUDA 12.8 Ubuntu Docker image:

89-real arch

In this scenario it appears that nvcc is not compressing by default at all?

mode             ggml-cuda.so
none             64M
speed (default)  64M
balance          64M
size             18M

60;61;70;75;80 arches

mode             ggml-cuda.so
none             994M
speed (default)  448M
balance          368M
size             127M

I did not test the runtime load overhead this should incur, but for most ggml-cuda use cases the processes are long(er) lived, so the trade-off seems reasonable to me.
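
The change itself is small. Here is a minimal sketch of the version gate, assuming find_package(CUDAToolkit) has already run (the flag line matches the merged diff quoted later in this thread; the surrounding structure is an assumption):

# sketch: only pass the flag when the toolkit is new enough to understand it
if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "12.8")
    # GGML_CUDA_COMPRESSION_MODE is one of: none, speed, balance, size
    list(APPEND CUDA_FLAGS -compress-mode=${GGML_CUDA_COMPRESSION_MODE})
endif()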

@github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels Feb 22, 2025
@Green-Sky Green-Sky marked this pull request as ready for review February 24, 2025 12:21
@slaren (Member) commented Feb 26, 2025

> 994M

That's quite a lot, I didn't realize the build with all supported archs had gotten so bad. The Windows releases seem to be around 500M, so not quite as bad there, but still pretty bad.

I am not exactly sure what the downsides of enabling this option may be, so it would be preferable if it were optional. Enabling it by default should be ok, though.

@Green-Sky (Collaborator, Author) replied:

> 994M
>
> That's quite a lot, I didn't realize the build with all supported archs had gotten so bad. The Windows releases seem to be around 500M, so not quite as bad there, but still pretty bad.

And so it is for Linux. Even before 12.8 nvcc was compressing by default, either with something equivalent to "speed" or with the very same code; they may simply have decided to expose more control over the compression algorithm. Before 12.8 the only option that existed was one to disable compression, which I don't think anyone uses.

> I am not exactly sure what the downsides of enabling this option may be, so it would be preferable if it were optional. Enabling it by default should be ok, though.

They say it costs startup time, which I think would be ok for almost all ML use cases that use CUDA anyway. I just hope it is not paid on every kernel launch. I don't have a setup right now where I can test that myself, so if anyone can help here, that would be nice.

Ok, I will make it a ggml option and enable it by default. Or should I make the option a string and just pass that through? (none, speed, balance, size)

@slaren (Member) commented Feb 27, 2025

> Or should I make the option a string and just pass that through?

Yes, that sounds good to me.
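
For illustration, such a string option could look like this in CMake (a sketch: the variable name matches the merged diff below, while the default value, description text, and use of set_property are assumptions):

# sketch: expose the compression mode as a string-valued cache option
set(GGML_CUDA_COMPRESSION_MODE "size" CACHE STRING
    "ggml: CUDA binary compression mode (none, speed, balance, size); needs CUDA 12.8+")
# let cmake-gui/ccmake present the valid values as a drop-down
set_property(CACHE GGML_CUDA_COMPRESSION_MODE PROPERTY STRINGS "none;speed;balance;size")

A user would then select a mode at configure time, e.g. with -DGGML_CUDA_COMPRESSION_MODE=balance.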

@Green-Sky Green-Sky merged commit 80c41dd ("cuda 12.8 added the option to specify stronger compression for binaries") into ggml-org:master Mar 1, 2025
47 checks passed
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025
cuda 12.8 added the option to specify stronger compression for binaries, so we now default to "size".
# - speed (nvcc's default)
# - balance
# - size
list(APPEND CUDA_FLAGS -compress-mode=${GGML_CUDA_COMPRESSION_MODE})
Member:

Should this be --compress-mode instead? #12325

@Green-Sky (Collaborator, Author):

According to the CUDA docs both forms are accepted; I chose the single dash because the other options next to it are also single-dash.
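
Concretely, per the nvcc documentation these two spellings should be equivalent (a sketch restating the merged line):

# single-dash form, as merged (consistent with the neighboring flags):
list(APPEND CUDA_FLAGS -compress-mode=${GGML_CUDA_COMPRESSION_MODE})
# equivalent double-dash form:
# list(APPEND CUDA_FLAGS --compress-mode=${GGML_CUDA_COMPRESSION_MODE})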

arthw pushed a commit to arthw/llama.cpp that referenced this pull request Mar 19, 2025
cuda 12.8 added the option to specify stronger compression for binaries, so we now default to "size".
@ProteanCode:

If somebody has this error: the combination of CUDA 12.8 and GCC 12 solved my issue.

This Ubuntu shell script helped me set things up:

# use GCC 12 as the host compiler and point everything at CUDA 12.8
export CC=/usr/bin/gcc-12
export CUDA_HOME=/usr/local/cuda-12.8
export PATH=$CUDA_HOME/bin:$PATH
export CUDACXX=$CUDA_HOME/bin/nvcc
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

# build prerequisites
sudo apt-get update
sudo apt-get install -y build-essential cmake

# remove any previously installed wheels before rebuilding
pip uninstall -y llama-cpp-python llama-cpp-python-cuda

# rebuild llama-cpp-python from source with CUDA enabled (compute capability 7.5)
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=75" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

The gcc-12 -v output:

Using built-in specs.
COLLECT_GCC=gcc-12
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/12/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:amdgcn-amdhsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 12.3.0-1ubuntu1~22.04' --with-bugurl=file:///usr/share/doc/gcc-12/README.Bugs --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-12 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --enable-libphobos-checking=release --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --enable-cet --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none=/build/gcc-12-ALHxjy/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-ALHxjy/gcc-12-12.3.0/debian/tmp-gcn/usr --enable-offload-defaulted --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 12.3.0 (Ubuntu 12.3.0-1ubuntu1~22.04) 

The /usr/local/cuda-12.8/bin/nvcc --version output:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Fri_Feb_21_20:23:50_PST_2025
Cuda compilation tools, release 12.8, V12.8.93
Build cuda_12.8.r12.8/compiler.35583870_0

One thing to note: by default my nvcc pointed to version 11, so just typing nvcc --version without the full path to 12.8 gives:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

That was likely the source of the error: I had tried to compile various versions with GCC 11, but for CUDA 12.8 I needed GCC 12.
