Can't compile "llama.cpp/ggml-quants.c" #3880
Comments
I see you're trying to build with MPI support; that was broken around a month ago, and I've been working on a fix in #3334. It doesn't seem to be the cause of your issues here, but I figured I'd point it out so you aren't surprised when it blows up after running.
Okay. So my solution was to implement the missing declarations myself, by loading the 4 chunks of 16 unsigned 8-bit integers into a 128-bit NEON register. Here's the patch.
Hi, I am encountering this issue as well when trying to build on a 4 GB Jetson Nano. Is there a fix or patch yet?
Same here. Any thoughts would be appreciated. |
I have found this (#4123), which suggests installing gcc 8.5 from source; I haven't finished trying it yet. 🤷🏻‍♂️
compiling gcc 8.5 from source right now. will let everybody know if it works in a couple of hours. |
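For anyone attempting the same, a from-source GCC build typically follows this shape (a sketch, not a verified recipe for the Nano specifically; the prefix and job count are assumptions, and on a 4 GB board this takes hours):

```shell
# Hypothetical outline of a GCC 8.5.0 source build
wget https://ftp.gnu.org/gnu/gcc/gcc-8.5.0/gcc-8.5.0.tar.gz
tar xf gcc-8.5.0.tar.gz
cd gcc-8.5.0
./contrib/download_prerequisites      # fetch GMP/MPFR/MPC locally
mkdir build && cd build
../configure --prefix=/usr/local --enable-languages=c,c++ --disable-multilib
make -j4                              # adjust for available RAM/cores
sudo make install
```

Installing under /usr/local keeps the distro's gcc 7.5 intact; you then point CMake at the new compiler via CC/CXX, as shown later in the thread.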
After a couple of tweaks, I managed to make this work. Be sure to:
Please also note that you may have to run your CUDA-enabled binary as sudo (some programs running as a regular user were unable to detect my CUDA setup, don't ask me why).
Can you please tell me where you put the “-fPIC” compilation flag in that one file? I am getting errors when following your steps, after compiling gcc 8.5 and setting that gcc in my path. I put -fPIC as the first option for C_FLAGS in the ggml.dir/flags.make file. Terminal output:
Try editing this file after running cmake: build/CMakeFiles/ggml.dir/flags.make ... |
It worked for me!! Thank you. BTW, the second place to put -fPIC is CUDA_FLAGS. I had never seen a flags.make file before, so I was confused. Thanks again!
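For anyone else following along, the edits described here would look roughly like this (an illustrative excerpt; flags.make is regenerated by CMake, so its exact contents vary by CMake version and the trailing flags below are placeholders):

```
# build/CMakeFiles/ggml.dir/flags.make (generated by CMake; excerpt)
C_FLAGS = -fPIC <existing C flags unchanged>
CUDA_FLAGS = -fPIC <existing CUDA flags unchanged>
```

Note that because CMake regenerates this file, the edit has to be redone after re-running cmake; setting CMAKE_POSITION_INDEPENDENT_CODE=ON at configure time is the more durable way to get -fPIC, if it works in your setup.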
Hey, I'm also trying to compile on an Nvidia Jetson, but without MPI. However, if I use this method I still get the same errors. gcc is 8.5, self-compiled.
Try using gcc 8.5, as posted above |
THIS
I have done all that already, but still get the errors
But if you look at your cmake output, it is still using gcc 7.5.
On Thu, Jan 11, 2024 at 11:00 AM, Rover van der Noort wrote:
gcc --version
gcc (GCC) 8.5.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Alright, thanks, that got me further again; however, there's still an error, unfortunately.
Sorry... never seen this one :-)
You may want to inspect the source code of ggml-cuda.cu and find out why these identifiers, __hmax and __hmax2, are undefined.
Come back if you are unable.
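For context on those errors: __hmax and __hmax2 are half-precision max intrinsics that only appeared in the CUDA 11 headers, so a CUDA 10.2 toolkit (as on this Jetson) simply doesn't declare them. A hedged sketch of a workaround, the kind of fallback one could add near the top of ggml-cuda.cu (this is an illustration of the idea, not necessarily the fix upstream adopted):

```cuda
#include <cuda_fp16.h>

// Hypothetical fallbacks for toolkits older than CUDA 11.0, where
// __hmax/__hmax2 are missing; the version guard keeps newer toolkits
// using the real intrinsics.
#if defined(CUDART_VERSION) && CUDART_VERSION < 11000
static __device__ __forceinline__ half __hmax(const half a, const half b) {
    // compare via float to avoid relying on half-compare intrinsics
    return __half2float(a) > __half2float(b) ? a : b;
}
static __device__ __forceinline__ half2 __hmax2(const half2 a, const half2 b) {
    half2 r;
    r.x = __half2float(a.x) > __half2float(b.x) ? a.x : b.x;
    r.y = __half2float(a.y) > __half2float(b.y) ? a.y : b.y;
    return r;
}
#endif
```

The float round-trip is slower than the native intrinsics but keeps the code compiling on the older toolkit.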
On Thu, Jan 11, 2024 at 11:16 AM, Rover van der Noort wrote:
export CC=/usr/local/bin/gcc
export CXX=/usr/local/bin/g++
cmake .. -DLLAMA_CUBLAS=1 -DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.2/bin/nvcc
-- The C compiler identification is GNU 8.5.0
-- The CXX compiler identification is GNU 8.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/local/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.17.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/local/cuda/targets/aarch64-linux/include (found version "10.2.300")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 10.2.300
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda-10.2/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CUDA host compiler is GNU 8.5.0
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- Configuring done (7.0s)
-- Generating done (0.4s)
-- Build files have been written to: /home/rover/llama.cpp/build
cmake --build . --config Release
[ 1%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 2%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 3%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
[ 4%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o
[ 5%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
/home/rover/llama.cpp/ggml-cuda.cu(626): error: identifier "__hmax2" is undefined
/home/rover/llama.cpp/ggml-cuda.cu(5462): error: identifier "__hmax2" is undefined
/home/rover/llama.cpp/ggml-cuda.cu(5474): error: identifier "__hmax" is undefined
/home/rover/llama.cpp/ggml-cuda.cu(5481): error: identifier "__hmax" is undefined
4 errors detected in the compilation of "/tmp/tmpxft_00003545_00000000-10_ggml-cuda.compute_70.cpp1.ii".
CMakeFiles/ggml.dir/build.make:131: recipe for target 'CMakeFiles/ggml.dir/ggml-cuda.cu.o' failed
make[2]: *** [CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
CMakeFiles/Makefile2:697: recipe for target 'CMakeFiles/ggml.dir/all' failed
make[1]: *** [CMakeFiles/ggml.dir/all] Error 2
Makefile:145: recipe for target 'all' failed
make: *** [all] Error 2
This issue was closed because it has been inactive for 14 days since being marked as stale. |
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Current Behavior
While attempting to compile llama.cpp, I encountered several warnings while compiling the "llama.cpp/ggml-quants.c" file; these are treated as errors ("cc1: some warnings being treated as errors"), causing the compile to fail.
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
$ lscpu
$ uname -a
$ python3 --version
Python 3.7.9
$ cmake --version
cmake version 3.28.20231031-g9c106e3
$ g++ --version
g++ (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Failure Information (for bugs)
Please help provide information about the failure / bug.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Failure Logs