cuBLAS, GPU compile instructions not working #213
Comments
@Free-Radical can you also add the
Any update?
Hello, just following up on this issue in case others were wondering about the same thing. I'm using a virtual environment through Anaconda3. When running the compile instructions from #182, CMake's find_package() call does not look in the location where my CUDAToolkit is actually installed. I got to this realization thanks to abetlen's hint above that llama.cpp might be silently skipping CUDA. The relevant code is around https://github.com/ggerganov/llama.cpp/blob/7552ac586380f202b75b18aa216ecfefbd438d94/CMakeLists.txt#L180 . To solve the issue, I pointed an environment variable called CUDAToolkit_ROOT at that path, which seems to have done the trick. See CMake's documentation at https://cmake.org/cmake/help/latest/command/find_package.html?highlight=_ROOT%20environment%20variable and search for "root environment variable" for the source of my inspiration.
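For reference, a minimal sketch of the two steps that worked for me (the toolkit path below is just a placeholder, so substitute wherever your CUDA toolkit actually lives, and the pip flags are the same ones from #182):

```bash
# Point CMake's find_package(CUDAToolkit) at your install prefix (example path).
export CUDAToolkit_ROOT=/path/to/your/cuda-toolkit

# Rebuild llama-cpp-python with cuBLAS enabled, forcing a fresh CMake build.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python
```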
UPDATE: you might also want to make sure you install the right CUDAToolkit. I noticed that running the commands from https://anaconda.org/nvidia/cuda-toolkit vs. the ones from https://anaconda.org/anaconda/cudatoolkit put my header files in different places. The former put them in the location anticipated by the setup script, while the latter required me to set the CUDAToolkit_ROOT env variable. Not sure if anyone else can confirm this finding.
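If it helps, these are the two conda commands I was comparing (in my case only the nvidia-channel package put the headers where the build expected them; your mileage may vary):

```bash
# CUDA toolkit from the nvidia channel (headers ended up where the build script looked):
conda install -c nvidia cuda-toolkit

# CUDA toolkit from the anaconda channel (needed CUDAToolkit_ROOT set manually for me):
conda install -c anaconda cudatoolkit
```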
Did not work,
BLAS = 0
Originally posted by @Free-Radical in #113 (comment)