Conversation

@jahfer (Contributor) commented Apr 29, 2024

👋🏼 About a month ago, llama.cpp updated their CUDA build command to LLAMA_CUDA=1 make rather than the prior LLAMA_CUBLAS=1 make (ggml-org/llama.cpp#6299).

This PR adds --with-cuda as a build option in extconf.rb, while retaining the previous --with-cublas flag to avoid breaking existing users.
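The flag handling described above can be sketched as follows. This is a minimal, hypothetical sketch, not the gem's actual extconf.rb: it assumes both the new --with-cuda flag and the legacy --with-cublas flag now map to the LLAMA_CUDA=1 make variable, since LLAMA_CUBLAS no longer exists upstream. The method name is illustrative.

```ruby
# Hypothetical sketch of the flag mapping (names are illustrative, not
# the actual extconf.rb code). Returns the make variable to pass when
# building llama.cpp, or nil if CUDA was not requested.
def cuda_make_flag(with_cuda: false, with_cublas: false)
  if with_cuda
    'LLAMA_CUDA=1' # current llama.cpp flag (ggml-org/llama.cpp#6299)
  elsif with_cublas
    # Kept for backward compatibility; upstream removed LLAMA_CUBLAS,
    # so the deprecated flag is assumed to map to the new variable.
    warn '--with-cublas is deprecated; use --with-cuda instead'
    'LLAMA_CUDA=1'
  end
end
```

In a real extconf.rb these options would typically come from mkmf's `with_config` helper (`gem install ... -- --with-cuda`), and the resulting variable would be appended to the `make` invocation.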

@yoshoku (Owner) commented May 2, 2024

@jahfer
Thank you for your contribution. This project adopts Conventional Commits; please amend the commit message to something like "chore: update flags to stay consistent with llama.cpp".
https://github.com/conventional-changelog/commitlint/tree/master/%40commitlint/config-conventional

@jahfer jahfer force-pushed the llama_cuda-flag branch from e439dca to 478b500 Compare May 3, 2024 11:32
@jahfer (Contributor, Author) commented May 3, 2024

Oops, thanks for the details. I've pushed the amended commit.

@yoshoku yoshoku merged commit 1180f45 into yoshoku:main May 3, 2024