Conversation

@chiranko (Contributor) commented Jan 18, 2024

Hi @ggerganov,

This PR introduces support for CodeShell, a multi-language code LLM with excellent performance for its scale on authoritative code-evaluation benchmarks. CodeShell aims to improve the experience of code assistants deployed on limited computational resources. The model already ships support for an older version of llama.cpp; this PR brings that support in sync with the latest code. Tested with https://huggingface.co/WisdomShell/CodeShell-7B

```sh
python convert-hf-to-gguf.py CodeShell-7B/ --outfile codeshell-f16.gguf --outtype f16
make && ./main -m codeshell-f16.gguf -p "def fibonacci(n):"
./quantize codeshell-f16.gguf codeshell-q4.gguf q4_0 && ./main -m codeshell-q4.gguf -p "def fibonacci(n):"
```
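As a quick sanity check on the conversion step, the output file can be verified to be a GGUF container: per the GGUF specification, the file begins with the ASCII magic `GGUF` followed by a little-endian 32-bit version. A minimal illustrative helper (the function name and path argument are assumptions, not part of this PR):

```python
import struct

def check_gguf_header(path):
    """Return (is_gguf, version) read from the first 8 bytes of a file.

    A GGUF file starts with the 4-byte ASCII magic b"GGUF" followed by
    a little-endian uint32 format version.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        version = struct.unpack("<I", f.read(4))[0]
    return magic == b"GGUF", version
```

For example, `check_gguf_header("codeshell-f16.gguf")` should report the magic as present after a successful conversion.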

@ggerganov ggerganov merged commit 2b3b999 into ggml-org:master Jan 19, 2024
@weiye commented Jan 19, 2024

Thank you for merging our branch. When it's convenient, could you please add CodeShell to the list of supported models in the README?

@Stypox Stypox mentioned this pull request Jan 23, 2024
jordankanter pushed a commit to jordankanter/llama.cpp that referenced this pull request Feb 3, 2024
* llama: add codeshell support

* llama.cpp: fix codeshell with NeoX rope

Co-authored-by: Georgi Gerganov <[email protected]>

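The second commit message above mentions fixing CodeShell with the NeoX rope. As background, NeoX-style rotary position embeddings pair dimension `i` with dimension `i + d/2` (first half against second half), rather than rotating adjacent interleaved pairs as in the original GPT-J style. A minimal NumPy sketch of that rotation (an illustration, not llama.cpp's actual kernel):

```python
import numpy as np

def rope_neox(x, pos, theta_base=10000.0):
    """Apply NeoX-style RoPE to the last dimension of x at position pos.

    Dimension i is rotated against dimension i + d/2, with per-pair
    frequency theta_base ** (-i / (d/2)).
    """
    d = x.shape[-1]
    half = d // 2
    inv_freq = theta_base ** (-np.arange(half) / half)
    angles = pos * inv_freq
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    # 2-D rotation of each (x1[i], x2[i]) pair by its angle
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

At position 0 the rotation is the identity, and at any position it preserves the vector norm, which makes both properties easy to sanity-check.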
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
* llama: add codeshell support

* llama.cpp: fix codeshell with NeoX rope

Co-authored-by: Georgi Gerganov <[email protected]>
