@danbev danbev commented Aug 14, 2025

This commit adds support for the 18-layer model type in the Gemma3 series, which is the size of the Gemma3-270m model.

The motivation for this commit is that this was the only change required for Gemma3-270m to be converted to GGUF format and used with llama.cpp.

Once the model has been converted and uploaded to Hugging Face, it can be used like this:

```console
$ ./build/bin/llama-cli -hf ggml-org/gemma-3-270m-GGUF:Q8_0
```

@danbev danbev requested a review from ggerganov August 14, 2025 15:33
@danbev danbev merged commit 7a0de96 into ggml-org:master Aug 14, 2025
46 of 47 checks passed
@danbev danbev deleted the gemma-3-270m branch August 14, 2025 16:23