Conversation

@ggerganov ggerganov merged commit c8d0d14 into master Aug 28, 2025
51 of 55 checks passed
@ggerganov ggerganov deleted the gg/kv-cache-fix-cont branch August 28, 2025 14:09
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Aug 28, 2025
…nemotron-nano-15409

* origin/master:
ggml : fix SSM_SCAN for n_groups > 1 (ggml-org#15625)
kv-cache : fix find_slot to not search for continuous slot (ggml-org#15638)
model : jina-embeddings-v3 support (ggml-org#13693)
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Aug 28, 2025
…upport

* origin/master:
ggml : fix SSM_SCAN for n_groups > 1 (ggml-org#15625)
kv-cache : fix find_slot to not search for continuous slot (ggml-org#15638)
model : jina-embeddings-v3 support (ggml-org#13693)

Signed-off-by: Gabe Goodhart <[email protected]>
Minh141120 pushed a commit to menloresearch/llama.cpp that referenced this pull request Aug 29, 2025
Nexesenex added a commit to Nexesenex/croco.cpp that referenced this pull request Oct 6, 2025