forked from ggml-org/llama.cpp
Issues: LostRuins/koboldcpp
#1506 Can't get plamo-13b to run under any circumstances. Is it not supported? (opened Apr 27, 2025 by Alexamenus)
#1505 System Info differs from llama.cpp: AVX2 = 0, F16C = 0, FMA = 0 (opened Apr 26, 2025 by vbooka1)
#1503 Null values for parameters cause an error with the OpenAI-compatible API (opened Apr 26, 2025 by hronoas)
#1497 CUDA error: function failed to launch on multi-GPU in ggml-cuda during matrix multiplication (opened Apr 21, 2025 by riunxaio)
#1491 Request to merge Microsoft/BitNet functionality, as it is fast, up to date, and necessary (opened Apr 19, 2025 by windkwbs)
#1482 --cli mode causes koboldcpp to close instantly or hit an error once input is sent (opened Apr 14, 2025 by wildwolf256)
#1481 GGML_ASSERT(cgraph->n_nodes < cgraph->size) failed - New Version (opened Apr 14, 2025 by DerRehberg)
#1473 Gemma 3 + mmproj + flash attention falls back to CPU decoding when using --quantkv (opened Apr 8, 2025 by vlawhern)
#1469 CUDA error: an illegal memory access was encountered [bug] (opened Apr 7, 2025 by jodleif)
#1454 [Feature Request] Add an option to load .mmproj files as CPU-only (opened Mar 29, 2025 by Teramanbr)
#1451 Error loading Qwen2.5-VL-32B-Instruct vision model: OSError [WinError -529697949] (opened Mar 29, 2025 by syzu111)