llama : minor coding style fix for smollm3 #14605
Merged
Conversation
compilade approved these changes on Jul 10, 2025
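The diff itself is not rendered in this capture; the PR title indicates a style-only change to the smollm3 code. As a purely hypothetical illustration (not the actual #14605 diff, and with a made-up function name and parameter) of the kind of cosmetic cleanup llama.cpp review typically asks for, with identical behavior before and after:

```cpp
#include <cstdio>

// Hypothetical example only, not the actual #14605 diff: a style-only
// rewrite with identical behavior (spaces around operators, no redundant
// parentheses), in the spirit of llama.cpp coding conventions.
static bool layer_uses_rope(int il, int n_no_rope_layer_step) {
    // before: return ((il+1)%n_no_rope_layer_step)!=0;
    return (il + 1) % n_no_rope_layer_step != 0;
}

int main() {
    for (int il = 0; il < 8; ++il) {
        std::printf("layer %d uses rope: %d\n", il, layer_uses_rope(il, 4));
    }
    return 0;
}
```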
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request on Jul 10, 2025
* origin/master:
  cmake : do not search for curl libraries by ourselves (ggml-org#14613)
  SYCL: Initial set_rows kernel implementation (ggml-org#14562)
  llama : minor coding style fix for smollm3 (ggml-org#14605)
  cmake : bump llguidance version to v1.0.1 (ggml-org#14609)
  cmake : llguidance build parser library only (ggml-org#14608)
  cuda : support Falcon-H1 state size for SSM_SCAN (ggml-org#14602)

Signed-off-by: Gabe Goodhart <[email protected]>
olek-tether pushed a commit to tetherto/qvac-ext-lib-llama.cpp that referenced this pull request on Aug 15, 2025
* sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973) * ggml : do not output unprintable characters on GGUF load failure (#14381) * ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317) * ggml-cpu: add nnpa compile flag Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 4a9f60c201573128f73a65999b3e5cc497fae5c1) * ggml-cpu: add fp16->fp32 nnpa first Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 8d4a7987f9c1887f716be96250f2caeee0253929) * ggml-cpu: add fp32->fp16 Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 0ff0d6516247a41d2ade42b42cf0d676a4dd1627) * ggml-cpu: better variable names Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 2f58bbcbb89c183340e252362b2a40651f573f1f) * docs: update s390x docs Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 01b929491b50071a5d0572235dcf5a449da70aa7) * ggml-cpu: add debugging prints to see if dlf16 is correct Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix print vs printf Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix float placeholder Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: ensure fp16 and fp32 load and stores are called Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fp16 load ensured to hit Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: remove sigint from fp16 store for some reason, the function is not getting a hit when debugged with gdb. we will need to investigate further Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: activate nnpa for ggml_cpu_fp16_to_fp32 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: nnpa activate ggml_cpu_fp16_to_fp32 for 8 elements Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: nnpa switch to vec_xst test Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: switch to vec_xst for 4 element loops also Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: rework noop Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: remove noop, general code cleanup Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: clarify variable naming Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: activate nnpa for ggml_cpu_fp32_to_fp16 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add breakpoint for debugging Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: test fix for conversion failure Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: disable fp32->fp16 nnpa conversions for now there are some conversion failures in nnpa that requires the eyes of an ibm stsm. will create a separate pr to introduce the fp32->fp16 change. 
Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: switch to elif macro Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: reattempt fp32->fp16 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix typo Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: reattempt fp32->fp16 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix compiler types Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: change to typedef vector types Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add 4 element loops for fp32->fp16 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: clarified vector naming Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: bring back fp32->fp16 store nnpa Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: activate nnpa fp32->fp16 or fp16->fp32 compute Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add nnpa macro check in ggml-impl Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add missing __func__ Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: diagnose why __NNPA__ macro is not being defined Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: import vecintrin.h to fix compiler errors Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: update macro tests Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: move s390x typedef to own header file Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml-cpu: move s390x typedef to own header file" This reverts commit 157f856c34589566151630e294563a420702db39. Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: switch to importing ggml-cpu-impl instead Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix macro declaration Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: test more macros Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add debug prints Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: bruteforce macro definitions Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: move macro definitions Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add ggml-impl.h to cmakelists Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: switch to private macros Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: move s390x typedef to own header file Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 157f856c34589566151630e294563a420702db39) * ggml-cpu: move things around Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: bring back compile macros Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: switch to quotes for import Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add compiler error macro Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add s390x detection in ggml-src Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: bring back compile definitions Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: undo cmakelists work Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml-cpu: move s390x typedef to own header file" This reverts commit 18d79e1a30b39d9aaa0bd58400c5cf2c32135c9a. 
Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: remove typedefs.h Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: remove typedef from cmakelists Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add ggml-impl.h future notes Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: add todo comment for future reference Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: clarify naming of dlf16 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: remove unnecessary target compile definitions Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: move nnpa fp16->fp32 and fp32->fp16 to simd-mappings Signed-off-by: Aaron Teo <[email protected]> * ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu Signed-off-by: Aaron Teo <[email protected]> * docs: update broken huggingface link for s390x Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix duplicate func names during compile Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml-cpu: fix duplicate func names during compile" This reverts commit fbb733451f27677063b914d4f6c9a9841d45b38d. Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu" This reverts commit bd288e8fa52b5244f65cee21cb61062f1a9e0ca5. Signed-off-by: Aaron Teo <[email protected]> * ggml: refactor fp16<->fp32 simd to ggml-cpu Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix missing simd-mappings.h import in quants.c Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix missing simd-mappings.h within repack Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix amx mmq missing simd-mappings.h Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: attempt at fixing loongarch failing build Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: move nnpa together with other fp16<->fp32 simd Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: fix wrong refactor of ggml-base ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164176555 Signed-off-by: Aaron Teo <[email protected]> * ggml: remove dependency on ggml-cpu from ggml-base Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: rename all fp16<->fp32 macros to prefix with ggml_cpu ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164449406 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: remove mistaken fallback macro fallback logic was already implemented but i was too sleepy to realise Signed-off-by: Aaron Teo <[email protected]> * ggml: move ggml_table_f32_f16 to ggml-cpu ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006 Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures" This reverts commit 32a3533564bdb7902cefb9c89b1c9e956a81ce29. Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml: move ggml_table_f32_f16 to ggml-cpu" This reverts commit 9e40d984ad27d7b60392fb2b7548885201864fe4. 
Signed-off-by: Aaron Teo <[email protected]> * ggml: move ggml_table_f32_f16 to ggml-cpu ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006 Signed-off-by: Aaron Teo <[email protected]> (cherry picked from commit 9e40d984ad27d7b60392fb2b7548885201864fe4) * ggml: move ggml_table_f32_f16 to ggml-cpu.c Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: extern c ggml_table_f32_f16 + chore docs Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h we rely on the variable declaration in ggml-cpu.c instead Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h" This reverts commit f71b21d2f74f5e03ec0c2b4fefd3cbf395aecf16. Signed-off-by: Aaron Teo <[email protected]> * ggml-cpu: bring back ggml_table_f32_f16 Signed-off-by: Aaron Teo <[email protected]> * Revert "ggml-cpu: bring back ggml_table_f32_f16" This reverts commit 2dce119178bed5ef5c8398c4230ddd14fef80e49. Signed-off-by: Aaron Teo <[email protected]> * fix ggml time initialization * fix f32_f16 table init * remove extra line --------- Signed-off-by: Aaron Teo <[email protected]> Co-authored-by: slaren <[email protected]> * musa: enable fp16 mma (all) and cublas on qy2 (#13842) * musa: enable fp16 mma (all) and cublas on qy2 Signed-off-by: Xiaodong Ye <[email protected]> * Update ggml/src/ggml-cuda/ggml-cuda.cu Co-authored-by: Johannes Gäßler <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> * musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues Signed-off-by: Xiaodong Ye <[email protected]> --------- Signed-off-by: Xiaodong Ye <[email protected]> Co-authored-by: Johannes Gäßler <[email protected]> * docs: update s390x documentation + add faq (#14389) * docs: update s390x documentation + add faq Signed-off-by: Aaron Teo <[email protected]> * docs: add s390x z17 build q&a Signed-off-by: Aaron Teo <[email protected]> --------- Signed-off-by: Aaron Teo <[email protected]> * metal : batch rows copy in a single threadgroup (#14384) * metal : batch rows copy in a single threadgroup ggml-ci * metal : handle some edge cases when threadgroup size is not a power of 2 ggml-ci * metal : add special-case mat-vec mul for ne00 == 4 (#14385) ggml-ci * llama : return mistral-v7-tekken as default template only (#14390) * cmake: regen vulkan shaders when shaders-gen sources change (#14398) * Add shaders-gen sources as target deps * model : gemma3n text-only (#14400) * gemma3n * add llm_graph_input_one * convert : fix broken sentencepiece vocab (#14416) * ggml : add ggml_set_rows (#14274) * ggml : add ggml_set_rows Add ggml_set_rows(a, b, c) which copies rows from 'b' into 'a' using indices from 'c'. 
ref: #8366 * use I64 for indices * ggml : add repeat impl for i64 * ggml : add ggml_is_contiguous_rows * ggml : ggml_set_rows support broadcast * ggml : ggml_set_rows support quantized dst ggml-ci * ggml : support GGML_TYPE_F32 ".from_float" trait * ggml : ggml_set_rows update comment + better index name * tests : add ggml_set_rows * metal : add ggml_set_rows implementation ggml-ci * ggml : simplify forward_dup_f32 * ggml : fix supports_op * tests : add comment to set_rows * ggml : leave the repeat_i64 for a separate PR ggml-ci * ggml : set_rows use std::min instead of MIN * ggml : better error message for set_rows unsupported type * metal : perform op->type check only once * tests : more consistent implementation + more tests ggml-ci --------- Co-authored-by: Georgi Gerganov <[email protected]> * recurrent : call balloc split_reset() in init_batch() (#14414) ggml-ci * graph : make llm_graph_context destructor virtual (#14410) ggml-ci * vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427) This setting needs to be passed through to vulkan-shaders-gen * ci : fix windows build and release (#14431) * fix async_mode bug (#14432) * model : add support for ERNIE 4.5 0.3B model (#14408) Add Day-0 support for Baidu ERNIE 4.5 0.3B model. Signed-off-by: Weizhao Ouyang <[email protected]> * vulkan: lock accesses of pinned_memory vector (#14333) * vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378) * CUDA: add bf16 and f32 support to cublas_mul_mat_batched (#14361) * CUDA: add bf16 and f32 support to cublas_mul_mat_batched * Review: add type traits and make function more generic * Review: make check more explicit, add back comments, and fix formatting * Review: fix formatting, remove useless type conversion, fix naming for bools * vulkan: Add fusion support for RMS_NORM+MUL (#14366) * vulkan: Add fusion support for RMS_NORM+MUL - Add a use_count to ggml_tensor, so we can detect if an output is used more than once. - Change the ggml-vulkan rms_norm shader to optionally multiply by another tensor. - Add detection logic and basic fusion logic in ggml-vulkan. - Add some testing support for fusion. Rather than computing one node at a time, allow for computing the whole graph and just testing one node's results. Add rms_norm_mul tests and enable a llama test. * extract some common fusion logic * fix -Winconsistent-missing-override * move ggml_can_fuse to a common function * build fix * C and C++ versions of can_fuse * move use count to the graph to avoid data races and double increments when used in multiple threads * use hash table lookup to find node index * change use_counts to be indexed by hash table slot * minimize hash lookups style fixes * last node doesn't need single use. fix type. handle mul operands being swapped. 
* remove redundant parameter --------- Co-authored-by: slaren <[email protected]> * ggml : implement REGLU/GEGLU/SWIGLU ops (#14158) * implement unary REGLU/GEGLU/SWIGLU cpu ops * relax constraints * duplicate shape of source * fix ggml_vec_geglu_f16 * special case gated ops * implement unary REGLU/GEGLU/SWIGLU cuda ops * tighten constraints again * refactor into GGML_GLU_OP * metal : add glu kernels ggml-ci * add CUDA_GLU_BLOCK_SIZE [no ci] * more constraints and use 64bit ints ggml-ci * 64bit multiplication [no ci] * implement swapped variants (cpu/cuda) * update comment [no ci] ggml-ci * Vulkan: Add GLU ops and shaders * SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate * ggml : implement GLU for split up/gate (#14181) * implement GLU for split up/gate * add tests for ggml_glu_split * Vulkan: Implement glu_split logic and shader support * add split to logging [no ci] * SYCL: refactor element_size ops and add split up and gate support to gated kernels * SYCL: switch GEGLU to use tanh approximation --------- Co-authored-by: 0cc4m <[email protected]> Co-authored-by: Akarshan <[email protected]> * GGML: increase OP count in assertion * Refactor: Optimize SYCL element-wise operations with unary function inlining This commit refactors the SYCL element-wise operations to improve performance by: - Inlining unary operations (sgn, abs, elu, gelu, silu, etc.) to reduce kernel launch overhead. - Introducing helper functions `op_xxx` for each unary operation to encapsulate the logic. - Replacing direct kernel calls with calls to these inlined functions. - Using `__dpct_inline__` to encourage compiler inlining. - Minor code cleanup and consistency improvements. The changes aim to reduce kernel launch overhead and improve the overall efficiency of element-wise operations on SYCL devices. * vulkan: Increase workgroup size for GLU, for performance (#14345) * vulkan: Increase workgroup size for GLU, for performance * vulkan: change GLU shaders to do one element per invocation rather than one row per workgroup * merge fix * metal : add support for split and swap ggml-ci --------- Co-authored-by: Georgi Gerganov <[email protected]> Co-authored-by: 0cc4m <[email protected]> Co-authored-by: Akarshan <[email protected]> Co-authored-by: Jeff Bolz <[email protected]> * ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (#14443) * SYCL: disable faulty fp16 exp kernel (#14395) * SYCL: disable faulty fp16 CPU exponent for now * Revert "SYCL: disable faulty fp16 CPU exponent for now" This reverts commit ed0aab1ec31b4eb4b0f275dd7acd41d96a375202. 
* SYCL: disable faulty fp16 CPU exponent for now * Fix logic of disabling exponent kernel * server : fix appearance of the chats list context menu for Safari (#14322) * server : support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (#13196) * initial commit for handling extra template kwargs * enable_thinking and assistant prefill cannot be enabled at the same time * can set chat_template_kwargs in command line * added doc * fixed formatting * add support for extra context in generic template init * coding standard: common/chat.cpp Co-authored-by: Georgi Gerganov <[email protected]> * coding standard: common/chat.cpp Co-authored-by: Georgi Gerganov <[email protected]> * Apply suggestions from code review coding standard: cosmetic changes Co-authored-by: Georgi Gerganov <[email protected]> * fix merge conflict * chat.cpp: simplify calls to apply to ensure systematic propagation of extra_context (+ the odd existing additional_context) * normalize environment variable name * simplify code * prefill cannot be used with thinking models * compatibility with the new reasoning-budget parameter * fix prefill for non thinking models --------- Co-authored-by: Georgi Gerganov <[email protected]> Co-authored-by: Olivier Chafik <[email protected]> * scripts : make the shell scripts cross-platform (#14341) * cmake : Remove redundant include path in CMakeLists.txt (#14452) * Update docker.yml 修改docker.yml文件中的内容使其停止周期性的运行该workflow,如果想要运行该workflow可以手动启动 * Remove redundant include path in CMakeLists.txt The parent directory '..' was removed from the include directories for the ggml-cpu-feats target, to avoid unnecessary include paths. * Enable scheduled Docker image builds Uncomments the workflow schedule to trigger daily Docker image rebuilds at 04:12 UTC, improving automation and keeping images up to date. * test-backend-ops : disable llama test (#14461) * ggml-cpu: sycl: Re-enable exp f16 (#14462) * metal : disable fast-math for some cpy kernels (#14460) * metal : disable fast-math for some cpy kernels ggml-ci * cont : disable for q4_1 ggml-ci * cont : disable for iq4_nl ggml-ci * memory : correctly handle failure in apply() (#14438) ggml-ci * Add Conv2d for CPU (#14388) * Conv2D: Add CPU version * Half decent * Tiled approach for F32 * remove file * Fix tests * Support F16 operations * add assert about size * Review: further formatting fixes, add assert and use CPU version of fp32->fp16 * opencl : add GEGLU, REGLU, SWIGLU (#14456) * ggml-quants : rename best_mad to best_error (ggml/1283) This commit renames the variable `best_mad` to `best_error` in the `make_qkx2_quants` function. The motivation for this is that the name `best_mad` can be somewhat confusing if mean absolute deviation (MAD) is not in use. 
* ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285) * add "align corners" mode for bilinear upscale, and allow downscaling * add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag * test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners * sync : ggml ggml-ci * ggml : remove trailing whitespace (#0) * add GELU_ERF (#14455) * vulkan: Split large mul_mat_id to fit in shared memory (#14451) * CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (#14411) * [CANN]update to aclnnGroupedMatmulV2 Signed-off-by: noemotiovon <[email protected]> * Support MUL_MAT_ID on 310p Signed-off-by: noemotiovon <[email protected]> * fix editorconfig Signed-off-by: noemotiovon <[email protected]> --------- Signed-off-by: noemotiovon <[email protected]> * Add Vulkan images to docker.md (#14472) Right now it's not easy to find those. * ci : disable fast-math for Metal GHA CI (#14478) * ci : disable fast-math for Metal GHA CI ggml-ci * cont : remove -g flag ggml-ci * ggml : Callback before abort (#14481) * Add a callback that will be called just before abort. This allows apps without a console to display a message to the user and save data if needed. * Return previous callback to allow callback chaining * style fixes --------- Co-authored-by: Diego Devesa <[email protected]> * github : add OpenCL backend to issue templates (#14492) * ci : add OpenCL to labeler workflow (#14496) * opencl : update upscale to support align corners (#14488) * opencl : skip empty nodes on cgraph compute (#14491) * simple-chat : fix context-exceeded condition (#14494) * simple-chat : fix context-exceeded condition ggml-ci * cont : fix n_ctx_used computation ggml-ci * opencl : fix possible buffer overflow in dump_tensor (#14490) * ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (#14435) ggml-ci * vulkan: support softmax/FA batch and broadcast (#14449) * CUDA: broadcasting for FlashAttention mask (#14500) * CUDA: add softmax broadcast (#14475) * CUDA: add softmax broadcast * Pass by const ref * Review: Use blockDims for indexing, remove designated initializers * Add TODO for noncontigous input/output * Set RPATH to "@loader_path" / "$ORIGIN" to ensure executables and dynamic libraries search for dependencies in their origin directory. (#14309) * ggml : add version function to get lib version (ggml/1286) * ggml : add version function to get lib version This commit adds a function `ggml_version()` to the ggml library that returns the version of the library as a string. The motivation for this is that it can be useful to be able to programmatically check the version of the ggml library being used. Usage: ```c printf("GGML version: %s\n", ggml_version()); ``` Output: ```console GGML version: 0.0.2219 ``` * ggml : add ggml_commit() --------- Co-authored-by: Georgi Gerganov <[email protected]> * sync : ggml ggml-ci * llama : initial Mamba-2 support (#9126) * llama : initial Mamba-2 support * ggml : SIMD ggml_ssm_scan for Mamba-2 * ggml : improve ggml_mul speed when masking recurrent states * llama : support running Mamba-Codestral-7B-v0.1 * llama : fix Mamba-2 conv state saving * ggml : make the ggml_mul fast broadcast path more consistently formatted * llama : remove unused variable * llama : add missing break * convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present The tokenzier.json of Mamba-Codestral-7B-v0.1 otherwise requires workarounds to work correctly. 
* llama : avoid redundant state copy for Mamba 1 and 2 * metal : attempt to adapt SSM_SCAN for Mamba-2 * metal : fix SSM_SCAN pipeline scope * metal : use log and exp instead of log1pf and expf in SSM_SCAN * metal : remove unused arguments for SSM_SCAN The max index is 31, so trimming the arguments is necessary. * metal : add back n_seqs to SSM_SCAN args Whoops, this is needed for the offset in the concatenated output. * metal : fix SSM_SCAN state head offset * metal : fix wrong number of tokens per sequence in SSM_SCAN * ggml : remove unused fast broadcast path in GGML_MUL This was initially added because states were masked with ggml_mul, but this is no longer done and so this "optimisation" is no longer necessary, or at least not worth the additional code complexity. * ggml : avoid multiply by D in GGML_OP_SSM_SCAN This makes the weight buft detection in src/llama.cpp simpler. * convert : transpose Mamba-2 A, D and reshape SSM_NORM This breaks existing conversions of Mamba-2 models to avoid some reshapes. Not sure if it's a good idea, but it makes the graph slightly cleaner. * llama : more appropriate SSM_SCAN and SSM_CONV buft support checks * convert : fix flake8 lint * metal : fix confusion between ; and , * metal : add missing args for nb references in ssm_scan_f32_group * metal : single-user mamba2 inference works * kv-cache : remove const_cast when setting inputs for s_copy And also fix multi-user inference for recurrent models by using cell_id instead of i as the kv cell index when populating s_copy. * convert : avoid AutoConfig for Mamba and Mamba2 hparams * kv-cache : allow context shift for recurrent models * graph : fix recurrent state copies when avoiding copies Works, but using lambda functions might not be that clean. * ggml : fix mamba2 ssm scan when compiled with SVE * ggml-cpu : reorder SVE FMA for consistency with other SIMD arches * cuda : implement ssm scan for Mamba2 There is still room for improvement, but it works! * cuda : adapt Mamba1 ssm scan to shape changes from Mamba2 * mamba : fix mismatched new and delete size for llm_build_mamba Subclasses of llm_graph_context cannot have extra fields, because the called destructor is not the one from the subclass. 
This otherwise would cause problems when runnning Mamba-(1|2) inference when compiled -DGGML_SANITIZE_ADDRESS=ON * cuda : graceful fallback for Mamba-1 models with weird embd size * gguf-py : add support for chat template jinja files (#14508) * add support for chat template jinja files * remove gemma3n hack * CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) * ggml : remove kompute backend (#14501) ggml-ci * ggml : fix FA mask dim 2 and 3 (#14505) * ggml : fix FA mask dim 2 and 3 ggml-ci * backends : unsupport batched FA in CUDA and Vulkan ggml-ci * vulkan : disable FA for mask->ne[2] != 1 * kv-cache : use ggml_set_rows (#14285) * kv-cache : use ggml_set_rows ggml-ci * graph : separate k and v indices ggml-ci * cont : remove redundant ifs ggml-ci * kv-cache : improve find_slot impl * kv-cache : bounds-check when accessing slot_info indices * kv-cache : add comments ggml-ci * ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends ggml-ci * convert : correct gemma 3n conversion (#14450) * convert : correct gemma 3n conversion * rm redundant code * Fix conditional enabling following arch checks for ggml-sycl (#14504) Signed-off-by: nscipione <[email protected]> * ggml: backward pass for split swiglu (#14483) * vulkan: support mixed/deepseekR1 FA head sizes (#14509) * vulkan: better parameterize FA by head sizes * vulkan: support mixed/deepseekR1 FA head sizes * opencl : broadcast for soft_max (#14510) * ggml : implement GEGLU_ERF and GEGLU_QUICK ops (#14445) * CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (#14002) Co-authored-by: luyuhong <[email protected]> * batch : add n_used count (#14512) ggml-ci * graph : prepare for 4D mask (#14515) ggml-ci * batch : add optional for sequential equal split (#14511) ggml-ci * metal : disable fast math in all quantize kernels (#14528) ggml-ci * test-backend-ops: add support for specifying output format (#14368) * test-backend-ops: add support for specifying output format Signed-off-by: Xiaodong Ye <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> * Add build_commit and build_number in test_result Signed-off-by: Xiaodong Ye <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> * refactor Signed-off-by: Xiaodong Ye <[email protected]> * Get build commit from ggml_commit() Signed-off-by: Xiaodong Ye <[email protected]> * Merge errors into test_operation_info && address review comments Signed-off-by: Xiaodong Ye <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> * remove visitor nonsense * remove visitor comment Signed-off-by: Xiaodong Ye <[email protected]> * Address review comments Signed-off-by: Xiaodong Ye <[email protected]> --------- Signed-off-by: Xiaodong Ye <[email protected]> Co-authored-by: slaren <[email protected]> * eval-callback : check for empty input (#14539) * opencl: add GELU_ERF (#14476) * server : fix assistant prefilling when content is an array (#14360) * vulkan: Handle updated FA dim2/3 definition (#14518) * vulkan: Handle updated FA dim2/3 definition Pack mask boolean and n_head_log2 into a single dword to keep the push constant block under the 128B limit. * handle null mask for gqa * allow gqa with dim3>1 * vulkan: fix rms_norm+mul fusion (#14545) The fused operation was grabbing the epsilon value from the wrong place. Add an env var to disable fusion. 
Add some missing checks for supported shapes/types. Handle fused rms_norm+mul in check_results. * vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (#14485) Commit taken from remyoudompheng's PR https://github.com/ggml-org/llama.cpp/pull/12260 Co-authored-by: Rémy Oudompheng <[email protected]> * CUDA: add bf16 and i32 to getrows (#14529) * llama : remove ggml_cont where possible (#14568) * llama : fix incorrect minicpm3 v_states shape (#14571) * musa: fix build warnings (unused variable) (#14561) Signed-off-by: Xiaodong Ye <[email protected]> * CUDA: add bilinear interpolation for upscale (#14563) * cuda : fix rope with partial rotation and non-cont src (#14580) * cuda : fix rope non-cont ggml-ci * cont : fix multi-rope + add test ggml-ci * sycl : try fix ggml-ci * cont : fix sycl + clean-up cuda ggml-ci * vulkan: increase timeout for CI (#14574) * model : add hunyuan moe (#14425) * model : add hunyuan moe * tokenizer ok * fix tensor name * cgraph init * chat template * wip * almost working * skip embed, fix bos * cleanup * yarn scaling * cleanup * correct rope type * failed token fix * ntk alpha freq_base * tokenization working * cleanup and pr changes * vocab_size sanity check * ntk alpha generic * Update convert_hf_to_gguf.py * Apply suggestions from code review * fix regression * fix style --------- Co-authored-by: kooshi <[email protected]> * server: Add ability to mount server at prefix (#14544) * Add server_prefix * Correct server path env * Rename cli flag to --api-prefix * Change all to api_prefix * vulkan : fix rope with partial rotation and non-cont src (#14582) * memory : fix broken batch splits for recurrent cache (#14575) Splits producing more than one ubatch per batch for recurrent models were broken with #14512. This fixes it by moving the completeness check after the ubatch split loop. * model : add SmolLM3 (#14581) * Init - first pass. * Model -> ModelBase. * fix errors in conversion. * Update the graph. * up. * up. * wip * cgraph ok * rm redundant code --------- Co-authored-by: Vaibhavs10 <[email protected]> * model : fix hunyuan moe chat template (#14584) Signed-off-by: stevenkuang <[email protected]> * vulkan: optimize flash attention split_k_reduce (#14554) * vulkan: allow FA split_k with smaller KV values * vulkan: spread split_k_reduce work across more threads k_num can get rather large. Use the whole workgroup to reduce the M/L values. Launch a thread for each element in the HSV dimension of the output. Helps a lot for large HSV (like deepseek). * convert : fix smollm3 jinja template (#14586) * model : add support for Falcon-H1 family (#14534) * v1 * push more fixes * another fix * fix * more fixes * minor fix * more cleaning on python code * python fixes * changed precision for multipliers float 32->64 * fixes * another fix * fix * pre-norm -> norm * fix * Revert "fix" This reverts commit 243e4d1a50bd73467d99f6b289b9a1826f83b94b. 
* fix * small fix ffn_norm * try * mix instead of max * fix vocab size * conflict solve * fixed multipliers * falcon-h1 specefic vocab resolved * read arch from gguf.MODEL_ARCH * mamba_d_ssm added to d_inner find_hparam * remove unused functions from gguf_writer.py * override modify_tensors instead of get_tensors * fix conversion and d_inner * added some cb functions for debugging puposes * inp_out_ids moved outside of layers loop * mup_vec create as float64 * fix rope_theta * injected mup * clean ups * rm extra space * rm unused MAMBA_CHUNK_SIZE * rm unused key * add bos False * changed ROPE_TYPE * cleaning debugging stuff * cleaning debug quant * fix comment * some cleanups * some cleanups * Update src/llama-model-loader.cpp * more cleanups * moe cleanuips * d_ssm -> d_inner; * cleaning unused hparams * cleanup * more cleanups * more cleanups on python conversion; * minor cleanups * Apply suggestions from code review Co-authored-by: Georgi Gerganov <[email protected]> * remove todo * added falcon-h1 * tensor not required * clean * remove unneeded attributes * more cleanups and fixed conversion * remove final_norm * flake8 fixes * Update src/llama-model.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * flake8 fixes * Update src/llama-hparams.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-model.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-model.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-arch.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update convert_hf_to_gguf.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * added hashes * Update src/llama-arch.cpp Co-authored-by: Georgi Gerganov <[email protected]> * Update src/llama-vocab.cpp Co-authored-by: Georgi Gerganov <[email protected]> * update the update file * Revert "update the update file" This reverts commit 082ab4ad2a3927384d878666a5f8cae4eb15f577. * fix: address suggestions * fix: update convert_hf_to_gguf.py * Update gguf-py/gguf/constants.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-model-loader.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * d_inner fixed * Update src/llama-model.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * reshaping ssm_norm for 34B * removing generate_mup * remove duplicates metadata keys * rm comment * final comment * fix unused args * fix constants * fix bad merge * Update src/llama-model.cpp Co-authored-by: compilade <[email protected]> * falcon-h1: remove unused ssm_in_b and bad merge * Update src/llama-model.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * falcon-h1: fix last comment * Update convert_hf_to_gguf.py Co-authored-by: compilade <[email protected]> * falcon-h1: revert add_add_bos(False) * falcon-h1: fix tied weights * falcon-h1: remove whitespace * falcon-h1: fix wrong size param * falcon-h1: fix whitespace issues --------- Co-authored-by: younesbelkada <[email protected]> Co-authored-by: Younes B <[email protected]> Co-authored-by: Georgi Gerganov <[email protected]> Co-authored-by: Sigbjørn Skjæret <[email protected]> Co-authored-by: compilade <[email protected]> * llama : remove unintended whitespace (#14592) * model : add skt/A.X-4.0 model vocabulary (#14589) * ggml : prevent integer overflow in gguf tensor size calculation (#14595) * ggml : add ggml_scale_bias (#14417) * ggml : add ggml_scale_bias * ggml_vec_mad1_f32 * add more simd * add CUDA * sycl * vulkan * cann (placeholder) * opencl * will this fix cpu? 
* fix cuda * suggestions from coderabbit * fix cann compile error * vDSP_vsmsa * rm __ARM_FEATURE_SVE * use memcpy for op params * make code looks more consistent * use scalar for __ARM_FEATURE_SVE * add x param to ggml_vec_mad1_f32 * llama : support Jamba hybrid Transformer-Mamba models (#7531) * wip: llama : separate recurrent states from the KV cache This will be necessary to support Jamba (and other recurrent models mixed with Attention). Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states. * llama : use std::find for seq_nodes in llama_rs_cache * llama : state checkpoints for recurrent models * llama : correctly handle more edge cases for the rs cache * llama : rename many llama_kv_cache_* functions * llama : remove useless return value for some llama_cache_* functions * llama : rethink recurrent state cell counts * llama : begin work on support for variable GQA This will also be useful for Jamba if we consider the Mamba layers to have 0 KV heads. * llama : gracefully fail when not finding hybrid slot * llama : support Jamba * llama : fix BERT inference without KV cache * convert-hf : check for unprocessed Jamba experts * convert-hf : support Mini-Jamba conversion * llama : fix Jamba quantization sanity checks * llama : sequence-length-aware batch splitting * llama : use equal-sequence-length sub-batches for recurrent models * ggml : simplify SSM-related operators * llama : make recurrent state slot allocation contiguous * llama : adapt internal uses of batches to llama_ubatch * llama : fix batch split output count for embeddings * llama : minimize swaps when reordering logits This reduces overhead when running hellaswag on thousands of sequences with very small 100k params Mamba models. * llama : fix edge case finding batch seq_id of split recurrent cell This otherwise was a problem when running the HellaSwag benchmark with small batch sizes, making it crash. * llama : avoid copies for simple batch splits * ggml : make ggml_ssm_scan not modify its source tensors * llama : fix shared recurrent tail cell count for small ubatch sizes Otherwise it was impossible to run the 'parallel' example with '-ub 1' with a Mamba or Jamba model. * llama : fix .base() compilation error on Windows * llama : allow doing the equivalent of SSM_CONV with SUM_ROWS and MUL * ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors The implementation already supported it, and this makes Mamba's conv step slightly faster. * mamba : fix non-contiguous usage of ggml_silu * llama : session saving and reloading for hybrid models * convert_hf : fix Jamba conversion * llama : fix mixed signedness comparison * llama : use unused n_embd_k_gqa in k_shift This also slightly reduces the diff from the master branch * llama : begin renaming llama_past back to llama_kv_cache * llama : remove implicit recurrent state rollbacks * llama : partially apply clang-format style * convert : fix jamba conv1d shape squeezing * graph : add back hybrid memory graph input But this time it contains the sub-cache graph inputs. This *should* make it easier to handle updating the inputs when caching the graph (eventually). 
* model : add Jamba to Mamba-specific hparams printing * jamba : remove redundant nullptr initializations * model : remove unnecessary prefix for tensor loading constants Co-authored-by: Sigbjørn Skjæret <[email protected]> * model : use ggml_swiglu_split for Mamba Co-authored-by: Sigbjørn Skjæret <[email protected]> * model : make falcon-h1 use shared mamba2 layer builder * memory : avoid referring to KV in recurrent cache logs * gguf-py : avoid adding duplicate tensor mappings for Jamba Some of the tensor names are common with Llama4 --------- Co-authored-by: Sigbjørn Skjæret <[email protected]> * llama : remove llm_graph_input_one (#14603) * cuda : support Falcon-H1 state size for SSM_SCAN (#14602) * cmake : llguidance build parser library only (#14608) * cmake : bump llguidance version to v1.0.1 (#14609) * llama : minor coding style fix for smollm3 (#14605) * SYCL: Initial set_rows kernel implementation (#14562) * SYCL: Initial set_rows kernel implementation * Revert max_threads to 256 * Refactor set_rows and address review comments * Deduplicate conversion function * Remove guard before kernel launch and refactor * Fix and add back SFINAE * cmake : do not search for curl libraries by ourselves (#14613) * cmake : do not search for curl libraries by ourselves * run : do not search for curl libraries by ourselves * Docs: script to auto-generate ggml operations docs (#14598) * Docs: script to auto-generate ggml operations docs * Review: formatting changes + change github action * Use built-in types instead of typing * docs : add BLAS and Metal ops --------- Co-authored-by: Georgi Gerganov <[email protected]> * Smoldocling support (#14597) * support for smoldocling * fixed merge conflicts * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Gabe Goodhart <[email protected]> * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Gabe Goodhart <[email protected]> * merge conflicts * pre tokenizer merge fix * convert : fix smollm3 jinja template (#14586) Signed-off-by: ryan-mangeno <[email protected]> * support for smoldocling Signed-off-by: ryan-mangeno <[email protected]> * fixed merge conflicts Signed-off-by: ryan-mangeno <[email protected]> * Update src/llama-vocab.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-model.h Co-authored-by: Sigbjørn Skjæret <[email protected]> * safetensors tensor mapping Signed-off-by: ryan-mangeno <[email protected]> * added back accidental removal of clean spaces for hunyuan * Update src/llama-vocab.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * updated hash and reordererd model list * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-vocab.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update include/llama.h Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update convert_hf_to_gguf.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update convert_hf_to_gguf_update.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update src/llama-vocab.cpp Co-authored-by: Sigbjørn Skjæret <[email protected]> * removed old tensor name * removed tensor mappings -> handled by smolvlm * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Sigbjørn Skjæret <[email protected]> * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Sigbjørn Skjæret <[email 
protected]> * Update gguf-py/gguf/tensor_mapping.py Co-authored-by: Sigbjørn Skjæret <[email protected]> --------- Signed-off-by: ryan-mangeno <[email protected]> Co-authored-by: Gabe Goodhart <[email protected]> Co-authored-by: Xuan-Son Nguyen <[email protected]> Co-authored-by: Sigbjørn Skjæret <[email protected]> Co-authored-by: compilade <[email protected]> * opencl: add `set_rows` for `f16` and `f32` (#14547) * opencl: add `set_rows` for `f16` and `f32` * opencl: better choose workgroup size for `set_rows` * opencl: add tiled mul_mat_f16_f32 (#14535) * add tiled mul_mat_f16_f32 * fix trailing whitespace * add insightful comments * model : Granite Four (#13550) * wip: llama : separate recurrent states from the KV cache This will be necessary to support Jamba (and other recurrent models mixed with Attention). Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states. * llama : use std::find for seq_nodes in llama_rs_cache * llama : state checkpoints for recurrent models * llama : correctly handle more edge cases for the rs cache * llama : rename many llama_kv_cache_* functions * llama : remove useless return value for some llama_cache_* functions * llama : rethink recurrent state cell counts * llama : begin work on support for variable GQA This will also be useful for Jamba if we consider the Mamba layers to have 0 KV heads. * llama : gracefully fail when not finding hybrid slot * llama : support Jamba * llama : fix BERT inference without KV cache * convert-hf : check for unprocessed Jamba experts * convert-hf : support Mini-Jamba conversion * llama : fix Jamba quantization sanity checks * llama : sequence-length-aware batch splitting * llama : use equal-sequence-length sub-batches for recurrent models * ggml : simplify SSM-related operators * llama : make recurrent state slot allocation contiguous * llama : adapt internal uses of batches to llama_ubatch * llama : fix batch split output count for embeddings * llama : minimize swaps when reordering logits This reduces overhead when running hellaswag on thousands of sequences with very small 100k params Mamba models. * llama : fix edge case finding batch seq_id of split recurrent cell This otherwise was a problem when running the HellaSwag benchmark with small batch sizes, making it crash. * llama : avoid copies for simple batch splits * llama : use im2col and mul_mat to perform convolution for Mamba This removes the need for ggml_ssm_conv!!! But performance seems slighly worse on my system, especially for prompt processing. Maybe ggml_mul_mat isn't optimized for small row sizes? More performance testing is necessary until GGML_OP_SSM_CONV is removed. * ggml : make ggml_ssm_scan not modify its source tensors * llama : fix shared recurrent tail cell count for small ubatch sizes Otherwise it was impossible to run the 'parallel' example with '-ub 1' with a Mamba or Jamba model. * llama : fix .base() compilation error on Windows * llama : allow doing the equivalent of SSM_CONV with SUM_ROWS and MUL * ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors The implementation already supported it, and this makes Mamba's conv step slightly faster. * llama : rename llama_cache to llama_past This can be changed back later if the name change is wrong. I was renaming the functions anyway to generalize kv-cache-related functions to hybrid and recurrent model architectures. 
I think llama_past is a better name than llama_cache for a combined kv cache and recurrent state cache, because the states it contains pretty much always come before the newly-added ones for any particular sequence. Also 'llama_past_clear' sounds more obvious in what it does than 'llama_kv_cache_clear'. The future is what the models generate. (For embeddings, the kv cache isn't really used anyway) Still, I'm open to better suggestions. * examples : replace llama_kv_cache_seq_* with llama_past_seq_* * mamba : fix non-contiguous usage of ggml_silu * llama : initial Mamba-2 support * ggml : SIMD ggml_ssm_scan for Mamba-2 * ggml : improve ggml_mul speed when masking recurrent states * llama : support running Mamba-Codestral-7B-v0.1 * llama : fix Mamba-2 conv state saving * ggml : make the ggml_mul fast broadcast path more consistently formatted * llama : remove unused variable * llama : add missing break * convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present The tokenzier.json of Mamba-Codestral-7B-v0.1 otherwise requires workarounds to work correctly. * llama : session saving and reloading for hybrid models * convert_hf : fix Jamba conversion * llama : fix mixed signedness comparison * llama : use unused n_embd_k_gqa in k_shift This also slightly reduces the diff from the master branch * llama : begin renaming llama_past back to llama_kv_cache * llama : avoid redundant state copy for Mamba 1 and 2 * metal : attempt to adapt SSM_SCAN for Mamba-2 * metal : fix SSM_SCAN pipeline scope * metal : use log and exp instead of log1pf and expf in SSM_SCAN * metal : remove unused arguments for SSM_SCAN The max index is 31, so trimming the arguments is necessary. * metal : add back n_seqs to SSM_SCAN args Whoops, this is needed for the offset in the concatenated output. * metal : fix SSM_SCAN state head offset * metal : fix wrong number of tokens per sequence in SSM_SCAN * ggml : remove unused fast broadcast path in GGML_MUL This was initially added because states were masked with ggml_mul, but this is no longer done and so this "optimisation" is no longer necessary, or at least not worth the additional code complexity. * ggml : avoid multiply by D in GGML_OP_SSM_SCAN This makes the weight buft detection in src/llama.cpp simpler. * convert : transpose Mamba-2 A, D and reshape SSM_NORM This breaks existing conversions of Mamba-2 models to avoid some reshapes. Not sure if it's a good idea, but it makes the graph slightly cleaner. * llama : more appropriate SSM_SCAN and SSM_CONV buft support checks * convert : fix flake8 lint * llama : remove implicit recurrent state rollbacks * llama : partially apply clang-format style * metal : fix confusion between ; and , * metal : add missing args for nb references in ssm_scan_f32_group * metal : single-user mamba2 inference works * kv-cache : remove const_cast when setting inputs for s_copy And also fix multi-user inference for recurrent models by using cell_id instead of i as the kv cell index when populating s_copy. * convert : avoid AutoConfig for Mamba and Mamba2 hparams * kv-cache : allow context shift for recurrent models * graph : fix recurrent state copies when avoiding copies Works, but using lambda functions might not be that clean. * ggml : fix mamba2 ssm scan when compiled with SVE * ggml-cpu : reorder SVE FMA for consistency with other SIMD arches * cuda : implement ssm scan for Mamba2 There is still room for improvement, but it works! 
* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2 * feat: Add conversion for Bamba models This is borrowed and adapted from the original implementation https://github.com/ggml-org/llama.cpp/pull/10810 Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Add Granite 4 conversion This is a manual copy from my draft branch https://github.com/gabe-l-hart/llama.cpp/blob/GraniteFourDraft/convert_hf_to_gguf.py#L5076 Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Plumb bamba through llama-arch Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Add bamba to llama_arch_is_hybrid_recurrent Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Add optional mamba ssm_in bias tensor Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Add template specialization for get_arr to load a vector<uint32_t> for layer index arr in hparams Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Use an explicit bool to determine mamaba vs mamba2 This allows other architectures like bamba and granitemoehybrid to use mamab2 without a growing architecture `if` statement inside the mamba implementation. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Isolate mamba(2) and granite attention layer building in static methods This will allow these layer-builder methods to be used from other build structs without complex inheritance. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Use per-layer sizes in granite build_attention_layer Also no need to pass in kv cache since it's already in the inp_attn Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: First (broken) pass at end-to-end Bamba implementation It generates (garbage) tokens! Still lots of debugging to do. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Only do Granite multipliers if set Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * refactor: Pull granite ffn portion into a static function and reuse in hybrid Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat(py): Allow gguf duplicate keys if they match by value and type This is helpful for hybrid models that want to do gguf param setting by calling multiple parent classes without needing to make those parent classes try/except on every attempt to set a gguf value. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * refactor(py): Simplify granitemoehybrid conversion to use parents better Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Add GRANITE_MOE_HYBRID through llama-arch Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * feat: Support GRANITE_MOE_HYBRID in llama-model This re-uses the Bamba code paths heavily and simply adds the missing parts for loading MoE and the shared expert. 
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * style: Fix flake8 errors Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Fix recurrent cache get after rebase Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Fix hybrid granite implementation for signature changes in build_mamba*_layer Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * refactor: Refactor relationship between non-hybrid classes and hybrid impl to use mixins The challenge here is to give both the non-hybrid classes (llm_build_mamba and llm_build_granite) AND the hybrid class (llm_build_hybrid_mamba) access to the same intermediate "base class" functionality (build_mamba*_layer, build_granite_attention_layer) without running into trouble with diamond inheritance of llm_graph_context. Due to the non-trivial initialization that happens in llm_graph_context, diamond inheritance results in multiple initializations of the common base which cause problems around the unique ptrs. I wanted to get away from `self->` everywhere, but this is still a bit cleaner than making those methods static I think. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * refactor: Implement the full copy-paste version to duplicate the layer builders This follows the pattern where the type of input is pinned to the type of memory and that is used to dispatch to the correct version of `build_rs` / `build_attn`. There's a lot of code duplication that can hopefully be pulled into common functions in the graph later. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * refactor: Rename llm_build_hybrid_mamba -> llm_build_granite_hybrid I've got back-and-forth a lot about how/if to try to implement reuse of the "child model" layer types for hybrid models. At the end of the day, I think hybrid models are their own beast and even if their layers are inspired by other models, they should maintain control of their own layer building (in other words, the copy-paste method). Given that, the name should reflect that this is not a generic hybrid model builder, but rather a granite- specific hybrid model builder that can do MoE (granite 4) or dense (bamba). As part if this, I also cleaned up dangling comments from previous attempts at using static methods for reusability. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * mamba : fix mismatched new and delete size for llm_build_mamba Subclasses of llm_graph_context cannot have extra fields, because the called destructor is not the one from the subclass. 
This otherwise would cause problems when runnning Mamba-(1|2) inference when compiled -DGGML_SANITIZE_ADDRESS=ON * memory : correctly handle failure in apply() ggml-ci * style: Remove TODO for adding first hybrid models to the switch Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Fix bad merge in tensor_mapping.py w/ SSM_NORM Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Fix bad merge resolution with variable renames/moves in llm_build_mamba Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * docs: Fix comment about duplicate key check Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * fix: Conform to standard way of initializing inp_out_ids Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * convert : fix jamba conv1d shape squeezing * fix: Fix input initialization in granite_hybrid after removal of hybrid inputs Branch: GraniteFourWithJamba Signed-off-by: Gabe Goodhart <[email protected]> * fix: Use llm_graph_context_mamba in llm_build_granite_hybrid Branch: GraniteFourWithJamba Signed-off-by: Gabe Goodhart <[email protected]> * refactor: Refactor mamba2/granite/jamba/granite_hybrid relationships as mixins The key is for the mixin classes (llm_graph_context_mamba, llm_graph_context_granite) to use virtual inheritance from llm_graph_context. This allows the common members to exist only once in the class hierarchy. The downside is that llm_graph_context will be re-initialized once for each parent (ie 2x for single mixin, 3x for two mixins, etc...). Branch: GraniteFourWithJamba Signed-off-by: Gabe Goodhart <[email protected]> * graph : add back hybrid memory graph input But this time it contains the sub-cache graph inputs. This *should* make it easier to handle updating the inputs when caching the graph (eventually). * model : add Jamba to Mamba-specific hparams printing * fix: Fix input setup after upstream merge Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]> * jamba : remove redundant nullptr initializations * model : remove unnecessary prefix for tensor loading constants Co-authored-by: Sigbjørn Skjæret <[email protected]> * model : use ggml_swiglu_split for Mamba Co-authored-by: Sigbjørn Skjæret <[email protected]> * feat: Add support for dense FFN in GraniteMoeHybrid This was already partially supported via reusing the granite ffn builder, and there may be models that leverage this architecture going forward. The naming is a bit odd, but in the transformers version, it reuses the same model class and simply has zero regular experts and a single shared expert (which is the same as a single dense FFN). 
* graph : add back hybrid memory graph input

But this time it contains the sub-cache graph inputs. This *should* make it easier to handle updating the inputs when caching the graph (eventually).

* model : add Jamba to Mamba-specific hparams printing

* fix: Fix input setup after upstream merge

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* jamba : remove redundant nullptr initializations

* model : remove unnecessary prefix for tensor loading constants

Co-authored-by: Sigbjørn Skjæret <[email protected]>

* model : use ggml_swiglu_split for Mamba

Co-authored-by: Sigbjørn Skjæret <[email protected]>

* feat: Add support for dense FFN in GraniteMoeHybrid

This was already partially supported via reusing the granite ffn builder, and there may be models that leverage this architecture going forward. The naming is a bit odd, but in the transformers version it reuses the same model class and simply has zero regular experts and a single shared expert (which is the same as a single dense FFN).

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* feat: Add support for dense FFN tensor names on c++ side

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Use child inputs for Falcon H1 after merge resolution

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Remove unnecessary prefix on tensor constants

Signed-off-by: Gabe Goodhart <[email protected]>
Co-authored-by: Sigbjørn Skjæret <[email protected]>

* model : make falcon-h1 use shared mamba2 layer builder

* memory : avoid referring to KV in recurrent cache logs

* fix: Revert order changes for Falcon H1 to stay consistent with upstream

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* gguf-py : avoid adding duplicate tensor mappings for Jamba

Some of the tensor names are common with Llama4

* refactor: Collapse Bamba and GraniteMoeHybrid into GraniteHybrid

The only key difference is the use of rope, which is now set via rope_finetuned in the hparams

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* refactor: Remove use of diamond inheritance

Per PR discussion, it's simpler to keep this with basic inheritance and not introduce the complexity of virtual inheritance and multiple inheritance
https://github.com/ggml-org/llama.cpp/pull/13550#issuecomment-3053787556

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* feat: Log mamba params for Granite Hybrid

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Remove unused ssm_in_b

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* refactor: Remove ATTENTION_LAYER_INDICES hparam in favor of n_head_kv

This matches how recurrent vs attention heads are identified for Jamba

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Remove unused template expansion for get_arr

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Review cleanup in convert_hf_to_gguf

The gist is to be explicit about which base class is being used with the multiple inheritance setup

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Undo hidden warnings about duplicate identical keys in add_key_value

After further discussion, this encourages sloppy overwriting in the model converters

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: If not using ROPE, context is "infinite"

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* doc: Add a comment outlining expected duplicate key warnings

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>

* fix: Remove unnecessary duplicate keys in converter

Co-authored-by: Francis Couture-Harpin <[email protected]>
(thanks for the sharp eyes and patience!)

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <[email protected]>
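On the ATTENTION_LAYER_INDICES removal above: the idea is that, for these hybrid models, a layer's role can be derived from its per-layer KV-head count instead of being kept in a separate index list. A toy sketch follows, using hypothetical struct and field names rather than the real llama.cpp hparams.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical, simplified hparams -- not the real llama.cpp struct.
struct toy_hparams {
    static constexpr int n_layer = 4;
    std::array<uint32_t, n_layer> n_head_kv_arr; // per-layer KV head count

    // A hybrid layer with zero KV heads is treated as recurrent (SSM),
    // so no separate list of attention layer indices is needed.
    bool is_recurrent(int il) const { return n_head_kv_arr[il] == 0; }
};

int main() {
    const toy_hparams hp{{0, 8, 0, 8}}; // layers 0 and 2 are recurrent
    for (int il = 0; il < toy_hparams::n_layer; il++) {
        std::printf("layer %d: %s\n", il, hp.is_recurrent(il) ? "recurrent" : "attention");
    }
    return 0;
}
```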
---------

Signed-off-by: Gabe Goodhart <[email protected]>
Co-authored-by: Francis Couture-Harpin <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: Sigbjørn Skjæret <[email protected]>

* vocab : add midm-2.0 model pre-tokenizer (#14626)

* llama : move enum llama_vocab_pre_type to implementation (#14631)

ggml-ci

* readme : add hot PRs (#14636)

* readme : add hot PRs
* cont
* readme : update title
* readme : hot PRs links
* cont

* HIP : Add HIP 7.0+ compatibility for hipBLAS compute types (#14634)

* model : support LiquidAI LFM2 hybrid family (#14620)

**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released. To convert into gguf, install transformers from source:

```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
```

* vulkan: optimizations for deepseek prompt processing (#14555)

* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader
* vulkan: increase coopmat2 mul_mat_id tile size
* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path
* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)

* vulkan: support SET_ROWS (#14587)

* vulkan: support SET_ROWS

Add variants of the copy_to_quant shader that do the SET_ROWS operation. Change these shaders to spread the work across the workgroup. The memory access pattern is probably not great (one thread per quant block), but should be fine for now.

* vulkan: optimize set_rows

Larger workgroups for non-quant types. Set "norepeat" (there is manual repeat logic). Use fastmod.

* server : fix pooled embedding output (#14645)

* vulkan : implement ggml_roll (ggml/1290)

ggml-ci

* vulkan : implement bilinear interpolation (ggml/1291)

ggml-ci

* sync : ggml

ggml-ci

* vulkan : remove unused vars (#0)

ggml-ci

* sync : ggml

* CUDA: add set rows for f32 and f16 (#14551)

* CUDA: add set rows for f32 and f16
* Review: change kernel params, use strides from host
* Use 1-d kernel
* Review: use int64_t for blockDim.x, rename nb->s for clarity

* docs : add LFM2 to models section (#14650)

* readme : add LFM2 to models section
* fix copy paste...

* tests : cover lfm2 cases in test_ssm_conv (#14651)

* cmake : Add CMake presets for Linux and GCC (#14656)

* metal : Add missing unary ops Metal support (#14660)

* ggml : add build-time message to remind about ggml_set_rows (#14661)

ggml-ci

* cuda : add ELU support (#14657)

* cuda : add set rows for bf16 (#14664)

* quantize : fix minor logic flaw in --tensor-type (#14572)

* llama : add jinja template for rwkv-world (#14665)

* llama : add jinja template for rwkv-world

Signed-off-by: Molly Sophia <[email protected]>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <[email protected]>

---------

Signed-off-by: Molly Sophia <[email protected]>
Co-authored-by: Sigbjørn Skjæret <[email protected]>

* sycl: Batched mulmat rework for oneDNN dispatch (#14617)

* SY…
Fix a minor coding style inconsistency left over from #14581.
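For context, a coding-style-only change leaves behavior untouched and simply brings newly merged code in line with the conventions of the surrounding file. Purely as a hypothetical illustration (the code below is invented and is not the actual smollm3 diff; see the PR's Files changed tab for that):

```cpp
// Hypothetical illustration only -- not the real change from this PR.
// A style fix typically adjusts spacing/braces to match neighboring code
// without altering what the code does.

#include <cstdio>

static void build_layers(int n_layer) {
    for (int il = 0; il < n_layer; il++) {
        // before (hypothetical, inconsistent style):  if(il == n_layer-1){ ... }
        // after, matching the surrounding convention:
        if (il == n_layer - 1) {
            std::printf("last layer: %d\n", il);
        }
    }
}

int main() {
    build_layers(4);
    return 0;
}
```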