Name and Version

version: 4957 (053b3f9)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

llama-cli

Command line

$ ./build-debug/bin/llama-cli -m /data/models/Falcon3-Mamba-7B-Instruct/Falcon3-Mamba-7B-Instruct-F16.gguf

Problem description & steps to reproduce
My diff that prints the tensor shapes:
@@ -4481,6 +4481,8 @@ struct ggml_tensor * ggml_ssm_conv(
         struct ggml_tensor * sx,
         struct ggml_tensor * c) {
     GGML_ASSERT(ggml_is_3d(sx));
+    printf("%ld %ld %ld %ld | c = %p\n", c->ne[0], c->ne[1], c->ne[2], c->ne[3], (void*)c);
+    printf("%ld %ld %ld %ld | sx = %p\n", sx->ne[0], sx->ne[1], sx->ne[2], sx->ne[3], (void*)sx);
     GGML_ASSERT(ggml_is_matrix(c));
What are the expected shapes here for sx and c?
First Bad Commit

Relevant log output
build: 4957 (053b3f9a) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_loader: loaded meta data with 37 key-value pairs and 643 tensors from /data/models/Falcon3-Mamba-7B-Instruct/Falcon3-Mamba-7B-Instruct-F16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = mamba
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Falcon3 Mamba 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Falcon3-Mamba
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = falcon-llm-license
llama_model_loader: - kv 8: general.license.link str = https://falconllm.tii.ae/falcon-terms...
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Falcon3 Mamba 7B Base
llama_model_loader: - kv 11: general.base_model.0.organization str = Tiiuae
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/tiiuae/Falcon3...
llama_model_loader: - kv 13: general.tags arr[str,3] = ["falcon3", "falcon3_mamba", "falcon_...
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: mamba.context_length u32 = 1048576
llama_model_loader: - kv 16: mamba.embedding_length u32 = 4096
llama_model_loader: - kv 17: mamba.feed_forward_length u32 = 0
llama_model_loader: - kv 18: mamba.attention.head_count u32 = 0
llama_model_loader: - kv 19: mamba.block_count u32 = 64
llama_model_loader: - kv 20: mamba.ssm.conv_kernel u32 = 4
llama_model_loader: - kv 21: mamba.ssm.inner_size u32 = 8192
llama_model_loader: - kv 22: mamba.ssm.state_size u32 = 16
llama_model_loader: - kv 23: mamba.ssm.time_step_rank u32 = 256
llama_model_loader: - kv 24: mamba.attention.layer_norm_rms_epsilon f32 = 0,000010
llama_model_loader: - kv 25: mamba.ssm.dt_b_c_rms bool = true
llama_model_loader: - kv 26: general.file_type u32 = 1
llama_model_loader: - kv 27: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 28: tokenizer.ggml.pre str = falcon
llama_model_loader: - kv 29: tokenizer.ggml.tokens arr[str,65024] = [">>TITLE<<", ">>ABSTRACT<<", ">>INTR...
llama_model_loader: - kv 30: tokenizer.ggml.token_type arr[i32,65024] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 31: tokenizer.ggml.merges arr[str,64784] = ["Ġ t", "Ġ a", "i n", "h e", "r e",...
llama_model_loader: - kv 32: tokenizer.ggml.bos_token_id u32 = 8
llama_model_loader: - kv 33: tokenizer.ggml.eos_token_id u32 = 11
llama_model_loader: - kv 34: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 35: tokenizer.chat_template str = {{bos_token}}{% for message in messag...
llama_model_loader: - kv 36: general.quantization_version u32 = 2
llama_model_loader: - type f32: 385 tensors
llama_model_loader: - type f16: 258 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = F16
print_info: file size = 13,57 GiB (16,03 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 12
load: token to piece cache size = 0,3884 MB
print_info: arch = mamba
print_info: vocab_only = 0
print_info: n_ctx_train = 1048576
print_info: n_embd = 4096
print_info: n_layer = 64
print_info: n_head = 0
print_info: n_head_kv = 0
print_info: n_rot = 0
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 0
print_info: n_embd_head_v = 0
print_info: n_gqa = 0
print_info: n_embd_k_gqa = 0
print_info: n_embd_v_gqa = 0
print_info: f_norm_eps = 0,0e+00
print_info: f_norm_rms_eps = 1,0e-05
print_info: f_clamp_kqv = 0,0e+00
print_info: f_max_alibi_bias = 0,0e+00
print_info: f_logit_scale = 0,0e+00
print_info: f_attn_scale = 0,0e+00
print_info: n_ff = 0
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = -1
print_info: rope scaling = linear
print_info: freq_base_train = 10000,0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 1048576
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 4
print_info: ssm_d_inner = 8192
print_info: ssm_d_state = 16
print_info: ssm_dt_rank = 256
print_info: ssm_dt_b_c_rms = 1
print_info: model type = ?B
print_info: model params = 7,27 B
print_info: general.name = Falcon3 Mamba 7B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 65024
print_info: n_merges = 64784
print_info: BOS token = 8 '<|begin_of_text|>'
print_info: EOS token = 11 '<|end_of_text|>'
print_info: EOT token = 10 '<|im_end|>'
print_info: PAD token = 0 '>>TITLE<<'
print_info: LF token = 193 'Ċ'
print_info: EOG token = 10 '<|im_end|>'
print_info: EOG token = 11 '<|end_of_text|>'
print_info: max token length = 130
load_tensors: loading model tensors, this can take a while... (mmap = true)
4 1 8192 1 | c = 0x64142dfac6a0
12345 1 6789 1 | sx = 0x64142ef300a0
/soft/llama.cpp/ggml/src/ggml.c:4486: GGML_ASSERT(ggml_is_matrix(c)) failed
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Opération non permise.
No stack.
The program is not being run.
Abandon (core dumped)
This is possibly related to #5328. @compilade, could you please take a look at this?
@dodekapod This was caused by #10784 (by @ggerganov), which removed the .squeeze() on the tensors.
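For reference, here is a quick sketch (plain PyTorch, not llama.cpp code) of why the assert fires. ggml stores dimensions innermost-first, so the log's ne = {4, 1, 8192, 1} for c corresponds to a torch shape of (8192, 1, 4), and ggml_is_matrix requires ne[2] == 1 and ne[3] == 1:

```python
import torch

# ne values for c taken from the log above, in ggml's innermost-first order
ne = (4, 1, 8192, 1)

# ggml_is_matrix(c) checks ne[2] == 1 && ne[3] == 1, i.e. the tensor must be 2-D
print(ne[2] == 1 and ne[3] == 1)  # False -> GGML_ASSERT(ggml_is_matrix(c)) fails

# The same tensor in torch's outermost-first convention is (d_inner, 1, d_conv)
c = torch.zeros(8192, 1, 4)
print(c.squeeze().shape)  # torch.Size([8192, 4]) -- the 2-D shape the assert expects
```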
The fix will simply be to use .squeeze() on the SSM_CONV tensor in modify_tensors for Mamba models in the conversion script.
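Something along these lines (an illustrative sketch only, not the exact patch; the helper name squeeze_ssm_conv and the ".conv1d.weight" suffix match are assumptions made here for demonstration):

```python
import torch

# Illustrative sketch: the conversion script needs to squeeze the Mamba
# conv1d weight back to 2-D before the GGUF tensor is written out.
def squeeze_ssm_conv(data_torch: torch.Tensor, name: str) -> torch.Tensor:
    """Return conv1d weights as 2-D so ggml_ssm_conv's matrix assert holds."""
    if name.endswith(".conv1d.weight") and data_torch.dim() == 3:
        # (d_inner, 1, d_conv) -> (d_inner, d_conv)
        data_torch = data_torch.squeeze(1)
    return data_torch

# quick check with the shape from the log
print(squeeze_ssm_conv(torch.zeros(8192, 1, 4), "layers.0.mixer.conv1d.weight").shape)
# torch.Size([8192, 4])
```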
Opened a PR: #12573
But I don't have a model handy to test.