bytesFromNibbles error #1
Hi @faassen! Thanks, I'm aware of the issue. You have to specify the …
I was using …

Hey, if I cd into … But if I then cd back to the project directory and run …
Took me a little bit of time to figure it out. These functions (https://github.com/setzer22/llama-rs/blob/main/ggml-raw/ggml/ggml.c#L367-L397) use … It can be fixed in this repo's copy of …
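For anyone hitting the same linker error, here is a minimal sketch of the C99 `inline` linkage pitfall that would produce exactly this debug-only failure. The function names below are hypothetical, not the real ggml code, and this is an illustration of the linkage rule rather than a claim about the exact upstream fix:

```c
#include <stdint.h>

/* In C99, a function defined as plain
 *     inline uint8_t low_nibble(uint8_t b) { ... }
 * gives this translation unit only an *inline definition*: no
 * out-of-line symbol is emitted. At -O0 (cargo's debug profile) the
 * compiler may skip inlining and call the external symbol instead,
 * and the link fails with "undefined reference". At -O2 (--release)
 * the call is inlined, so the symbol is never needed. Declaring the
 * functions `static inline`, as below, gives them internal linkage,
 * so a local definition is emitted whenever one is needed and both
 * debug and release builds link. */
static inline uint8_t low_nibble(uint8_t b)  { return b & 0x0F; }
static inline uint8_t high_nibble(uint8_t b) { return b >> 4;   }
```

This matches the reported symptom (fails on `cargo run`, works on `--release`); whether the actual patch took this exact form is a guess on my part.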
Wow, nice catch! @philpax 😄 I wouldn't even know where to begin with this one.
The C way of doing things makes this more complicated than it should be. 😬 But anyway, I think adding it as a patch here on our end first would help mitigate the issue as fast as possible, so it's a good thing to do regardless (also to verify the fix actually works).
Yeah, I have no idea. There are patches landing for the llama.cpp version of ggml, but not ggml itself, which is why I'm thinking that we should track llama.cpp.
Seems reasonable. I'll open a PR here and there at some point.
Heads up, upstream just fixed this in the …

I'm now leaning more strongly towards submodule-vendoring in …
Fixed in #32
When I try to do `cargo run` (but not when I do `cargo build --release`), I get the following error:

…

This is pretty mysterious, as I see `bytesFromNibbles` is actually being defined in `ggml.c`, though it does use AVX stuff, so perhaps that's a problem. Weirdly enough, I've successfully run the C++ version on this same laptop (Ryzen 6850U), so this library does seem to compile.
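For context on what the missing symbol does: `bytesFromNibbles` unpacks 4-bit quantized values (two per byte) into bytes. A scalar sketch follows — the function name, signature, and output layout here are my own illustration; the real version in `ggml.c` uses AVX2 intrinsics on fixed-size inputs:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar sketch of nibble unpacking (hypothetical; the real
 * bytesFromNibbles in ggml.c uses AVX2 intrinsics). Each input byte
 * holds two 4-bit quantized values: low nibbles are written to
 * out[0..n), high nibbles to out[n..2n). */
static void bytes_from_nibbles_scalar(const uint8_t *in, uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        out[i]     = in[i] & 0x0F; /* low 4 bits  */
        out[i + n] = in[i] >> 4;   /* high 4 bits */
    }
}
```

The AVX dependence is why the function sits behind architecture-specific code paths, which is consistent with the guess above that the AVX usage is involved.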