This repository was archived by the owner on Jun 24, 2024. It is now read-only.

bytesFromNibbles error #1

Closed · faassen opened this issue Mar 14, 2023 · 7 comments

Comments

faassen commented Mar 14, 2023

When I try to do cargo run (but not when I do cargo build --release), I get the following error:

    /usr/bin/ld: /home/faassen/install/llama-rs/ggml-raw/ggml/ggml.c:1418: undefined reference to `bytesFromNibbles'

This is pretty mysterious, as I can see that bytesFromNibbles is actually defined in ggml.c. It does use AVX intrinsics, though, so perhaps that's the problem. Weirdly enough, I've successfully run the C++ version on this same laptop (Ryzen 6850U), so the library itself does seem to compile here.

setzer22 (Collaborator) commented Mar 14, 2023

Hi @faassen! Thanks, I'm aware of the issue. You have to specify the --release profile as well for cargo run. That is, cargo run --release. It just doesn't build in debug mode for some reason.

faassen (Author) commented Mar 14, 2023

I was using cargo run --release. If I just type that by itself, it'll crash with the reported error.

Interestingly, if I cd into llama-rs, the sub-crate, I don't get a crash, but that's not what the instructions say to do.

But if I then cd back to the project directory and run cargo run --release, the error is gone!

philpax (Collaborator) commented Mar 14, 2023

Took me a little bit of time to figure it out. These functions

https://github.com/setzer22/llama-rs/blob/main/ggml-raw/ggml/ggml.c#L367-L397

are declared inline rather than static inline, which means the compiler is free to assume that an external definition exists in some other translation unit and to not emit one itself. In a debug build nothing gets inlined either, so the compiler emits real calls to those functions, and the linker then fails to resolve them because definitions were never emitted to begin with. This only occurs in Rust debug builds because the cc crate passes the debug profile down to the C compiler, which the original Makefile doesn't do.
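
To see the same failure in isolation, here's a minimal sketch of the C99 inline semantics at play (hypothetical file and function names, not taken from ggml.c). With plain inline the compiler may assume an external definition exists elsewhere and emit nothing itself; with static inline the translation unit always carries its own copy:

    /* inline_demo.c -- hypothetical example illustrating the pitfall.
     *
     *   cc -std=c99 -O2 inline_demo.c   -> typically links, because the call
     *                                      to add_one() gets inlined away.
     *   cc -std=c99 -O0 inline_demo.c   -> "undefined reference to `add_one'",
     *                                      the same class of error as above,
     *                                      because plain `inline` never emits
     *                                      an out-of-line definition.
     */
    #include <stdio.h>

    /* Problematic form: inline definition only, no symbol for the linker. */
    inline int add_one(int x) { return x + 1; }

    /* Fixed form: internal linkage, so this file always has a definition. */
    static inline int add_two(int x) { return x + 2; }

    int main(void) {
        printf("%d %d\n", add_one(1), add_two(2));
        return 0;
    }

So marking those helpers static inline (or keeping inline but providing an extern definition in one translation unit) should be enough to make the debug build link.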

It can be fixed in this repo's copy of ggml, or an upstream fix can be made (to llama.cpp or to ggml?). I suggest opening a PR with upstream and then vendoring it as a submodule.

setzer22 (Collaborator) commented Mar 14, 2023

Wow, nice catch, @philpax! 😄 I wouldn't even know where to begin with this one.

> It can be fixed in this repo's copy of ggml, or an upstream fix can be made (to llama.cpp or to ggml?). I suggest opening a PR with upstream and then vendoring it as a submodule.

The C way of doing things makes this more complicated than it should be. 😬 ggml is a standalone library, but the author also copied the .c / .h files into the llama.cpp source tree, so I'm not sure which repo we should add a submodule for, or where to report issues to. My go-to place would be the ggml repo itself, not llama.cpp.

But anyway, I think adding it as a patch here on our end first would help mitigate the issue as fast as possible, so it's a good thing to do regardless (also to verify the fix actually works).

philpax (Collaborator) commented Mar 15, 2023

> The C way of doing things makes this more complicated than it should be. 😬 ggml is a standalone library, but the author also copied the .c / .h files into the llama.cpp source tree, so I'm not sure which repo we should add a submodule for, or where to report issues to. My go-to place would be the ggml repo itself, not llama.cpp.

Yeah, I have no idea. There are patches landing for the llama.cpp version of ggml, but not for ggml itself, which is why I'm thinking that we should track llama.cpp.

> But anyway, I think adding it as a patch here on our end first would help mitigate the issue as fast as possible, so it's a good thing to do regardless (also to verify the fix actually works).

Seems reasonable. I'll open a PR here and one upstream at some point.

setzer22 mentioned this issue Mar 15, 2023
philpax (Collaborator) commented Mar 16, 2023

Heads up, upstream just fixed this in the llama.cpp version: ggml-org/llama.cpp@113e685

I'm now leaning more strongly towards vendoring llama.cpp as a submodule for as long as we have a dependency on ggml (especially as a way of communicating which version of llama.cpp we match).

setzer22 (Collaborator) commented Mar 20, 2023

Fixed in #32

philpax added a commit that referenced this issue Mar 26, 2023
Add zstd compression support for session loading/saving.