[BUG]: Linux-arm64 crashes with a TypeInitializationException #1192
Comments
Arm64 should be supported, although it's quite new and not tested in CI yet, so I expect some bugs there! Does it work if you run it from VS (or with
Should I put the .so files in the same folder they end up in for the debug build? /runtimes/linux-arm64/native/
That looks like the right path. If you add this to the start of your program (before any other LLamaSharp calls), it will log details about how the native library is loaded:

```csharp
NativeLibraryConfig
    .All
    .WithLogCallback((level, message) => Console.WriteLine($"LL# [{level}]: {message}"));
```
Aha, I have experience with this. The runtimes path only works for me on the Windows platform. My Jetson AGX arm64 device requires me to compile the CUDA version myself and copy the .so files into the same directory as the program.
Could you try running
I seem to have found the problem: glibc needs to be upgraded. My device runs normally because I compiled the library myself, but there are indeed issues with the NuGet binaries. For reference:

My own compiled CUDA build:

```
agx@ubuntu:/work/gpt$ ldd libllama.so
linux-vdso.so.1 (0x0000ffffb7886000)
libggml.so (0x0000ffffb76b0000)
libggml-base.so (0x0000ffffb75d0000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffffb7390000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffffb72f0000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffffb72c0000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffffb7110000)
/lib/ld-linux-aarch64.so.1 (0x0000ffffb784d000)
libggml-cpu.so (0x0000ffffb7040000)
libggml-cuda.so (0x0000ffffb3fa0000)
libgomp.so.1 => /lib/aarch64-linux-gnu/libgomp.so.1 (0x0000ffffb3f40000)
libcudart.so.12 => /usr/local/cuda/lib64/libcudart.so.12 (0x0000ffffb3e70000)
libcublas.so.12 => /usr/local/cuda/lib64/libcublas.so.12 (0x0000ffffac9a0000)
libcuda.so.1 => /usr/local/cuda/compat/libcuda.so.1 (0x0000ffffaad00000)
libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffffaace0000)
libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffffaacc0000)
librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffffaaca0000)
libcublasLt.so.12 => /usr/local/cuda/lib64/libcublasLt.so.12 (0x0000ffff8c530000)
libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so (0x0000ffff8c4c0000)
libnvrm_mem.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_mem.so (0x0000ffff8c4a0000)
libnvos.so => /usr/lib/aarch64-linux-gnu/tegra/libnvos.so (0x0000ffff8c470000)
libnvsocsys.so => /usr/lib/aarch64-linux-gnu/tegra/libnvsocsys.so (0x0000ffff8c450000)
libnvrm_sync.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_sync.so (0x0000ffff8c430000)
libnvsciipc.so => /usr/lib/aarch64-linux-gnu/tegra/libnvsciipc.so (0x0000ffff8c400000)
libnvrm_chip.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_chip.so (0x0000ffff8c3e0000)
libnvrm_host1x.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_host1x.so (0x0000ffff8c3b0000)
```

The NuGet build:

```
agx@ubuntu:/work/gpt/so$ ldd libllama.so
./libllama.so: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.38' not found (required by ./libllama.so)
./libllama.so: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.38' not found (required by libggml-base.so)
linux-vdso.so.1 (0x0000ffff8d1f6000)
libggml.so (0x0000ffff8d000000)
libggml-base.so (0x0000ffff8cf20000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffff8cce0000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff8cc40000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffff8cc10000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff8ca60000)
/lib/ld-linux-aarch64.so.1 (0x0000ffff8d1bd000)
libggml-cpu.so (0x0000ffff8c980000)
libgomp.so.1 => /lib/aarch64-linux-gnu/libgomp.so.1 (0x0000ffff8c920000)
```
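A quick way to confirm this kind of mismatch on any device is to compare the glibc version the system provides with the highest GLIBC symbol version the prebuilt library requires. A minimal sketch (the default library path is just a runnable stand-in; point `LIB` at the libllama.so you are diagnosing, and note `objdump` comes from binutils):

```shell
#!/bin/sh
# Print the glibc version this machine provides, e.g. "glibc 2.35"
getconf GNU_LIBC_VERSION

# Print the highest GLIBC_x.y symbol version a shared object requires.
# LIB defaults to the system libc itself purely so the script runs anywhere;
# replace it with the path to the NuGet libllama.so being diagnosed.
LIB="${1:-$(ldd /bin/sh | grep -o '/[^ ]*libc\.so\.6' | head -n1)}"
objdump -T "$LIB" 2>/dev/null | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n1
```

If the second number is higher than the first, the binary cannot load on that system without either upgrading glibc or rebuilding the library against an older one.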
This situation is indeed quite common in embedded systems, as embedded devices typically run relatively old but stable operating system versions. So it is recommended to compile it yourself; upgrading glibc may pose greater risks.
Yes:
If you don't want to compile it, you can also try the CUDA version I compiled first. It should fall back to the CPU by itself, so maybe it can be used.
If I switch to Ubuntu 24.04 LTS it will have GLIBC_2.39, which I think will be OK; we'll find out.
OK, now on Ubuntu 24.04 it says `libgomp.so.1 => not found`. The exception is still the same:

```
System.TypeInitializationException: The type initializer for 'LLama.Native.NativeApi' threw an exception. ---> LLama.Exceptions.RuntimeError: The native library cannot be correctly loaded
```
So now I'm installing libgomp1; let's see what that does...
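On Debian/Ubuntu the missing OpenMP runtime comes from the libgomp1 package. A sketch of installing it and confirming the dynamic loader can now resolve it (the package name assumes a Debian-based distribution):

```shell
#!/bin/sh
# Install the GNU OpenMP runtime that the ggml CPU backend links against
sudo apt-get update && sudo apt-get install -y libgomp1

# Ask the dynamic loader cache whether libgomp.so.1 is now resolvable
ldconfig -p | grep libgomp || echo "libgomp still not found"
```

If `ldconfig -p` still shows nothing, running `sudo ldconfig` to rebuild the cache is a reasonable next step.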
Now it's a hard crash of the API
These also gave a dump |
When you compiled llama.cpp, did you use exactly the commit hash listed here?
I did not compile it myself; I did a release build via the .NET CLI. This is the core dump:
Sorry, I misread and thought you'd switched to compiling it yourself. If the prebuilt binaries are loading but not working, I think a reasonable next step would be to compile them yourself and see if that produces the same error.
Description
When attempting to run a LLamaSharp API on linux-arm64, compiled as a self-contained runtime, I get a runtime error during type initialization which says the CPU library is missing.
But I'm quite sure it is installed. And of course the API works locally on Windows 11 without using a GPU.
Exception:
```
System.TypeInitializationException: The type initializer for 'LLama.Native.NativeApi' threw an exception. ---> LLama.Exceptions.RuntimeError: The native library cannot be correctly loaded. It could be one of the following reasons: 1. No LLamaSharp backend was installed. Please search LLamaSharp.Backend and install one of them. 2. You are using a device with only CPU but installed cuda backend. Please install cpu backend instead. 3. One of the dependency of the native library is missed
```
I checked the release build files for the given API to be sure the CPU ggml library is present, which it seems to be:
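Presence alone doesn't guarantee loadability, so one way to check is to run ldd over every native library in the published output. A sketch, assuming the standard .NET runtimes layout for a linux-arm64 publish (adjust the directory to your actual publish path):

```shell
#!/bin/sh
# Check each bundled native library for unresolved dependencies.
# The directory is the conventional .NET publish layout; adjust if needed.
for so in runtimes/linux-arm64/native/*.so; do
    echo "== $so"
    ldd "$so" 2>/dev/null | grep 'not found' || echo "   all dependencies resolved"
done
```

Any `not found` line here (such as the `libgomp.so.1` case mentioned in this thread) will produce exactly this TypeInitializationException even though the .so files themselves are present on disk.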
Is ARM-64 supported? Do I need to install any libraries on the hosting environment?
Reproduction Steps
Run a CPU-only instance on linux-arm64; it will fail with a LLama runtime error: `System.TypeInitializationException...` (see above).
Environment & Configuration
Known Workarounds
N/a