Misc. bug: Model not loaded on Android with NDK #13399
There's no result after applying the changes from PR 13395.
It seems related to the backend failing to load. I tried setting different flags and properties to force the CPU as the backend, but got the same result. 😢
You should use the llama.cpp and ggml cmake scripts rather than trying to build your own; there are several details you can get wrong that lead to issues like this.
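As a sketch of that advice (paths and target names here are assumptions, not the reporter's actual project layout): instead of listing llama.cpp sources by hand, an Android `CMakeLists.txt` can pull in the upstream build scripts with `add_subdirectory`, which takes care of backend registration and compile flags:

```cmake
cmake_minimum_required(VERSION 3.22)
project(llama_embed)

# Build llama.cpp (and its ggml backends) with the upstream CMake scripts
# instead of compiling the sources manually. The vendored path is an
# assumption; point it at wherever the llama.cpp checkout lives.
add_subdirectory(${CMAKE_SOURCE_DIR}/third_party/llama.cpp llama.cpp-build)

# Hypothetical JNI library name, matching the llama_embed.cpp from the issue.
add_library(llama_embed SHARED llama_embed.cpp)

# llama links ggml transitively; the Android log library is needed for logcat.
target_link_libraries(llama_embed PRIVATE llama android log)
```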
Name and Version
version: b5320
built with macOS Sonoma, Android Studio Meerkat 2024.3.1 Patch 2, and Android NDK 27.0.12077973
Operating systems
Other? (Please let us know in description), Mac
Which llama.cpp modules do you know to be affected?
libllama (core library)
Command line
Problem description & steps to reproduce
I'm trying to use llama.cpp on Android for local inference via the NDK with JNI. When I try to load a model (nomic_embed_text_v1_5_q4_0.gguf) with the "llama_model_load_from_file" function, the model does not load and the call returns null.
CMakeLists.txt
llama_embed.cpp
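The attachments above are collapsed in this capture, so here is a minimal sketch of the load path, not the reporter's actual code. It assumes the standard llama.cpp C API (the JNI class and method names are hypothetical): the ggml backends must be registered before calling llama_model_load_from_file, and a null return should be logged rather than silently dropped:

```cpp
// llama_embed.cpp (sketch) — hypothetical JNI entry point, not the
// reporter's code. Assumes llama.h from a recent llama.cpp build.
#include <jni.h>
#include <android/log.h>
#include "llama.h"

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_LlamaEmbed_loadModel(JNIEnv *env, jobject, jstring path) {
    // Register the available ggml backends (CPU at minimum). If no backend
    // is registered, llama_model_load_from_file can fail and return null.
    ggml_backend_load_all();

    const char *c_path = env->GetStringUTFChars(path, nullptr);

    llama_model_params params = llama_model_default_params();
    llama_model *model = llama_model_load_from_file(c_path, params);

    if (model == nullptr) {
        // Surface the failure in logcat instead of returning silently.
        __android_log_print(ANDROID_LOG_ERROR, "llama_embed",
                            "failed to load model from %s", c_path);
    }

    env->ReleaseStringUTFChars(path, c_path);
    return reinterpret_cast<jlong>(model);
}
```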
First Bad Commit
No response
Relevant log output
No log raised in Logcat
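Since nothing reaches Logcat by default, one way to make llama.cpp's own diagnostics visible (a sketch, assuming the standard llama_log_set API) is to route its log callback to the Android logger before any llama_* call:

```cpp
#include <android/log.h>
#include "llama.h"

// Forward llama.cpp log messages to logcat so load failures become visible.
static void logcat_cb(ggml_log_level level, const char *text, void * /*user*/) {
    int prio = (level == GGML_LOG_LEVEL_ERROR) ? ANDROID_LOG_ERROR
                                               : ANDROID_LOG_INFO;
    __android_log_print(prio, "llama.cpp", "%s", text);
}

// Call once, e.g. from JNI_OnLoad, before loading any model.
void install_llama_logging() {
    llama_log_set(logcat_cb, nullptr);
}
```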