ValueError: Model path does not exist #226
That looks like a Windows path, but you seem to be on Linux?
I use WSL.
In WSL you need to use the path where the Windows file system is mounted, for example "/mnt/c/Users/..." to access a model stored on the Windows file system.
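To make the advice above concrete, here is a minimal sketch of translating a Windows-style path into its WSL mount equivalent. The helper name `windows_to_wsl` is hypothetical (not part of llama-cpp-python), and it assumes the default WSL mount root `/mnt`, where drive `F:` appears as `/mnt/f`.

```python
# Hypothetical helper: map a Windows path to its default WSL mount point.
# Assumes drives are mounted under /mnt/<lowercase drive letter>.
import re


def windows_to_wsl(path: str) -> str:
    """Convert e.g. 'F:\\models\\x.bin' to '/mnt/f/models/x.bin'."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        return path  # already a POSIX-style path; leave unchanged
    drive, rest = m.groups()
    return "/mnt/" + drive.lower() + "/" + rest.replace("\\", "/")


print(windows_to_wsl(r"F:\AI_and_data\LLAMA_models\llama.cpp\models\ggml-vicuna-13b-4bit.bin"))
# → /mnt/f/AI_and_data/LLAMA_models/llama.cpp/models/ggml-vicuna-13b-4bit.bin
```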
I use a relative path.
And?
I have the same issue |
Can you review #225, please? |
Closing. Reopen if the issue is not resolved. |
I have this issue too; can you help me solve it? Please, I need help. I get this error when I try to run the vicuna-13b model:
root@DESKTOP-EEQG16M:/mnt/f/AI_and_data/LLAMA_models# python3 test_LLama_cpp.py
Traceback (most recent call last):
File "test_LLama_cpp.py", line 2, in <module>
llm = Llama(
File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama.py", line 155, in __init__
raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: F:\AI_and_data\LLAMA_models\llama.cpp\models\ggml-vicuna-13b-4bit.bin
Exception ignored in: <function Llama.__del__ at 0x7f1db5ea4b80>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama.py", line 1076, in __del__
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'