Support For ggml format for gpt4all #696

Closed
chigkim opened this issue Apr 2, 2023 · 3 comments

chigkim commented Apr 2, 2023

When I convert a Llama model with convert-pth-to-ggml.py, quantize it to 4-bit, and load it with gpt4all, I get this:
llama_model_load: invalid model file 'ggml-model-q4_0.bin' (bad magic)
Could you implement support for the ggml format that gpt4all uses?
Thanks!
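For context, the "bad magic" error just means the first four bytes of the model file don't match the constant the loader expects. A minimal sketch of such a check (the magic values 0x67676d6c for the legacy unversioned ggml format and 0x67676d66 for the versioned "ggmf" format are my understanding of llama.cpp around this period; the `inspect_magic` helper and constant names are purely illustrative, not part of either project):

```python
import struct

# Assumed ggml magic values (from llama.cpp of this era, for illustration):
#   0x67676d6c -> ASCII "ggml", the legacy unversioned format
#   0x67676d66 -> ASCII "ggmf", the later versioned format
GGML_MAGIC_LEGACY = 0x67676D6C
GGML_MAGIC_VERSIONED = 0x67676D66

def inspect_magic(path):
    """Read the first 4 bytes of a model file and describe its magic."""
    with open(path, "rb") as f:
        # ggml files are little-endian; the magic is a uint32 at offset 0.
        (magic,) = struct.unpack("<I", f.read(4))
    if magic == GGML_MAGIC_LEGACY:
        return "legacy unversioned ggml"
    if magic == GGML_MAGIC_VERSIONED:
        return "versioned ggml (ggmf)"
    return "bad magic (0x%08x)" % magic
```

A loader built for one of these magic values will reject a file written with the other, which is why a model converted by one tool can fail to load in another even though both call their format "ggml".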

@linouxis9

Try running the migrate script as detailed in this PR: #690


chigkim commented Apr 2, 2023 via email

prusnak (Collaborator) commented Apr 2, 2023

You should open an issue in the gpt4all repository, since you are essentially asking them to support a certain format.
