Bug: "ValueError: Invalid GGUF metadata value type or value" due to missing tags in the model card #8769
Comments
Reformatting the above log because it's all on one line. I also attempted to replicate by replacing my test repo README with the following, but was unable to reproduce the error on my end:

```yaml
library_name: transformers
license: apache-2.0
language:
- en
- vi
```

That being said, we really should improve the error message. It's a bit cryptic.
Can you change it to `tags: []` as in the image above?

Okay, thanks. Traced, and a fix is incoming.

@vTuanpham all fixed?

@mofosyne Confirmed fixed, thank you!
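For reference, explicitly setting an empty `tags` list in the model card's YAML front matter is the workaround discussed here. A minimal sketch of such a front matter (the other field values are illustrative, not taken from the affected repo):

```yaml
---
library_name: transformers
license: apache-2.0
tags: []
---
```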
What happened?
Cannot quantize a model because metadata is missing from the model card: `transformers`' `model.push_to_hub` does not provide the `tags` field. The problem is resolved when the user manually provides the tags.
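A hedged sketch of why a missing field trips the converter: the GGUF writer maps each metadata value to a GGUF type, and a value it cannot map (such as `None` from an absent `tags` field) raises the `ValueError` in the issue title. The function below is illustrative only and is not the actual gguf-py implementation:

```python
def gguf_value_type(value):
    """Map a Python value to a GGUF metadata type name.

    Illustrative only: mimics the kind of type dispatch a GGUF
    metadata writer performs, not llama.cpp's actual code.
    """
    # bool must be checked before int (bool is a subclass of int).
    if isinstance(value, bool):
        return "BOOL"
    if isinstance(value, int):
        return "INT32"
    if isinstance(value, float):
        return "FLOAT32"
    if isinstance(value, str):
        return "STRING"
    if isinstance(value, list):
        return "ARRAY"
    # A model card field that is simply absent (None) matches no
    # GGUF type, producing the cryptic error reported here.
    raise ValueError("Invalid GGUF metadata value type or value")

# An explicit empty list ('tags: []' in the card) maps cleanly,
# while a missing field (None) raises the ValueError above.
print(gguf_value_type([]))  # ARRAY
```

This also illustrates the suggested fix: supplying `tags: []` gives the writer a valid (empty) array instead of nothing at all.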
Name and Version
Running on ggml-org/gguf-my-repo on huggingface spaces and can be reproduce on the latest llama.cpp version via colab.
What operating system are you seeing the problem on?
Linux, Other? (Please let us know in description)
Relevant log output