
Conversation

AminAlam

@AminAlam AminAlam commented Sep 28, 2024

Adds error handling and an informative error message when multimodal model config initialisation fails because the config lacks an architecture.

FIX #8923


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs will not trigger a full CI run by default. Instead, only fastcheck CI will run, which executes a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337
Member

DarkLight1337 commented Sep 28, 2024

Actually, I found that we have quite a few places where we try to find architectures from the HF config. Would it make more sense to factor this out (to some file under vllm.transformers_utils) to provide more useful errors in those other places as well?
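The factored-out helper suggested above might look something like the following sketch. The name `get_architectures` and its placement are assumptions for illustration, not vLLM's actual API:

```python
# Hypothetical sketch of a shared helper under vllm.transformers_utils;
# the function name and error wording are assumptions, not vLLM source.
def get_architectures(hf_config) -> list:
    """Return the `architectures` list from an HF config, raising a clear error."""
    architectures = getattr(hf_config, "architectures", None)
    if not architectures:
        raise ValueError(
            "No 'architectures' field found in the model config. vLLM expects "
            "HF-format model repos whose config.json declares an architecture, "
            'e.g. "architectures": ["LlamaForCausalLM"].')
    return architectures
```

Centralising the lookup this way would let every call site raise the same informative error instead of failing with an unrelated traceback.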

@ywang96
Member

ywang96 commented Sep 29, 2024

Hey @AminAlam Thanks for the PR!

In vLLM, we assume that model repos in the huggingface format will have an `architectures` key in their config.json. If this is not the case, could you point me to the model repos that do not have such a key in their config file?
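For context, the assumption above can be checked with a minimal illustration (not vLLM source): an HF-style repo's config.json should declare the `architectures` key, while a GGML repo may lack this file or key entirely.

```python
import json

# Minimal illustration (not vLLM source): check whether an HF-style
# config.json declares the `architectures` key that vLLM relies on.
def has_architectures(config_path: str) -> bool:
    with open(config_path) as f:
        config = json.load(f)
    return bool(config.get("architectures"))
```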

@AminAlam
Author

> Actually, I found that we have quite a few places where we try to find architectures from the HF config. Would it make more sense to factor this out (to some file under vllm.transformers_utils) to provide more useful errors in those other places as well?

That's a good idea @DarkLight1337

> Hey @AminAlam Thanks for the PR!
>
> In vLLM, we assume that model repos that are in the huggingface format will have architectures in their config.json. If this is not the case, could you point me to the model repos that do not have such a key in their config file?

Hey @ywang96, This is the model I'm trying to deploy: TheBloke/LLaMa-7B-GGML

@ywang96
Member

ywang96 commented Sep 29, 2024

> Hey @ywang96, This is the model I'm trying to deploy: TheBloke/LLaMa-7B-GGML

We don’t support deploying from GGML format on vLLM currently.

@AminAlam
Author

AminAlam commented Sep 29, 2024

> > Hey @ywang96, This is the model I'm trying to deploy: TheBloke/LLaMa-7B-GGML
>
> We don’t support deploying from GGML format on vLLM currently.

I see, thanks for mentioning it. I think it still makes sense to keep the error handling and add more information to the error messages in case someone uses the wrong format.

@DarkLight1337
Member

DarkLight1337 commented Oct 4, 2024

> Actually, I found that we have quite a few places where we try to find architectures from the HF config. Would it make more sense to factor this out (to some file under vllm.transformers_utils) to provide more useful errors in those other places as well?

Closing as superseded by #7168, thanks for raising this though!


Successfully merging this pull request may close these issues.

[Bug]: Model multimodal config initialisation unhandled and irrelevant error when no architectures found
