Fix: Proper RGBA -> RGB conversion for PIL images. #18508
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger a full CI run by default. Instead, it would only run a small and essential subset of CI tests. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
vllm/multimodal/image.py (outdated)
Maybe you could just make an `image_to_image_mode` function that has this conditional inside of it.
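A sketch of what such a helper could look like (hypothetical; the names and the white background default are assumptions, and the PR's actual implementation may differ):

```python
from PIL import Image


def rgba_to_rgb(
    image: Image.Image,
    background: tuple[int, int, int] = (255, 255, 255),
) -> Image.Image:
    """Composite an RGBA image over a solid background, then drop alpha."""
    canvas = Image.new("RGBA", image.size, background + (255,))
    canvas.alpha_composite(image)  # respects per-pixel transparency
    return canvas.convert("RGB")


def image_to_image_mode(image: Image.Image, mode: str) -> Image.Image:
    """Convert to `mode`, routing RGBA -> RGB through proper compositing."""
    if image.mode == "RGBA" and mode == "RGB":
        return rgba_to_rgb(image)
    return image.convert(mode)
```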
Thanks for reporting and fixing this issue! Can you add a unit test to avoid future regressions?
Done.
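For reference, a regression test along these lines could look like the following (a sketch built on the hypothetical `image_to_image_mode` helper above; the PR's actual test in tests/multimodal/test_image.py may differ):

```python
from PIL import Image


def test_rgba_to_rgb_composites_over_background():
    # Fully transparent pixels still carry hidden RGB values; a naive
    # convert("RGB") would expose them, proper compositing should not.
    rgba = Image.new("RGBA", (4, 4), (255, 0, 0, 0))
    rgb = image_to_image_mode(rgba, "RGB")
    assert rgb.mode == "RGB"
    # With a white background, transparent pixels become white.
    assert rgb.getpixel((0, 0)) == (255, 255, 255)
```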
Hmm actually, I see other places where [...]
Let me check.
Updated existing call sites.
Please also merge from main to fix CI failures.
Head branch was pushed to by a user without write access
This pull request has merge conflicts that must be resolved before it can be merged.
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
Signed-off-by: Chenheli Hua <[email protected]>
Directly converting RGBA to RGB via `convert` on PIL.Image produces a strange background, as demonstrated in the picture: `convert("RGB")` simply drops the alpha channel, so the stored RGB values of transparent pixels show through instead of being composited over a background.
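A minimal repro of the underlying PIL behavior (illustrative, not taken from the PR):

```python
from PIL import Image

# A fully transparent "red" image: alpha is 0 everywhere, but the hidden
# RGB channels still hold (255, 0, 0).
img = Image.new("RGBA", (2, 2), (255, 0, 0, 0))

# Naive conversion discards alpha and exposes the hidden channel values.
print(img.convert("RGB").getpixel((0, 0)))  # -> (255, 0, 0)
```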
Test plan:
Unit test:
pytest tests/multimodal/test_image.py -s
Local server:
vllm serve /home/huachenheli/local/llm/huggingface/llama4/Llama-4-Scout-17B-16E-Instruct --gpu-memory-utilization 0.5 --tensor-parallel-size 8 --max-model-len 65536
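A quick smoke test against that server (a sketch: assumes the default port 8000, the OpenAI-compatible chat endpoint, and an RGBA test file named transparent.png, none of which are specified in the PR):

```python
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Any PNG with an alpha channel exercises the RGBA -> RGB path.
with open("transparent.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="/home/huachenheli/local/llm/huggingface/llama4/Llama-4-Scout-17B-16E-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```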
Without this change: [image]
With this change: [image]