[Bugfix] fix qwen3 moe fp8 accuracy issue #23031
Conversation
Signed-off-by: Jinzhen Lin <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Code Review
This pull request addresses an FP8 accuracy issue for MoE models such as Qwen3. The root cause was that layers intended to be skipped were being quantized because the configuration key `modules_to_not_convert` was not being checked. The fix adds a fallback to this key when `ignored_layers` is missing or empty, which makes the loader compatible with Hugging Face's quantization configuration format and resolves the accuracy problem. The implementation is correct and well-targeted.
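For illustration, here is a minimal sketch of the fallback described above, assuming a dict-shaped quantization config; `get_ignored_layers` is a hypothetical helper for this sketch, not vLLM's actual API:

```python
from typing import Any


def get_ignored_layers(quant_config: dict[str, Any]) -> list[str]:
    """Return module names that should be skipped during FP8 quantization."""
    ignored = quant_config.get("ignored_layers")
    if not ignored:
        # Hugging Face-style checkpoints (e.g. Qwen3 MoE FP8) store the
        # skip list under `modules_to_not_convert` instead.
        ignored = quant_config.get("modules_to_not_convert") or []
    return ignored


# A Qwen3-MoE-style config that only provides the Hugging Face key:
cfg = {"quant_method": "fp8", "modules_to_not_convert": ["mlp.gate"]}
assert get_ignored_layers(cfg) == ["mlp.gate"]
```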
Could you also add some lm-eval results to show that the problem has been fixed?
Verified locally as well. Merging.

qwen3-30b-fp8 lm-eval results — Before / After
Fixes #22881.

The original issue was introduced by #22017: the `gate` layer was initialized with FP8 quantization, but its original weight is bf16.
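To make the failure mode concrete, here is a hypothetical sketch (the helper and layer names are illustrative, not vLLM's actual code) of how honoring the ignore list keeps the bf16 router weight unquantized while the FP8 expert weights are quantized as intended:

```python
def should_quantize(prefix: str, ignored_layers: list[str]) -> bool:
    """Quantize a layer to FP8 unless its name is on the ignore list."""
    return not any(prefix.endswith(name) for name in ignored_layers)


# `modules_to_not_convert` as read from the checkpoint's quantization config.
ignored = ["mlp.gate"]

# The MoE router (`gate`) weight is stored in bf16, so it must stay
# unquantized; the expert weights are genuinely FP8 and should be quantized.
print(should_quantize("model.layers.0.mlp.gate", ignored))          # False -> keep bf16
print(should_quantize("model.layers.0.mlp.experts.w13", ignored))   # True  -> FP8
```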