[Build] Allow shipping PTX on a per-file basis #18155
Conversation
Signed-off-by: Lucas Wilkinson <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add … 🚀
```diff
 endif()
-list(SORT SRC_CUDA_ARCHS COMPARE NATURAL ORDER ASCENDING)
+list(SORT _SRC_CUDA_ARCHS COMPARE NATURAL ORDER ASCENDING)
```
Do the `_TGT_CUDA_ARCHS` need to be sorted too?
For a more general utility you're right, they should be! But the target arches come from `extract_unique_cuda_archs_ascending`, so they are already sorted. I can open a new PR to refactor some of this, though; I'd like to preserve current behavior as much as possible in this one.
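The `COMPARE NATURAL` mode in the sort above matters because CUDA arch strings compare numerically, not lexicographically. A minimal standalone sketch (list values are hypothetical, not from the PR):

```cmake
# Hypothetical standalone example: COMPARE NATURAL keeps numeric arch
# strings in numeric order, whereas the default STRING compare would
# lexicographically place "10.0" before "7.5".
set(_SRC_CUDA_ARCHS "10.0" "7.5" "9.0a" "8.0")
list(SORT _SRC_CUDA_ARCHS COMPARE NATURAL ORDER ASCENDING)
message(STATUS "${_SRC_CUDA_ARCHS}")  # 7.5;8.0;9.0a;10.0
```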
Looks reasonable to me.
Seems relatively safe to me. There might be a regression for marlin because of slow bf16 convert on A100 (IIRC) that might transfer to newer hardware, but also might not. Ultimately shouldn't be that big of a deal. @jinzhen-lin please step in if you have concerns with this since you refactored marlin most recently.
The failures look closely related
Currently, I believe that the performance of …
Failures resolved
This is a nice middle ground, thanks for getting it working!
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]>
Wheel size: 323.20 MB -> 324.36 MB
To help with the growing wheel size due to Blackwell, allow shipping PTX on a per-file basis for heavy kernels that don't take advantage of new hardware features. There are now enough different gencodes for certain kernels that it makes sense to ship a single PTX implementation instead of multiple SASS binaries. This currently grows the wheel size mildly but should help keep it capped as the Blackwell gencodes are added.
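As a rough illustration of the idea (file and target names are hypothetical, not the PR's actual CMake), a single source can be compiled to PTX only by giving it a virtual-architecture gencode, so nvcc embeds JIT-compilable PTX instead of per-arch SASS:

```cmake
# Hypothetical sketch: most kernels are built as SASS for each shipped
# arch, but heavy_kernel.cu ships only PTX. code=compute_80 embeds PTX
# that the driver JIT-compiles at load time on any GPU with compute
# capability >= 8.0, at the cost of a one-time JIT delay.
add_library(_C SHARED kernels.cu heavy_kernel.cu)
set_source_files_properties(heavy_kernel.cu PROPERTIES
  COMPILE_OPTIONS "--generate-code=arch=compute_80,code=compute_80")
```

Shipping PTX trades first-load JIT time for wheel size; the driver caches the JIT result, so subsequent loads are fast.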