[ROCm] Enable custom paged attention kernel for Navi3/4 #13843
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
47d29aa to 7128062
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Hosang Yoon <[email protected]>
Do we know how this stacks up against the new AMD triton kernels? (cc @SageMoore for direction to the kernels)
Unfortunately, that kernel is only integrated into V1 right now. We should definitely integrate that kernel into V0 and see what the performance is like, though. Here's the kernel in question if you are interested: https://github.com/vllm-project/vllm/blob/main/vllm/attention/ops/chunked_prefill_paged_decode.py#L186
@tlrmchlsmth and @WoosukKwon, could you please help review and approve this PR? It provides a good end-to-end performance gain for SOTA models running on AMD Radeon GPUs with vLLM. Thanks.
Hi @hyoon1, thank you for your contribution. I am hesitant to review and accept this PR, mainly because it only applies to vLLM V0. We are imminently going to switch to V1 by default starting with the 0.8.0 release, which will bring large performance improvements with it. V1 natively uses chunked prefill, and as I understand it, this kernel doesn't fit easily into that case. For V1 I think we have a good solution in the triton kernels added in #14152, but I would also be interested in seeing how the kernels compare.
Hi @tlrmchlsmth, thanks for letting me know about the new V1 kernel. As you mentioned, there seem to be significant performance improvements in V1 due to the new prefill/decode method. As a result, I compared the performance of V0 with custom_paged_attention (CPA) applied against V1 with the new kernel. The comparison showed that, as of now, the approach with CPA applied to V0 performs better on AMD Navi GPUs in terms of output token throughput. Although it seems that V1 is not yet fully optimized, we do have customers who want high performance on AMD Navi GPUs. Therefore, until V1 surpasses the optimized V0 in performance, we want to offer the optimized V0 option. Additionally, we have other optimization elements that can further improve performance, and we plan to submit more pull requests. Here are benchmark results from Navi3x/Navi4x GPU systems:
[Benchmark tables: Navi4 GPU and Navi3 GPU, V0 + this PR (CPA) vs. V1 with the new kernel]
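For context, here is a minimal sketch of how such a V0-vs-V1 output-token-throughput comparison could be run with vLLM's offline API. The VLLM_USE_V1 toggle, model path, prompt set, and sampling settings are assumptions for illustration, not the exact benchmark configuration behind the numbers above.

```python
# Minimal sketch of a V0-vs-V1 throughput comparison (assumed configuration,
# not the exact benchmark used for the numbers above).
import os
import time

# Select the engine before importing vLLM: "0" for V0, "1" for V1 (assumed toggle).
os.environ.setdefault("VLLM_USE_V1", "0")

from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/model",          # hypothetical local model path
    max_model_len=4096,
    gpu_memory_utilization=0.95,
)
params = SamplingParams(temperature=0.8, max_tokens=256)
prompts = ["Explain paged attention in one paragraph."] * 32

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"output token throughput: {generated / elapsed:.1f} tok/s")
```

Running the same script twice, once with VLLM_USE_V1=0 and once with VLLM_USE_V1=1, gives a rough side-by-side comparison on the same hardware.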
This pull request has merge conflicts that must be resolved before it can be merged.
Closing this PR in favor of new v1 support: #17004 |
Add additional custom paged attention kernels for AMD Navi 3x/4x GPU support, based on PR #12348.
Due to architectural differences from the MI series, specific instructions and detailed logic have changed (mfma16 -> wmma16/wmma16_gfx12), so new kernels have been added for each architecture.
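For readers unfamiliar with what a paged attention decode kernel computes, the following is an illustrative PyTorch-level sketch of single-query decode against a block-paged KV cache. The tensor names and cache layout are simplified assumptions; the actual HIP kernels operate on vLLM's real KV-cache layout and use the wmma16 / wmma16_gfx12 matrix instructions mentioned above.

```python
# Illustrative-only sketch of paged attention decode for one query token.
# Layout and names are simplified assumptions, not vLLM's actual cache format.
import torch

def paged_attention_decode(
    query: torch.Tensor,        # [num_heads, head_size]
    key_cache: torch.Tensor,    # [num_blocks, block_size, num_heads, head_size]
    value_cache: torch.Tensor,  # [num_blocks, block_size, num_heads, head_size]
    block_table: torch.Tensor,  # physical block indices for this sequence
    seq_len: int,
    scale: float,
) -> torch.Tensor:
    # Gather the logical KV sequence from scattered physical cache blocks.
    num_heads, head_size = key_cache.shape[-2], key_cache.shape[-1]
    keys = key_cache[block_table].reshape(-1, num_heads, head_size)[:seq_len]
    values = value_cache[block_table].reshape(-1, num_heads, head_size)[:seq_len]

    # Scaled dot-product attention of the single query token against the cache.
    scores = torch.einsum("hd,shd->hs", query, keys) * scale   # [num_heads, seq_len]
    probs = torch.softmax(scores, dim=-1)
    return torch.einsum("hs,shd->hd", probs, values)           # [num_heads, head_size]
```

The real kernels parallelize this across sequences, heads, and KV blocks on the GPU; the architecture-specific part is mainly which matrix-multiply instructions (mfma vs. wmma variants) carry out the dot products.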
Performance Gain
Script: python ./benchmarks/benchmark_throughput.py --model <path_to_model> --trust-remote-code --dataset <ShareGPT_V3_unfiltered_cleaned_split.json> --num_prompts 1000 --max-model-len 4096 --gpu-memory-utilization 0.95
[Performance gain tables: Navi 3 and Navi 4]
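As a side note for reproducing these numbers, a rough way to check whether the local GPU falls into the Navi3x/Navi4x family targeted by these kernels is to inspect its gfx architecture name. The gfx11xx/gfx12xx prefixes below are assumptions based on public ROCm architecture naming, and gcnArchName is only exposed on ROCm builds of PyTorch.

```python
# Rough check of the local GPU family (assumes a ROCm build of PyTorch).
import torch

def gpu_family() -> str:
    if not torch.cuda.is_available():
        return "no GPU visible"
    arch = getattr(torch.cuda.get_device_properties(0), "gcnArchName", "")
    if arch.startswith("gfx11"):
        return f"Navi3x-class ({arch})"   # assumed mapping
    if arch.startswith("gfx12"):
        return f"Navi4x-class ({arch})"   # assumed mapping
    return f"other ({arch or 'unknown'})"

if __name__ == "__main__":
    print(gpu_family())
```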