
Conversation

tdoublep
Member

@tdoublep tdoublep commented Jul 24, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

This PR changes the KV cache layout of the FlashAttention backend so that it matches the "FlashInfer layout", (num_blocks, 2, ...), rather than (2, num_blocks, ...). This makes memory management for hybrid models significantly easier.
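For reference, here is a minimal sketch (not the actual backend code; the dimensions are made up for illustration) of how the key/value split changes with the new layout:

```python
import torch

# Hypothetical sizes, for illustration only.
num_blocks, block_size, num_kv_heads, head_size = 8, 16, 4, 64

# Old layout: (2, num_blocks, block_size, num_kv_heads, head_size)
kv_cache_old = torch.zeros(2, num_blocks, block_size, num_kv_heads, head_size)
key_cache, value_cache = kv_cache_old.unbind(dim=0)

# New "FlashInfer layout": (num_blocks, 2, block_size, num_kv_heads, head_size)
kv_cache_new = torch.zeros(num_blocks, 2, block_size, num_kv_heads, head_size)
key_cache, value_cache = kv_cache_new.unbind(dim=1)

# Either way each tensor has shape (num_blocks, block_size, num_kv_heads, head_size),
# but with the new layout the K and V blocks for a given block index sit next to
# each other in memory, which simplifies per-block management for hybrid models.
print(key_cache.shape)  # torch.Size([8, 16, 4, 64])
```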

I tried to make the changes to the FlexAttention backend too, but ran into some problems; perhaps we could address that separately. cc @LucasWilkinson

The Triton backend changes are already addressed by #21197.

I have tried to change the nixl logic accordingly, but could definitely use your eyes on it, @NickLucche.

Test Plan

$ VLLM_ATTENTION_BACKEND=FLASH_ATTN_V1 lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct --tasks gsm8k --num_fewshot 5 --batch_size auto --limit 500

Test Result

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.792|±  |0.0182|
|     |       |strict-match    |     5|exact_match|↑  |0.770|±  |0.0188|

(Optional) Documentation Update

Signed-off-by: Thomas Parnell <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Jul 24, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request refactors the KV cache layout from (2, num_blocks, ...) to (num_blocks, 2, ...) across various attention backends. The changes in flash_attn.py and flex_attention.py are consistent and correctly update the unbind operation to match the new layout. However, in triton_attn.py, the conditional logic to support both old and new layouts introduces a critical bug. When VLLM_V1_USE_PREFILL_DECODE_ATTENTION is enabled, get_kv_cache_shape returns a 5D tensor for the old layout path, which is incompatible with PagedAttention.split_kv_cache that expects a 3D tensor. This will lead to a runtime error. A fix is proposed to return the correctly shaped 3D tensor for this path.
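For illustration only (this is not the actual vLLM code; the shapes and the flattening step below are assumptions), the incompatibility amounts to handing a 5D block-layout tensor to a splitter that expects the flat 3D layout:

```python
import torch

# Hypothetical sizes, for illustration only.
num_blocks, block_size, num_kv_heads, head_size = 8, 16, 4, 64

# 5D shape returned on the old-layout path:
# (2, num_blocks, block_size, num_kv_heads, head_size)
kv_cache_5d = torch.zeros(2, num_blocks, block_size, num_kv_heads, head_size)

# A splitter expecting the 3D shape
# (2, num_blocks, block_size * num_kv_heads * head_size)
# needs the per-block dimensions flattened first:
kv_cache_3d = kv_cache_5d.view(2, num_blocks, -1)
key_cache, value_cache = kv_cache_3d.unbind(dim=0)
print(key_cache.shape)  # torch.Size([8, 4096])
```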

@tdoublep tdoublep changed the title from "[V1] [Kernel] Change KV cache layout to (num_blocks, 2, ...) for all attention backends" to "[V1] [Kernel] Change KV cache layout to (num_blocks, 2, ...) for FlashAttention backend" Jul 24, 2025
@tdoublep tdoublep marked this pull request as ready for review July 24, 2025 19:15
@LucasWilkinson LucasWilkinson added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 24, 2025

mergify bot commented Jul 26, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @tdoublep.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 26, 2025
tdoublep added 2 commits July 29, 2025 10:17
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
@mergify mergify bot removed the needs-rebase label Jul 29, 2025
@tdoublep
Member Author

I have debugged a few more issues with the failing tests. I expect the next build to pass all CI tests.

cc @LucasWilkinson @NickLucche @tlrmchlsmth

@NickLucche
Collaborator

We weren't so lucky with models/test_initialization.py::test_can_initialize[HunYuanDenseV1ForCausalLM], but let's check whether it's related or just a version mismatch.

@tdoublep
Member Author

tdoublep commented Aug 1, 2025

@NickLucche That error seems to be fixed on main; trying again now.

@tdoublep
Member Author

tdoublep commented Aug 1, 2025

I don't understand why the entrypoints-test-api-server tests keep failing. If I run the failing tests locally, they pass. @NickLucche @LucasWilkinson

Update: it appears to be hitting a CUDA OOM in the CI (which I think runs on L4), which explains why the tests pass locally for me. It does suggest a real problem, though; this PR shouldn't change memory usage. Will debug.

@tdoublep
Member Author

tdoublep commented Aug 1, 2025

I tried running the same tests on an L4 GPU locally and they pass, so something strange is going on here.

@NickLucche
Collaborator

We might have better luck this time. These tests are being a real nuisance lately.

Anyway, to recap the situation: we're still waiting for #20189 to land to avoid breaking the llm-d integration.

@tdoublep
Member Author

tdoublep commented Aug 2, 2025

Yeah. I don't know what is up with the tests at the moment. I just wanted to see if this change works in principle. Let's make sure #20189 lands before we merge this one.

@tdoublep
Member Author

tdoublep commented Aug 3, 2025

Some of these distributed errors look legit; will investigate.

@tdoublep
Member Author

tdoublep commented Aug 7, 2025

I've gone through the (many) failing CI checks, and all of them look like things that have either been fixed on main in the last day or so, or are unrelated.

@mgoin
Member

mgoin commented Aug 10, 2025

Have there been any performance checks for this change?

@tdoublep
Member Author

tdoublep commented Aug 13, 2025

@mgoin No, we still need to do that. It looks like there are some correctness issues I need to address first, though, given all the failing CI tests.

There is also #20189 that we need to land first to prevent breaking llm-d integration.

@mergify mergify bot added the kv-connector label Sep 19, 2025

mergify bot commented Sep 19, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @tdoublep.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Sep 19, 2025