
Conversation

qixiang-99
Collaborator

@qixiang-99 qixiang-99 commented Jul 9, 2025

[nvbug][fix] Enhance ModelConfig for kv cache size calculations

Description

  • Added support for setting num_kv_heads in ModelConfig.
  • Implemented logic to determine head_size from head_size or head_dim in pretrained_config (a minimal sketch of this fallback follows this list).
  • Added warnings for default values when configurations are not set.
  • Temporarily removed a Gemma3 1B chunked prefill test case because of a kernel compatibility issue.
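A minimal sketch of the head_size fallback described above, assuming a Hugging-Face-style pretrained_config object; the helper name, logger setup, and warning text are illustrative rather than the actual ModelConfig code:

```python
import logging

logger = logging.getLogger(__name__)


def infer_head_size(pretrained_config) -> int:
    """Pick head_size from the checkpoint config, falling back to a derived default."""
    # Prefer an explicitly configured head size, trying both common attribute names.
    for attr in ("head_size", "head_dim"):
        value = getattr(pretrained_config, attr, None)
        if value is not None:
            return value
    # Neither attribute is set: derive the value and warn that a default is being used.
    head_size = pretrained_config.hidden_size // pretrained_config.num_attention_heads
    logger.warning(
        "head_size/head_dim not found in pretrained_config; "
        "defaulting to hidden_size // num_attention_heads = %d", head_size)
    return head_size
```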

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous; skipping validation without due care can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous; reusing a stale pipeline without due care can break top of tree.

Summary by CodeRabbit

  • New Features

    • Added detailed statistics for free cache blocks per attention window size, now accessible in both C++ and Python interfaces (a brief usage sketch follows this list).
  • Improvements

    • Enhanced model configuration handling for cache size calculations, providing more robust and explicit parameter management.
    • Improved resource management logic to better support Variable Sliding Window Attention (VSWA) by accurately tracking and reporting free blocks and available tokens.
  • Tests

    • Added and updated integration tests, including new test cases for chunked prefill scenarios (currently skipped due to an upstream issue).
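As a rough usage sketch of the new per-window statistic: the manager handle and wrapper surface below are assumptions; only get_kv_cache_stats(), num_free_blocks_per_window_size, and free_num_blocks come from this PR.

```python
def min_free_blocks_across_windows(kv_cache_manager) -> int:
    """Conservative free-block count for VSWA: the minimum across window sizes."""
    stats = kv_cache_manager.get_kv_cache_stats()
    per_window = dict(stats.num_free_blocks_per_window_size)
    # With a single window size (non-VSWA), fall back to the aggregate counter.
    return min(per_window.values()) if per_window else stats.free_num_blocks
```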

@qixiang-99
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #11402 [ run ] triggered by Bot

@jaedeok-nvidia
Collaborator

Thanks for fixing this missing part. This PR will resolve the memory underutilization issue of the VSWA model. There are still more things to do. Let's do it step by step. Thanks again for your help!

@tensorrt-cicd
Collaborator

PR_Github #11402 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8433 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@qixiang-99 qixiang-99 marked this pull request as ready for review July 9, 2025 17:10
@qixiang-99 qixiang-99 requested a review from a team as a code owner July 9, 2025 17:10
@brb-nv
Collaborator

brb-nv commented Jul 9, 2025

Temporarily removed a Gemma3 1B chunked prefill test case because of a kernel compatibility issue.

Do colleagues working on kernels know about this? Should we keep the test around and waive it with a bug instead of removing? Chunked prefill is important and we'll eventually need it.

@qixiang-99
Collaborator Author

Temporarily removed a Gemma3 1B chunked prefill test case because of a kernel compatibility issue.

Do colleagues working on kernels know about this? Should we keep the test around and waive it with a bug instead of removing? Chunked prefill is important and we'll eventually need it.

I will add it back and waive it with the related nvbug link. Thanks for the suggestion!

@brb-nv
Collaborator

brb-nv commented Jul 9, 2025

Just the one comment about keeping the test around.
Rest looks good.

Collaborator

@brb-nv brb-nv left a comment

LGTM

@qixiang-99
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #11466 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #11466 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8485 completed with status: 'FAILURE'

@qixiang-99
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #11471 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #11471 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8486 completed with status: 'SUCCESS'

@qixiang-99
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #11678 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #11678 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8647 completed with status: 'FAILURE'

@qixiang-99 qixiang-99 requested a review from a team as a code owner July 14, 2025 23:59
@qixiang-99 qixiang-99 requested a review from achartier July 14, 2025 23:59
@tensorrt-cicd
Collaborator

PR_Github #12021 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12021 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8927 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@qixiang-99 qixiang-99 requested a review from hlu1 July 16, 2025 17:05
- Added support for setting num_kv_heads in ModelConfig.
- Implemented logic to determine head_size from head_size or head_dim in pretrained_config.
- Added warnings for default values when configurations are not set.

Signed-off-by: qixiang-99 <[email protected]>
- Expose `num_free_blocks_per_window_size` via kv_cache_stats.
- Use `num_free_blocks_per_window_size` to update `get_num_free_blocks()` and `get_num_available_tokens()`.

Signed-off-by: qixiang-99 <[email protected]>
@hlu1 hlu1 force-pushed the fix/model-config branch from 9fecb35 to b9855b9 Compare July 16, 2025 20:27
Contributor

coderabbitai bot commented Jul 16, 2025

Walkthrough

The changes expand kv cache management and reporting to include per-window-size free block statistics, update Python bindings and resource management logic to support Variable Sliding Window Attention (VSWA), and enhance model configuration handling for kv cache sizing. Integration tests are updated with new test cases and a bug-skipped scenario.

Changes

| File(s) | Change Summary |
| --- | --- |
| cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h | Extended KvCacheStats with per-window-size free block stats; added a method to retrieve these stats. |
| cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp | Exposed the new stats as a Python property in the bindings. |
| tensorrt_llm/_torch/model_config.py | Improved kv cache size parameter handling in model config logic. |
| tensorrt_llm/_torch/pyexecutor/resource_manager.py | Added VSWA-aware logic for free block and available token calculation using the new per-window stats. |
| tests/integration/defs/accuracy/test_llm_api_pytorch.py | Removed a skip decorator; added a bug-skipped test for chunked prefill in Gemma3_1BInstruct. |
| tests/integration/test_lists/test-db/l0_h100.yml | Added two new test cases for Gemma3_1BInstruct to the test list. |

Poem

In the cache where windows slide and blocks are free,
A rabbit peeks at stats, as detailed as can be.
VSWA or not, the logic’s now keen,
With tests that skip and configs pristine.
🐇 Hopping through changes, with code so bright—
Every block and token now counted right!


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@hlu1
Collaborator

hlu1 commented Jul 16, 2025

/bot reuse-pipeline

@hlu1 hlu1 enabled auto-merge (squash) July 16, 2025 20:27
@tensorrt-cicd
Collaborator

PR_Github #12113 [ reuse-pipeline ] triggered by Bot

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (1)
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (1)

34-41: <map> header missing – compilation depends on transitive includes

std::map is used in several declarations (e.g. BlocksPerWindow, the new numFreeBlocksPerWindowSize) but <map> is not explicitly included. Relying on a transitive include is brittle and may break with future standard-library or upstream header changes.

+#include <map>
🧹 Nitpick comments (4)
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (2)

1462-1466: Consider returning a const& to avoid an extra copy

getNumFreeBlocksPerWindowSize() currently copies the map on every call.
If callers only need read-only access (which seems to be the case), returning a const std::map<…>& would avoid the extra allocation and copy.

-[[nodiscard]] std::map<SizeType32, SizeType32> getNumFreeBlocksPerWindowSize() const
+[[nodiscard]] std::map<SizeType32, SizeType32> const& getNumFreeBlocksPerWindowSize() const

That would require BlockManager::getNumFreeBlocksPerWindowSize() to return a reference as well.
Not blocking, but worth considering for hot paths.


1478-1482: Move-construct the map to avoid one copy

When populating the stats object we can transfer ownership of the temporary map produced by getNumFreeBlocksPerWindowSize():

-        kvCacheStats.numFreeBlocksPerWindowSize = getNumFreeBlocksPerWindowSize();
+        kvCacheStats.numFreeBlocksPerWindowSize = std::move(getNumFreeBlocksPerWindowSize());

This yields a move instead of a deep copy (requires <utility> include). Minor, but free.

tensorrt_llm/_torch/pyexecutor/resource_manager.py (2)

525-533: Fix line length and consider logging optimization.

The VSWA logic is correct, but there are a few improvements needed:

  1. Line 528 exceeds the 120-character limit (static analysis flag)
  2. Info-level logging might be too verbose for production use
  3. Consider caching the stats object to avoid repeated property access

Apply this diff to address these issues:

-        if self.is_vswa:
-            logger.info(
-                f"For VSWA case, we return the minimum of the number of free blocks for each window size: {self.impl.get_kv_cache_stats().num_free_blocks_per_window_size}"
-            )
-            return min(self.impl.get_kv_cache_stats().
-                       num_free_blocks_per_window_size.values())
-        else:
-            return self.impl.get_kv_cache_stats().free_num_blocks
+        if self.is_vswa:
+            stats = self.impl.get_kv_cache_stats()
+            logger.debug(
+                f"VSWA: returning minimum free blocks across window sizes: "
+                f"{stats.num_free_blocks_per_window_size}"
+            )
+            return min(stats.num_free_blocks_per_window_size.values())
+        else:
+            return self.impl.get_kv_cache_stats().free_num_blocks

538-548: Improve consistency and optimize repeated property access.

The VSWA logic is correct, but there are consistency and performance improvements to consider:

  1. Use self.is_vswa consistently instead of checking len(self.max_attention_window_vec) > 1
  2. Cache the stats object to avoid repeated property access
  3. Simplify the conditional structure for better readability

Apply this diff to improve consistency and performance:

-        if self.max_attention_window_vec and len(
-                self.max_attention_window_vec) > 1:
-            # VSWA case, the available tokens should the the minimum of the available tokens for each window size
-            min_free_blocks = min(self.impl.get_kv_cache_stats().
-                                  num_free_blocks_per_window_size.values())
-            res = min_free_blocks * self.tokens_per_block - self.num_extra_kv_tokens - max_num_draft_tokens
-        else:
-            res = (self.get_num_free_blocks() * self.tokens_per_block -
-                   self.num_extra_kv_tokens - max_num_draft_tokens)
-        return res
+        if self.is_vswa:
+            # VSWA case: use minimum free blocks across all window sizes
+            stats = self.impl.get_kv_cache_stats()
+            min_free_blocks = min(stats.num_free_blocks_per_window_size.values())
+            available_tokens = min_free_blocks * self.tokens_per_block
+        else:
+            available_tokens = self.get_num_free_blocks() * self.tokens_per_block
+        
+        return available_tokens - self.num_extra_kv_tokens - max_num_draft_tokens
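A quick worked example of the token calculation above, using made-up numbers purely for illustration (none of these values come from the PR):

```python
# Illustrative values only.
tokens_per_block = 32
num_free_blocks_per_window_size = {512: 10, 32768: 40}  # window size -> free blocks
num_extra_kv_tokens = 0
max_num_draft_tokens = 4

min_free_blocks = min(num_free_blocks_per_window_size.values())  # 10, bounded by the 512 window
available_tokens = (min_free_blocks * tokens_per_block
                    - num_extra_kv_tokens - max_num_draft_tokens)
print(available_tokens)  # 10 * 32 - 0 - 4 = 316
```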
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fa34cb7 and b9855b9.

📒 Files selected for processing (6)
  • cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (3 hunks)
  • cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (1 hunks)
  • tensorrt_llm/_torch/model_config.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py (2 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
🧰 Additional context used
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/resource_manager.py

528-528: Line too long (171 > 120)

(E501)

🔇 Additional comments (8)
tests/integration/test_lists/test-db/l0_h100.yml (1)

27-27: LGTM: VSWA test addition aligns with PR objectives.

The addition of test_auto_dtype_vswa test case is consistent with the PR's goal of enhancing VSWA support and kv cache management improvements.

cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (1)

301-302: LGTM: Property binding follows established patterns.

The addition of num_free_blocks_per_window_size property to the KvCacheStats binding is correct and follows the same pattern as other properties in the class. This change enables Python access to per-window-size free block statistics, which supports the VSWA functionality as described in the PR objectives.

tensorrt_llm/_torch/model_config.py (3)

309-316: LGTM: Improved robustness with user feedback.

The logic to handle tokens_per_block parameter is well-implemented:

  • Provides clear warning when the parameter is not set
  • Uses appropriate default fallback behavior
  • Ensures the parameter is set when provided

This enhances the robustness of the kv cache size calculation configuration.


317-322: LGTM: Correct kv heads calculation for parallelization.

The calculation of num_kv_heads is correctly implemented:

  • Properly retrieves num_key_value_heads from pretrained config with fallback to num_heads
  • Correctly accounts for tensor parallelization (tp_size) and context parallelization (cp_size)
  • Uses appropriate getter method to retrieve the attribute

This supports the enhanced kv cache management for VSWA functionality; a minimal sketch of the calculation follows.
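The sketch below uses an illustrative function name and Hugging-Face-style attribute names; the actual ModelConfig code may clamp or round differently:

```python
def infer_num_kv_heads(pretrained_config, tp_size: int, cp_size: int) -> int:
    """Per-rank KV head count under tensor and context parallelism."""
    # GQA/MQA checkpoints expose num_key_value_heads; plain MHA falls back to num_attention_heads.
    num_kv_heads = getattr(pretrained_config, "num_key_value_heads",
                           pretrained_config.num_attention_heads)
    # Each rank holds only its shard of the KV heads.
    return num_kv_heads // (tp_size * cp_size)
```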


344-355: LGTM: Robust head size determination with fallback logic.

The enhanced logic for determining size_per_head is well-designed:

  • Iterates through multiple possible attribute names (head_size, head_dim)
  • Uses the first available attribute found
  • Provides clear warning when falling back to computed value
  • Maintains backward compatibility with existing configurations

This improves the robustness of head size detection across different model configurations.

tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

539-557: Well-implemented test for chunked prefill with VSWA configuration.

The test method correctly implements chunked prefill testing with Variable Sliding Window Attention (VSWA) configuration. The skip decorator is appropriately applied with a clear reference to the kernel compatibility issue (nvbug 5338620), which aligns with the PR objectives of temporarily handling kernel compatibility issues.

The test configuration properly (a hedged configuration sketch follows this list):

  • Uses the class-level kv_cache_config with block reuse disabled
  • Sets up VSWA with Gemma3 1B specific attention window pattern
  • Enables chunked prefill with reasonable max_num_tokens setting
  • Runs appropriate evaluation task
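A hedged sketch of what such a setup might look like with the LLM API; the checkpoint id, attention-window pattern, and token limits are assumptions, not values copied from the test:

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Assumed Gemma3-style VSWA pattern: five 512-token sliding windows per 32k global window.
kv_cache_config = KvCacheConfig(
    enable_block_reuse=False,
    max_attention_window=[512, 512, 512, 512, 512, 32768],
)

llm = LLM(
    model="google/gemma-3-1b-it",   # assumed checkpoint id
    kv_cache_config=kv_cache_config,
    enable_chunked_prefill=True,
    max_num_tokens=256,             # illustrative chunked-prefill budget
)
```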
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (1)

183-185: Extension of KvCacheStats looks good

The addition of numFreeBlocksPerWindowSize cleanly extends the stats struct without breaking existing initialisation semantics (default-constructed std::map is empty).

tensorrt_llm/_torch/pyexecutor/resource_manager.py (1)

196-196: Good refactoring to centralize VSWA condition.

Converting the VSWA check to an instance variable improves code maintainability and avoids repeated calculations throughout the class methods.

@tensorrt-cicd
Collaborator

PR_Github #12113 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #12021 for commit b9855b9

@hlu1 hlu1 merged commit e09e409 into NVIDIA:main Jul 16, 2025
4 checks passed
yizhang-nv pushed a commit to yizhang-nv/TensorRT-LLM that referenced this pull request Jul 17, 2025