[TRTLLM-7009][fix] Support cuda graph padding for qwen-vl models #7235
Conversation
Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>
📝 Walkthrough

Adds padding logic in the CUDA-graph path for multimodal inputs: when the mrope_position_deltas count is shorter than generation_requests (due to CUDA-graph dummy requests), append dummy tensors to align the lengths, log the padding count, assert final parity, then concatenate inputs as before.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant S as Scheduler
    participant ME as ModelEngine
    participant CG as CUDA Graph Exec
    S->>ME: Provide scheduled_requests (incl. generation_requests)
    ME->>ME: Build mrope_position_deltas_list (multimodal)
    alt lengths mismatch (due to CUDA-graph dummies)
        ME->>ME: Create dummy_tensor (shape of first delta)
        ME->>ME: Append dummy_tensor for trailing is_cuda_graph_dummy
        ME->>ME: Assert lengths match generation_requests
    else lengths match
        ME->>ME: Proceed
    end
    ME->>ME: Concatenate inputs (incl. padded deltas)
    ME->>CG: Launch CUDA-graph execution
    CG-->>ME: Outputs
    ME-->>S: Generation results
```
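A minimal sketch of the padding step the diagram describes, restated as a free function. The names follow the walkthrough (`mrope_position_deltas_list`, `is_cuda_graph_dummy`); this is not the exact source, and it uses `zeros_like` as the review below recommends rather than the PR's `empty_like`:

```python
import torch

def pad_mrope_deltas(mrope_position_deltas_list, generation_requests, logger):
    """Pad mrope deltas with dummy tensors for trailing CUDA-graph dummy requests."""
    if len(mrope_position_deltas_list) != len(generation_requests):
        # One neutral tensor shaped like a real per-request delta; zeros keep
        # downstream MROPE math inert for the dummy slots.
        dummy_tensor = torch.zeros_like(mrope_position_deltas_list[0])
        padding_count = 0
        for request in reversed(generation_requests):
            if request.is_cuda_graph_dummy:
                mrope_position_deltas_list.append(dummy_tensor)
                padding_count += 1
            else:
                break
        logger.debug(f"Padded {padding_count} dummy mrope_position_deltas tensors")
    # Final parity check before concatenation.
    assert len(mrope_position_deltas_list) == len(generation_requests)
    return torch.cat(mrope_position_deltas_list, dim=0)
```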
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers
Actionable comments posted: 1
🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
1522-1534: Make padding logic robust and O(1): compute the missing count and pad in one shot.

Today you rely on trailing dummy requests and append in a loop. If that invariant ever changes, the assert will still fire. Compute:
- expected = len(generation_requests)
- actual = len(mrope_position_deltas_list)
- missing = expected - actual
- trailing_dummy = number of trailing is_cuda_graph_dummy
Then pad min(missing, trailing_dummy) in one extend() to guarantee ordering and avoid per-iteration Python overhead.
Would you confirm that CUDA-graph dummy generation requests are always trailing in scheduled_requests.generation_requests? If not, we should gate the padding with an explicit check and fail early with a clearer error.

Suggested refactor:
```diff
-if len(mrope_position_deltas_list) != len(
-        scheduled_requests.generation_requests):
-    dummy_tensor = torch.empty_like(
-        mrope_position_deltas_list[0])
-    logger.debug(f"[DEBUG] CUDA Graph MROPE Padding:")
-    padding_count = 0
-    for request in reversed(
-            scheduled_requests.generation_requests):
-        if request.is_cuda_graph_dummy:
-            mrope_position_deltas_list.append(dummy_tensor)
-            padding_count += 1
-        else:
-            break
-    logger.debug(
-        f" - Total padded: {padding_count} dummy tensors for mrope_position_deltas"
-    )
+expected = len(scheduled_requests.generation_requests)
+actual = len(mrope_position_deltas_list)
+if actual != expected:
+    dummy_tensor = torch.zeros_like(mrope_position_deltas_list[0])
+    # Count trailing CUDA-graph dummies
+    trailing_dummy = 0
+    for req in reversed(scheduled_requests.generation_requests):
+        if req.is_cuda_graph_dummy:
+            trailing_dummy += 1
+        else:
+            break
+    missing = expected - actual
+    pad = min(missing, trailing_dummy)
+    if pad > 0:
+        mrope_position_deltas_list.extend([dummy_tensor] * pad)
+        logger.debug(f"CUDA Graph MROPE padding: padded={pad}, missing={missing}, trailing_dummy={trailing_dummy}")
```
1535-1539: Fix Ruff E501 and streamline logging.
- The assert message exceeds 120 chars (Ruff E501).
- Also, drop the "[DEBUG]" prefix; the logger already indicates level.
Apply this diff:
```diff
-    logger.debug(
-        f" - Total padded: {padding_count} dummy tensors for mrope_position_deltas"
-    )
-assert len(mrope_position_deltas_list) == len(scheduled_requests.generation_requests), \
-    f"MROPE deltas mismatch: {len(mrope_position_deltas_list)} != {len(scheduled_requests.generation_requests)}"
+expected = len(scheduled_requests.generation_requests)
+actual = len(mrope_position_deltas_list)
+assert actual == expected, (
+    f"MROPE deltas mismatch: actual={actual} expected={expected}"
+)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py
: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures
Files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
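As an illustration of the duck-typing guideline quoted above, a minimal sketch: keep the try body to the single duck-typed access and move the real logic to else, so unrelated failures in that logic are not swallowed. The attribute name here is illustrative, borrowed from this PR's context:

```python
def get_mrope_delta(request):
    """Return the request's mrope delta tensor, or None if it has none."""
    try:
        delta = request.mrope_position_deltas  # duck-typed attribute access only
    except AttributeError:
        return None
    else:
        # Real logic lives in else, per the guideline above.
        return delta
```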
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)
Files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6728
File: cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp:966-966
Timestamp: 2025-08-08T04:10:19.038Z
Learning: TensorRT plugins currently don't support padding functionality, and TensorRT is not getting new features (in maintenance mode). This means that duplicating parameters like mExpertHiddenSize in function calls, even with TODO comments, can be acceptable as pragmatic solutions within these constraints.
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-08T04:10:19.038Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6728
File: cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp:966-966
Timestamp: 2025-08-08T04:10:19.038Z
Learning: TensorRT plugins currently don't support padding functionality, and TensorRT is not getting new features (in maintenance mode). This means that duplicating parameters like mExpertHiddenSize in function calls, even with TODO comments, can be acceptable as pragmatic solutions within these constraints.
Applied to files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
tensorrt_llm/logger.py (1)
debug (143-144)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/model_engine.py
1539-1539: Line too long (128 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
1540-1542: Sanity check: concat dimension correctness.

Confirm that each per-request mrope_position_deltas is shaped so that torch.cat(..., dim=0) forms the expected batch-major layout the model consumes. If per-request tensors already carry a batch dimension, dim=0 could double-count. If needed, consider stacking and reshaping to the exact layout the kernel expects.
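To make the dim=0 concern concrete, a small standalone shape check (the per-request shapes are assumptions for illustration, not taken from the model):

```python
import torch

# If each per-request delta is [1, 1] (batch dim already present),
# cat along dim=0 stacks them into a [num_requests, 1] batch as intended.
per_request = [torch.zeros(1, 1) for _ in range(4)]
print(torch.cat(per_request, dim=0).shape)   # torch.Size([4, 1])

# If each delta were instead [k] with no batch dim, cat(dim=0) would flatten
# them into [4 * k]; torch.stack would be needed to add the batch dim.
flat = [torch.zeros(2) for _ in range(4)]
print(torch.cat(flat, dim=0).shape)    # torch.Size([8])
print(torch.stack(flat, dim=0).shape)  # torch.Size([4, 2])
```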
```python
dummy_tensor = torch.empty_like(
    mrope_position_deltas_list[0])
logger.debug(f"[DEBUG] CUDA Graph MROPE Padding:")
```
Use zeros_like instead of empty_like to avoid uninitialized memory in model math.
torch.empty_like leaves the tensor uninitialized; downstream MROPE computations could ingest garbage values. For padding dummies we want a neutral delta (zeros).
Apply this diff:
```diff
-dummy_tensor = torch.empty_like(
-    mrope_position_deltas_list[0])
+dummy_tensor = torch.zeros_like(
+    mrope_position_deltas_list[0])
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
dummy_tensor = torch.zeros_like(
    mrope_position_deltas_list[0])
logger.debug(f"[DEBUG] CUDA Graph MROPE Padding:")
```
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/pyexecutor/model_engine.py around lines 1524 to 1526,
replace the use of torch.empty_like when creating the dummy_tensor for MROPE
padding with torch.zeros_like so the tensor is initialized to zeros (a neutral
delta) instead of containing uninitialized memory; update the call and any
related variable naming/comments if needed to reflect that the padding uses
zeroed tensors to avoid propagating garbage values into MROPE computations.
Close as dup of PR-7122
Summary by CodeRabbit
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id
(OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test
(OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast
(OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test
(OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx"
(OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe"
(OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp"
(OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test
(OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test
(OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test
(OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge
(OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"
(OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log
(OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug
(OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
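For example, a typical invocation combining several of the options above (the GPU and backend values are just the ones used in the examples here):

```
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe" --test-backend "pytorch"
```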
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.