chore: remove unused variables in pyexecutor #6280
Conversation
Signed-off-by: junq <[email protected]>
Walkthrough
This change removes the unused self.has_context_request attribute and the logic that depended on it from tensorrt_llm/_torch/pyexecutor/py_executor.py.

Changes
Sequence Diagram(s)
No sequence diagram generated, as the changes are limited to attribute and logic removal without altering control flow.

Estimated code review effort
🎯 1 (Trivial) | ⏱️ ~2 minutes

Possibly related PRs
Suggested reviewers
Poem
/bot run
PR_Github #12662 [ run ] triggered by Bot
PR_Github #12662 [ run ] completed with state
/bot run
PR_Github #12668 [ run ] triggered by Bot
PR_Github #12668 [ run ] completed with state
/bot run
PR_Github #12702 [ run ] triggered by Bot
PR_Github #12702 [ run ] completed with state
LGTM
Signed-off-by: junq <[email protected]>
/bot skip --comment "skip for code comment changes"
Actionable comments posted: 0
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
1152-1152: Documentation improvement enhances clarity.

The updated docstring accurately reflects that this method specifically pads with generation dummy requests, which aligns with the hardcoded is_gen=True parameter. However, the line exceeds the 120-character limit.

Consider wrapping the docstring to stay within line length limits:

- Pad with a generation dummy request, if required, to ensure every attention_dp rank has at least one active request.
+ Pad with a generation dummy request, if required, to ensure every
+ attention_dp rank has at least one active request.
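For context, a minimal sketch of the kind of padding helper this docstring describes, assuming a hypothetical method name, signature, and dummy-request helper; the actual code in py_executor.py may differ:

```python
# Hypothetical sketch -- method and helper names are illustrative, not the real PyExecutor API.
def _pad_attention_dp_with_gen_dummy(self, active_requests: list) -> list:
    """
    Pad with a generation dummy request, if required, to ensure every
    attention_dp rank has at least one active request.
    """
    if not active_requests:
        # The dummy is always a generation-phase request (is_gen=True);
        # context-request tracking is no longer consulted here.
        active_requests.append(self._create_dummy_request(is_gen=True))
    return active_requests
```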
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
Learnt from: yechank-nvidia
PR: #6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using from_shared_tensor() is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call strip_for_generation() to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
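A rough illustration of the pattern that learning describes; the request fields, handle object, and call shapes below are assumptions, not the actual model_engine.py code:

```python
# Illustrative only -- attribute and method names are assumed for this sketch.
def resolve_multimodal_data(request, shared_handle):
    if request.is_context_phase:
        # Context phase: recover the tensor payload from the shared handle once.
        request.multimodal_data = shared_handle.from_shared_tensor()
    else:
        # Generation phase: tensors were already recovered during context, so only
        # strip the multimodal data generation no longer needs, keeping those tensors.
        request.multimodal_data = request.multimodal_data.strip_for_generation()
    return request
```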
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
1152-1152: Line too long (124 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
1170-1171: LGTM! Simplified logic removes unused variable dependency.

The change from conditional flags based on self.has_context_request to unconditional True values correctly reflects the removal of context request tracking. This simplification aligns with the method's purpose of always creating generation dummy requests for attention DP padding.
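A hedged sketch of the simplification being praised here; the flag names are assumptions, and only the former dependence on self.has_context_request is taken from the review:

```python
# Sketch only -- flag names are hypothetical; the point is that the values no
# longer depend on self.has_context_request.

# Before (roughly): flags were computed from self.has_context_request.
# After: the padding dummy is always a generation request, so the flags are constant.
is_gen = True
prepare_resources = True
```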
PR_Github #12765 [ skip ] triggered by Bot
PR_Github #12765 [ skip ] completed with state
Signed-off-by: junq <[email protected]>
Signed-off-by: Shreyas Misra <[email protected]>

Signed-off-by: junq <[email protected]>
Signed-off-by: Ransiki Zhang <[email protected]>

Signed-off-by: junq <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>
Summary by CodeRabbit