[TRTLLM-8031][feat] Add chunked return_generation_logits logic #7831
base: main
Conversation
Force-pushed from 46bbc9d to b20f62d
/bot run
PR_Github #19120 [ run ] triggered by Bot
PR_Github #19120 [ run ] completed with state
Force-pushed from 81e9d5c to 9458741
/bot run
PR_Github #19330 [ run ] triggered by Bot
PR_Github #19330 [ run ] completed with state
/bot run --disable-fail-fast
📝 Walkthrough
Adds optional chunked logits transfer across request/result pathways. LogitsStorage supports device-side fragment buffering, chunked host transfers, and finalization. PyResult and LlmRequest propagate configuration and expose post-processing finalize. handle_logits now triggers finalize transfer for all requests in chunked mode. Comprehensive tests added.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Executor
    participant PyResult
    participant LogitsStorage as LogitsStorage (ctx/gen)
    Client->>Executor: Submit LlmRequest(use_chunked_logits, chunk_size)
    Executor->>PyResult: Create (propagate chunk settings)
    note right of PyResult: Initializes LogitsStorage for context/generation
    loop Token steps
        Executor->>LogitsStorage: _add_fragment(logits) (device)
        alt chunk full
            LogitsStorage->>LogitsStorage: _transfer_chunk_to_host()
            note right of LogitsStorage: Merge device fragments<br/>Transfer to host<br/>Update indices/position
        else not full
            note right of LogitsStorage: Buffer fragments on device
        end
    end
    Executor->>PyResult: post_processing_transfer()
    PyResult->>LogitsStorage: finalize_transfer() for ctx/gen
    note right of LogitsStorage: Flush remaining fragments to host
    Client->>PyResult: Access context_logits / generation_logits
    PyResult-->>Client: Return host-side tensors (if any)
```
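To make the flow above concrete, here is a minimal, self-contained sketch of the chunked-storage pattern the walkthrough describes. It reuses the names from the diagram (`_add_fragment`, `_transfer_chunk_to_host`, `finalize_transfer`, `_device_fragments`), but it is an illustration only, not the actual LogitsStorage implementation; buffer sizing, index tracking, and stream synchronization in the real class may differ.

```python
from typing import List, Optional

import torch


class ChunkedLogitsStorageSketch:
    """Illustrative sketch: buffer logits fragments on device, flush to host in chunks."""

    def __init__(self, seq_length: int, chunk_size: int = 8):
        self.seq_length = seq_length
        self.chunk_size = chunk_size
        self._device_fragments: List[torch.Tensor] = []  # pending device-side tensors
        self._storage: Optional[torch.Tensor] = None  # pinned host buffer, lazily allocated
        self._position = 0  # next free row in the host buffer

    def _add_fragment(self, logits: torch.Tensor) -> None:
        # Keep the fragment on device; flush once chunk_size fragments accumulate.
        self._device_fragments.append(logits)
        if len(self._device_fragments) >= self.chunk_size:
            self._transfer_chunk_to_host()

    def _transfer_chunk_to_host(self) -> None:
        if not self._device_fragments:
            return
        merged = torch.cat(self._device_fragments, dim=0)  # merge device fragments
        if self._storage is None:
            self._storage = torch.empty((self.seq_length, merged.size(-1)),
                                        dtype=merged.dtype,
                                        pin_memory=True)
        end = self._position + merged.size(0)
        # One device-to-host copy per chunk instead of one per token step.
        self._storage[self._position:end].copy_(merged, non_blocking=True)
        self._position = end
        self._device_fragments.clear()

    def finalize_transfer(self) -> None:
        # Flush whatever is still buffered before results are consumed.
        self._transfer_chunk_to_host()
```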
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests
Tip
👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs. Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/pyexecutor/llm_request.py (2)
526-526: Missing return statement for create_child_request.
The method modifies self.child_requests but doesn't return the created py_request. This could be confusing for callers expecting the child request object.

```diff
 self.child_requests.append(py_request)
+return py_request
```
564-657: Add support for chunked logits parameters in executor_request_to_llm_request.
The function doesn't extract use_chunked_logits and logits_chunk_size from the executor request, which means these parameters can't be configured per-request from the executor API. This limits the flexibility of the chunked logits feature.

```diff
 def executor_request_to_llm_request(
         req_id: int,
         executor_request: ExecutorRequest,
         child_req_ids: List[int],
         exclude_last_generation_logits: bool,
         input_token_ids: Optional[List] = None) -> LlmRequest:
     executor_sampling_config = executor_request.sampling_config
     sampling_config = SamplingConfig(executor_sampling_config)
     # ... existing code ...
+    # Extract chunked logits parameters if present
+    use_chunked_logits = getattr(executor_request, "use_chunked_logits", True)
+    logits_chunk_size = getattr(executor_request, "logits_chunk_size", 8)
     llm_request = LlmRequest(
         request_id=req_id,
         # ... existing parameters ...
+        use_chunked_logits=use_chunked_logits,
+        logits_chunk_size=logits_chunk_size,
         py_multimodal_data=getattr(executor_request, "py_multimodal_data", None))
```
🧹 Nitpick comments (4)
tests/unittest/_torch/executor/test_chunked_logits.py (2)
159-160: Strengthen assertion with explicit storage data check.
The test only verifies that storage is allocated but doesn't check if the logits were actually copied correctly.

```diff
 # Should have storage allocated
 assert storage._storage is not None
 assert len(storage._logits_indices) == 1
 assert storage._logits_indices[0] == (0, 1)
+# Verify the logits were copied correctly
+assert torch.allclose(storage._storage[0:1], sample_logits)
```
846-852: Add proper conditional import for optional dependency.
The psutil import could fail if the package is not installed. Consider making this test conditional or handling the import error gracefully.

```diff
 def test_memory_usage_comparison(self, sample_logits):
     """Test memory usage comparison between chunked and non-chunked modes"""
     import os
-
-    import psutil
+
+    try:
+        import psutil
+    except ImportError:
+        pytest.skip("psutil not available for memory usage test")
```

tensorrt_llm/_torch/pyexecutor/llm_request.py (2)
158-161: Remove or explain commented-out code.
The commented-out code for initializing storage should either be removed or have a clear explanation of why it's kept.

```diff
 # Allocate host storage if needed
 assert self._storage is not None, "Storage should be initialized"
-# if self._storage is None:
-#     self._init(self._device_fragments[0])
```
120-123: Consider moving long error message to a variable.
While not critical, static analysis suggests avoiding long messages directly in exception constructors for better maintainability.

```diff
+overflow_msg = (
+    f"LogitsStorage overflow. This storage can only hold {self.seq_length} logits "
+    f"({position} already filled) but trying to append {logits.size(0)} more logits"
+)
 raise ValueError(
-    f"LogitsStorage overflow. This storage can only hold {self.seq_length} logits "
-    f"({position} already filled) but trying to append {logits.size(0)} more logits"
+    overflow_msg
 )
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
tensorrt_llm/_torch/pyexecutor/handle_logits.py (1 hunks)
tensorrt_llm/_torch/pyexecutor/llm_request.py (7 hunks)
tests/unittest/_torch/executor/test_chunked_logits.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
tensorrt_llm/_torch/pyexecutor/handle_logits.py
tensorrt_llm/_torch/pyexecutor/llm_request.py
tests/unittest/_torch/executor/test_chunked_logits.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
tensorrt_llm/_torch/pyexecutor/handle_logits.py
tensorrt_llm/_torch/pyexecutor/llm_request.py
tests/unittest/_torch/executor/test_chunked_logits.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
tensorrt_llm/_torch/pyexecutor/handle_logits.py
tensorrt_llm/_torch/pyexecutor/llm_request.py
tests/unittest/_torch/executor/test_chunked_logits.py
🧠 Learnings (2)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/unittest/_torch/executor/test_chunked_logits.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
PR: NVIDIA/TensorRT-LLM#7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/unittest/_torch/executor/test_chunked_logits.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/pyexecutor/handle_logits.py (1)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1): post_processing_transfer (278-283)
tests/unittest/_torch/executor/test_chunked_logits.py (1)
tensorrt_llm/_torch/pyexecutor/llm_request.py (13): LlmRequest (371-524), LogitsStorage (42-182), PyResult (227-324), executor_request_to_llm_request (564-657), finalize_transfer (175-179), set_exclude_last (181-182), append_context_logits (260-262), append_generation_logits (264-266), post_processing_transfer (278-283), generation_logits (303-312), log_probs (315-316), cum_log_probs (319-320), create_child_request (500-524)
🪛 Ruff (0.13.1)
tensorrt_llm/_torch/pyexecutor/llm_request.py
120-123: Avoid specifying long messages outside the exception class (TRY003)
tests/unittest/_torch/executor/test_chunked_logits.py
1-1: Shebang is present but file is not executable (EXE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/_torch/pyexecutor/handle_logits.py (1)
83-87: LGTM! Chunked logits finalization correctly implemented.
The post-processing step appropriately finalizes pending logits transfers for requests using chunked mode, ensuring all accumulated device fragments are transferred to host memory before the function completes.
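Concretely, the finalization step being reviewed amounts to a loop of roughly this shape. This is a hedged sketch based only on the names visible in this review (post_processing_transfer from llm_request.py and the py_result attribute pattern), not the verbatim handle_logits code:

```python
def finalize_chunked_transfers(context_requests, generation_requests):
    """Sketch: flush buffered device fragments for all requests in chunked mode."""
    for llm_req in context_requests + generation_requests:
        # post_processing_transfer() invokes finalize_transfer() on the
        # context and generation LogitsStorage instances (see code graph above).
        llm_req.py_result.post_processing_transfer()
```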
tensorrt_llm/_torch/pyexecutor/llm_request.py (2)
51-51: Add comment about TRT backend logic as suggested in past review.
As tijyojwad noted in a previous review, this logic is borrowed from the TRT backend approach.

```diff
-): # logic adpted from HandleGenerationLogits.cpp to use chunked transfer
+): # Logic adapted from HandleGenerationLogits.cpp (TRT backend) to use chunked transfer
```
59-60: Consider simplifying streaming logic per past review suggestion.
As tijyojwad suggested in a previous review, setting chunk_size=1 when streaming mode is on could simplify downstream logic. The streaming mode logic could be simplified in the constructor:

```diff
 self.use_chunked_logits = use_chunked_logits
-self.chunk_size = chunk_size
+# In streaming mode, use chunk_size=1 for immediate transfers
+self.chunk_size = 1 if streaming else chunk_size
```

Then update the fragment transfer logic to rely solely on chunk_size:

```diff
-# Streaming mode: transfer immediately after each fragment (self.chunk_size=1).
-# Non-streaming mode: batch transfer every chunk_size steps.
+# Transfer when we've accumulated chunk_size fragments
 if len(self._device_fragments) == self.chunk_size:
     self._transfer_chunk_to_host()
```

Note: the test file assumes a streaming parameter exists, but it's not in the actual implementation. If you decide to add streaming support, this refactor would make the code cleaner.
PR_Github #19706 [ run ] triggered by Bot
PR_Github #19706 [ run ] completed with state
#7580 just got merged, where I touched similar files, but the changes shouldn't conflict much. JFYI. Thanks!
Force-pushed from b81bd97 to 572ae3f
PR_Github #19983 [ run ] completed with state
LGTM. Minor comments.
Force-pushed from b66fb5e to f85e1aa
/bot run
PR_Github #20038 [ run ] triggered by Bot
PR_Github #20038 [ run ] completed with state
/bot run
PR_Github #20055 [ run ] triggered by Bot
PR_Github #20055 [ run ] completed with state
Rubber stamp given other approvals
Signed-off-by: Yibin Li <[email protected]>
Force-pushed from f85e1aa to bed2fa7
/bot run
PR_Github #20121 [ run ] triggered by Bot
PR_Github #20121 [ run ] completed with state
Summary by CodeRabbit
New Features
Tests
Description
Context logits handling is unchanged. This MR only affects generation logits storage in LlmRequest.
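For context, a typical way to exercise this path from the LLM API might look like the following. This is a minimal sketch: the model path is a placeholder, and the parameter and attribute names (return_generation_logits, generation_logits) follow the public SamplingParams/output API rather than anything introduced by this PR, so treat them as assumptions.

```python
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="/path/to/model")  # placeholder model path
params = SamplingParams(max_tokens=32, return_generation_logits=True)
outputs = llm.generate(["Hello, world"], params)

# With chunked transfer, generation logits are staged to host in chunks
# during decoding instead of in a single transfer at the end.
logits = outputs[0].outputs[0].generation_logits
print(logits.shape)  # roughly (num_generated_tokens, vocab_size) for a single beam
```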
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
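For example, typical invocations combining the flags above might look like the following (the flag combinations are illustrative; the first two appear verbatim in this conversation):

/bot run
/bot run --disable-fail-fast
/bot run --stage-list "A10-PyTorch-1" --gpu-type "H100_PCIe"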
kill
Kill all running builds associated with the pull request.
skip --comment COMMENT
Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.