feat: Add support for disaggregation with pp with pytorch backend #6369

Conversation
📝 Walkthrough
This change set introduces enhanced support for pipeline and tensor parallelism configurations in disaggregated serving, updates internal resource management for KV cache heads, and improves integration test coverage for various parallelism scenarios. Additional error handling and logging are added, and several new YAML test configurations and integration test functions are introduced.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant TestSuite
participant LaunchDisaggLLM
participant ContextServer
participant GenerationServer
User->>TestSuite: Run parameterized test (with ctx_pp, ctx_tp, gen_pp, gen_tp)
TestSuite->>LaunchDisaggLLM: Launch servers with configs
LaunchDisaggLLM->>ContextServer: Start with ctx_pp, ctx_tp
LaunchDisaggLLM->>GenerationServer: Start with gen_pp, gen_tp
ContextServer-->>LaunchDisaggLLM: Ready
GenerationServer-->>LaunchDisaggLLM: Ready
TestSuite->>ContextServer: Send test requests
ContextServer->>GenerationServer: Transfer KV cache (if needed)
GenerationServer-->>TestSuite: Return generation results
TestSuite-->>User: Report test outcome
```

```mermaid
sequenceDiagram
participant ExecutorLoop
participant KVCacheTransceiver
participant ResourceManager
ExecutorLoop->>KVCacheTransceiver: Check transfer status (if enabled)
ExecutorLoop->>ResourceManager: Prepare resources for scheduled requests
ExecutorLoop->>KVCacheTransceiver: Prepare gen requests after transfer
ExecutorLoop->>KVCacheTransceiver: Send context cache asynchronously (if needed)
KVCacheTransceiver-->>ExecutorLoop: Status/confirmation
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes

Suggested labels

Suggested reviewers
Actionable comments posted: 1
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
682-682: Fix line length violation
Line exceeds the 120 character limit specified in the coding guidelines.
```diff
-            logger.warning(
-                "num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, may not have enough kvCache"
-            )
+            logger.warning(
+                "num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, "
+                "may not have enough kvCache"
+            )
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1 hunks)
- examples/disaggregated/disagg_config.yaml (1 hunks)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{cpp,h,hpp,cc,cxx}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.{cpp,h,hpp,cc,cxx}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #defines whenever possible, as the latter are not visible to the compiler.
A variable that is not modified after its initialization should be declared as const.
Except 0, nullptr, true, false, all other literals should only be used for variable initialization.
Use the Allman indentation style for braces in C++ code.
Put the semicolon for an empty for or while loop in a new line.
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements).
If and else should always be followed by brace-delimited statements, even if empty or a single statement.
C++ filenames should use camel case with first letter lowercase (e.g., thisIsAFilename.cpp), and must be case-insensitive unique within a compilation target.
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass).
Local variables, methods, and namespaces should use camel case with first letter lowercase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal).
Locally visible static variable should use camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;).
Class member variables should use camel case prefixed with an 'm' (e.g., mNbFooValues). Public member variables do not require the 'm' prefix but it is encouraged for clarity.
Enumerations, global constants, static constants at class-scope, and function-scope magic-number/literal constants are uppercase snakecase with prefix...
Files:
cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
**/*.{cpp,h,hpp,cc,cxx,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
tensorrt_llm/_torch/pyexecutor/py_executor.py
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.
Files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
682-682: Line too long (123 > 120)
(E501)
833-833: Local variable `ctx_transmission_reqs` is assigned to but never used. Remove assignment to unused variable `ctx_transmission_reqs`.
(F841)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (6)
cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1)
815-829: Good enhancement to layer compatibility validation
The updated logic correctly accounts for pipeline parallelism when comparing layer configurations between `selfConfig` and `destConfig`. This is a necessary improvement for disaggregated serving scenarios where the total effective layers (layers × pipeline parallelism) matter more than just the raw layer count.
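As a rough illustration of the rule this comment describes, a minimal sketch in Python follows; the function name and signature are hypothetical, and the actual check lives in C++ in cacheFormatter.cpp:

```python
# Minimal sketch, assuming each rank holds `num_layers` local layers:
# two configs are layer-compatible when their total effective layer
# counts (local layers x pipeline-parallel size) match.
def layers_compatible(self_num_layers: int, self_pp_size: int,
                      dest_num_layers: int, dest_pp_size: int) -> bool:
    """Return True when both configs describe the same total layer count."""
    return self_num_layers * self_pp_size == dest_num_layers * dest_pp_size
```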
tensorrt_llm/_torch/pyexecutor/py_executor.py (5)
123-126: LGTM: Addition of scheduled_ctx_reqs field
The new field properly tracks scheduled context requests in the pipeline parallel batch state, which is necessary for the disaggregated context cache handling.
660-662: Appropriate integration of KV cache transceiver check
The conditional check for disaggregated generation transfer status is correctly placed early in the executor loop to ensure timely status updates.
671-690: Well-structured handling of disaggregated generation requests
The code properly unpacks the new scheduling outputs and handles the case when no fitting requests are found. The warning message provides helpful debugging information.
726-730: Important sorting logic for disaggregated serving
The sorting of generation requests to place those without batch_idx first is critical for the assumptions in model_engine.py. The comment clearly explains why this is necessary.
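For illustration, such a stable sort could look roughly like this; the `py_batch_idx` attribute name is an assumption based on the review text, not a confirmed detail of the change:

```python
from typing import List

def sort_gen_requests(generation_requests: List) -> List:
    """Place generation requests without a batch index first.

    Python's sort is stable, so the relative order within each group
    is preserved, matching the ordering model_engine.py expects.
    """
    return sorted(generation_requests,
                  key=lambda req: int(req.py_batch_idx is not None))
```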
832-836: Correct async context cache transmission
The code properly sends disaggregated context cache when the KV cache transceiver is enabled and there are scheduled context requests. The static analysis warning about `ctx_transmission_reqs` being unused is a false positive; it's used within the conditional block.
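The guarded pattern in question presumably resembles the following sketch; method and attribute names are taken from the review discussion and may not match the code exactly:

```python
def finalize_previous_batch(executor, previous_batch):
    """Send context KV cache for the previous batch's scheduled context
    requests; the returned handles are consumed later when terminating
    requests whose transmission has finished."""
    if executor.kv_cache_transceiver and previous_batch.scheduled_ctx_reqs:
        ctx_transmission_reqs = executor.send_disagg_ctx_cache(
            previous_batch.scheduled_ctx_reqs)
        return ctx_transmission_reqs
    return []
```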
Actionable comments posted: 0
🧹 Nitpick comments (3)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (2)
158-161: Respect `layer_mask` when generating the total list
For the constant-heads case every layer, including those disabled by `layer_mask`, receives a non-zero entry. Setting masked-out layers to 0 would better reflect their absence and avoid inflating `total_num_kv_heads_per_layer`.
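A minimal sketch of the suggested change, assuming a boolean layer_mask indexed by layer (the surrounding constructor code is not shown in the review):

```python
def build_total_kv_heads(num_kv_heads: int, total_num_layers: int,
                         layer_mask=None) -> list:
    """Constant-heads case: masked-out layers contribute 0 heads, so the
    total list reflects which layers are actually present."""
    return [
        num_kv_heads if (layer_mask is None or layer_mask[i]) else 0
        for i in range(total_num_layers)
    ]
```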
165-184: Factor out duplicated list-population logic
`append_to_kv_heads_per_layer` is useful, but running two almost identical loops could be condensed into one parameterised helper to keep the constructor shorter.
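One possible shape for such a parameterised helper (the signature below is hypothetical):

```python
def append_kv_heads(dest: list, kv_heads, num_layers: int) -> None:
    """Append one entry per layer; kv_heads may be a constant int or a
    per-layer sequence, covering both of the near-identical loops."""
    for i in range(num_layers):
        dest.append(kv_heads if isinstance(kv_heads, int) else kv_heads[i])
```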
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
674-690: Fix line length violation and improve warning message.
The logic for handling KV cache transceiver scenarios is correct, but there are minor improvements needed:
- Line 682 exceeds the 120-character limit (flagged by static analysis)
- The warning message could be more descriptive
Apply this diff to fix the line length and improve the warning:
```diff
-            logger.warning(
-                "num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, may not have enough kvCache"
-            )
+            logger.warning(
+                "num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, "
+                "may not have enough kvCache"
+            )
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1 hunks)
- tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py (1 hunks)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (7 hunks)
- tensorrt_llm/_torch/pyexecutor/resource_manager.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
Files:
tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py
tensorrt_llm/_torch/pyexecutor/resource_manager.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py
tensorrt_llm/_torch/pyexecutor/resource_manager.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
682-682: Line too long (123 > 120)
(E501)
🔇 Additional comments (8)
tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py (1)
98-106: Confirm C++ expects global-layer head counts
`total_num_kv_heads_per_layer` now spans all layers, not just local PP layers. If the underlying `CacheTransceiverCpp` still indexes only the local subset, this will mis-align layer→pool offsets and blow up memory. Please double-check the C++ side or add an assertion verifying `len(total_num_kv_heads_per_layer) == world_config.total_num_layers_expected_by_cpp`.
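Such a guard could look roughly like the following; note that the reviewer's world_config.total_num_layers_expected_by_cpp is a placeholder, so this sketch derives the expected count from the PP size instead:

```python
def check_head_list_covers_all_layers(total_num_kv_heads_per_layer,
                                      num_local_layers: int,
                                      pp_size: int) -> None:
    """Fail fast if the head-count list does not span every layer
    across all pipeline-parallel ranks."""
    expected = num_local_layers * pp_size
    assert len(total_num_kv_heads_per_layer) == expected, (
        f"expected {expected} per-layer head counts, "
        f"got {len(total_num_kv_heads_per_layer)}")
```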
tensorrt_llm/_torch/pyexecutor/py_executor.py (7)
125-125: LGTM! Field addition follows dataclass conventions.
The new `scheduled_ctx_reqs` field is properly typed and has an appropriate default value for tracking scheduled context requests in pipeline parallel execution.
660-662: LGTM! Consistent with existing executor loop patterns.
The KV cache transceiver status check is appropriately placed and follows the same pattern used in other executor loop methods.
671-673: LGTM! Correctly handles enhanced scheduler output.
The unpacking of additional return values from the scheduler is consistent with the enhanced functionality for disaggregated generation initialization.
715-735: LGTM! Well-documented disaggregated serving logic.
The generation request preparation and sorting logic is correctly implemented with clear comments explaining the importance of request ordering for disaggregated serving. The stable sort ensures consistent behavior.
766-766: LGTM! Correctly populates the new field.
The `scheduled_ctx_reqs` field is properly populated with the scheduled context requests for later use in the pipeline.
832-836: LGTM! Correctly handles context cache transmission.
The logic appropriately sends disaggregated context cache when the KV cache transceiver is enabled and there are scheduled context requests from the previous batch.
845-847: LGTM! Provides necessary cleanup for transmitted context requests.
The logic correctly handles termination of context requests that have finished transmission, with appropriate conditional checks to ensure it only runs when the KV cache transceiver is enabled.
Actionable comments posted: 0
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
674-690: Fix line length violation and approve the logic structure
The conditional logic for disaggregated generation handling is well-structured with appropriate resource preparation and fallback behavior. However, line 682 exceeds the 120-character limit.
Apply this diff to fix the line length:
```diff
-            logger.warning(
-                "num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, may not have enough kvCache"
-            )
+            logger.warning(
+                "num_fitting_reqs=0 and fitting_disagg_gen_init_requests is empty, "
+                "may not have enough kvCache"
+            )
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (8 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
Files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
682-682: Line too long (123 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (10)
tensorrt_llm/_torch/pyexecutor/py_executor.py (10)
125-125: LGTM: Clean dataclass field addition
The new `scheduled_ctx_reqs` field properly tracks scheduled context requests for disaggregated generation support. The type annotation and default value are appropriate.
660-662: LGTM: Well-placed transfer status check
Checking disaggregated generation transfer status early in the executor loop is appropriate for updating request states before scheduling decisions. The conditional guard ensures this only runs when needed.
671-673: LGTM: Proper handling of extended scheduler interface
The modified `_schedule` call correctly unpacks the additional return values for disaggregated generation support. The variable names are descriptive and align with the intended functionality.
705-705: LGTM: Proper condition adjustment for transceiver support
The addition of `and not self.kv_cache_transceiver` correctly handles cases where no requests can be scheduled due to ongoing KV cache transfers, while maintaining the safety assertion for non-transceiver scenarios.
715-719: LGTM: Well-placed transmission completion handling
The preparation of generation requests that have completed KV cache transfer is properly placed in the execution flow and appropriately guarded by the transceiver check.
722-731: LGTM: Critical sorting for disaggregated serving
The stable sorting of generation requests based on `batch_idx` availability is essential for proper disaggregated serving functionality. The detailed comment explains the reasoning well, and the implementation correctly places requests without `batch_idx` first.
732-735: LGTM: Proper first token response handling
The first token response handling for the KV cache transceiver is appropriately placed and ensures timely client responses during disaggregated generation.
766-766: LGTM: Correct BatchStatePP instantiation
The addition of `scheduled_ctx_reqs=scheduled_batch.context_requests` properly initializes the new dataclass field with the appropriate context requests.
832-836: LGTM: Well-integrated context cache transmission
The disaggregated context cache sending is properly placed in the batch finalization stage and correctly handles the conditional logic for when scheduled context requests are available.
845-847: LGTM: Proper cleanup of finished context requests
The termination of finished context requests is appropriately placed and ensures proper cleanup of resources for contexts that have completed transmission.
821d063 to d480ab6 (Compare)
/bot run
PR_Github #13204 [ run ] triggered by Bot
Actionable comments posted: 2
🧹 Nitpick comments (2)
scripts/build_wheel.py (1)
68-77: Consider adding error handling for directory removal as well.
Good addition of error handling for file removal. However, for consistency and completeness, consider adding similar error handling for the `rmtree` call on line 72. Directories can also fail to be removed due to permissions or being in use.
Apply this diff to add error handling for directory removal:
```diff
 def clear_folder(folder_path):
     for item in os.listdir(folder_path):
         item_path = os.path.join(folder_path, item)
         if os.path.isdir(item_path) and not os.path.islink(item_path):
-            rmtree(item_path)
+            try:
+                rmtree(item_path)
+            except (OSError, IOError) as e:
+                print(f"Failed to remove directory {item_path}: {e}",
+                      file=sys.stderr)
         else:
             try:
                 os.remove(item_path)
             except (OSError, IOError) as e:
                 print(f"Failed to remove {item_path}: {e}", file=sys.stderr)
```

tests/integration/defs/disaggregated/test_disaggregated.py (1)
553-651: Consider refactoring to reduce code duplication.
While these test functions follow the existing pattern in the file, there's significant code duplication. The symlink setup code is identical across all five functions.
Consider extracting the common setup logic into a helper function:
```diff
+def _setup_model_symlink(llm_venv, llama_model_root):
+    """Helper to set up model symlinks for disaggregated tests."""
+    src_dst_dict = {
+        llama_model_root:
+        f"{llm_venv.get_working_directory()}/TinyLlama/TinyLlama-1.1B-Chat-v1.0",
+    }
+    for src, dst in src_dst_dict.items():
+        if not os.path.islink(dst):
+            os.makedirs(os.path.dirname(dst), exist_ok=True)
+            os.symlink(src, dst, target_is_directory=True)
+
 @pytest.mark.skip_less_device(4)
 @pytest.mark.parametrize("llama_model_root", ['TinyLlama-1.1B-Chat-v1.0'],
                          indirect=True)
 def test_disaggregated_ctxpp2_genpp2(disaggregated_test_root, llm_venv,
                                      disaggregated_example_root,
                                      llama_model_root):
-    src_dst_dict = {
-        llama_model_root:
-        f"{llm_venv.get_working_directory()}/TinyLlama/TinyLlama-1.1B-Chat-v1.0",
-    }
-    for src, dst in src_dst_dict.items():
-        if not os.path.islink(dst):
-            os.makedirs(os.path.dirname(dst), exist_ok=True)
-            os.symlink(src, dst, target_is_directory=True)
+    _setup_model_symlink(llm_venv, llama_model_root)
     run_disaggregated_test(disaggregated_example_root,
                            "ctxpp2_genpp2",
                            env=llm_venv._new_env,
                            cwd=llm_venv.get_working_directory())
```

This would reduce maintenance overhead while preserving the existing test structure.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1 hunks)
- scripts/build_wheel.py (1 hunks)
- tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py (1 hunks)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (9 hunks)
- tensorrt_llm/_torch/pyexecutor/resource_manager.py (1 hunks)
- tests/integration/defs/accuracy/accuracy_core.py (1 hunks)
- tests/integration/defs/accuracy/test_disaggregated_serving.py (4 hunks)
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp2_genpp2.yaml (1 hunks)
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp2_gentp2.yaml (1 hunks)
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp4_genpp4.yaml (1 hunks)
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxtp2_genpp2.yaml (1 hunks)
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxtp2pp2_gentp2pp2.yaml (1 hunks)
- tests/integration/defs/disaggregated/test_disaggregated.py (2 hunks)
✅ Files skipped from review due to trivial changes (5)
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp2_genpp2.yaml
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp2_gentp2.yaml
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxtp2pp2_gentp2pp2.yaml
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxtp2_genpp2.yaml
- tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp4_genpp4.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py
- tensorrt_llm/_torch/pyexecutor/resource_manager.py
- cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
🧰 Additional context used
🧠 Learnings (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
Learnt from: amitz-nv
PR: #5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks is_adapter_in_cpu_cache()
and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
Learnt from: yechank-nvidia
PR: #6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using from_shared_tensor()
is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call strip_for_generation()
to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
🧬 Code Graph Analysis (1)
tests/integration/defs/disaggregated/test_disaggregated.py (1)
tests/integration/defs/conftest.py (4)
- disaggregated_test_root (2335-2340)
- llm_venv (707-723)
- disaggregated_example_root (270-275)
- llama_model_root (964-1039)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
682-682: Line too long (123 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (9)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
661-663: Well-structured disaggregated serving integration.
The KV cache transceiver integration is well-implemented with proper lifecycle management:
- Early status checking for generation transfers
- Resource preparation for disaggregated generation initialization
- Proper handling of context cache sending and termination
- Clear separation of concerns with backward compatibility
The changes maintain the existing control flow while adding the necessary hooks for disaggregated serving.
Also applies to: 675-679, 715-719, 732-735, 832-835, 844-846
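Pieced together from the bullets above, the hook placement is roughly as follows; this is a schematic only, and the method names paraphrase the review text rather than the actual py_executor.py API:

```python
def executor_loop_iteration(executor, scheduler):
    """One schematic iteration showing where the transceiver hooks sit."""
    if executor.kv_cache_transceiver:
        # 1) refresh transfer status of disaggregated-gen requests early
        executor.check_disagg_gen_transfer_status()
    batch, disagg_gen_init_reqs, num_fitting_reqs = scheduler.schedule()
    if executor.kv_cache_transceiver:
        # 2) prepare generation requests whose KV cache has fully arrived
        executor.prepare_disagg_gen_requests(disagg_gen_init_reqs)
    executor.forward_and_sample(batch)  # forward step and sampling
    if executor.kv_cache_transceiver and batch.context_requests:
        # 3) send context KV cache asynchronously, then terminate context
        #    requests whose transmission has finished
        executor.send_disagg_ctx_cache(batch.context_requests)
        executor.terminate_ctx_finished_requests()
```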
tests/integration/defs/disaggregated/test_disaggregated.py (1)
62-71: LGTM! Well-structured parallelism configuration mappings.
The new configuration entries properly map parallelism scenarios to their required rank counts and config files. The naming convention clearly indicates the parallelism setup for context and generation servers.
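For illustration, such a mapping plausibly takes a shape like the following; the exact structure in test_disaggregated.py is an assumption, though the file names match the new configs and each side needs tp_size × pp_size ranks:

```python
# Scenario name -> (total ranks required, config file). For example,
# ctxtp2pp2_gentp2pp2 needs 2*2 context ranks + 2*2 generation ranks = 8.
CONFIG_MAP = {
    "ctxpp2_genpp2": (4, "disagg_config_ctxpp2_genpp2.yaml"),
    "ctxtp2_genpp2": (4, "disagg_config_ctxtp2_genpp2.yaml"),
    "ctxpp2_gentp2": (4, "disagg_config_ctxpp2_gentp2.yaml"),
    "ctxtp2pp2_gentp2pp2": (8, "disagg_config_ctxtp2pp2_gentp2pp2.yaml"),
    "ctxpp4_genpp4": (8, "disagg_config_ctxpp4_genpp4.yaml"),
}
```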
tests/integration/defs/accuracy/test_disaggregated_serving.py (7)
23-27: LGTM! Import additions support new test functionality.
The new imports are properly used in the added test methods and helper functions.
77-81: Good deprecation practice with clear guidance.
The warning appropriately notifies users about the preferred approach of using server-specific configurations instead of the unified parameter.
99-131: Excellent implementation of explicit parallelism configuration.
The changes properly extract parallelism parameters from server configs, calculate total GPU requirements, and assign non-overlapping CUDA device ranges. The explicit `--tp_size` and `--pp_size` arguments provide clear configuration.
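A rough sketch of the non-overlapping device assignment described above (the helper is illustrative, not the test suite's actual code):

```python
def assign_device_ranges(ctx_tp: int, ctx_pp: int, gen_tp: int, gen_pp: int):
    """Give context servers the first ctx_tp*ctx_pp GPUs and generation
    servers the next gen_tp*gen_pp GPUs, so the ranges never overlap."""
    ctx_gpus = ctx_tp * ctx_pp
    gen_gpus = gen_tp * gen_pp
    ctx_env = ",".join(str(i) for i in range(ctx_gpus))
    gen_env = ",".join(str(i) for i in range(ctx_gpus, ctx_gpus + gen_gpus))
    return {"CUDA_VISIBLE_DEVICES": ctx_env}, {"CUDA_VISIBLE_DEVICES": gen_env}
```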
337-384: Well-designed helper method for parallelism testing.
The method properly validates resource requirements, constructs consistent server configurations, and provides a clean interface for parallelism testing scenarios. The upfront device count check prevents resource-related test failures.
385-390: Good coverage of symmetric parallelism scenarios.
The test method covers important symmetric configurations with clear parameterization and meaningful test IDs.
392-398: Excellent coverage of realistic asymmetric parallelism scenarios.
The test focuses on practically relevant configurations where context servers use pipeline parallelism (optimal for prefill) and generation servers use tensor parallelism (optimal for decode). This aligns with real-world usage patterns.
62-71: All referenced YAML config files are present and correctly structured
I've confirmed that each of the following files exists under tests/integration/defs/disaggregated/test_configs/ and follows the expected schema (model, hostname, ports, backend, GPU settings, context_servers, generation_servers):
- disagg_config_ctxpp2_genpp2.yaml
- disagg_config_ctxtp2_genpp2.yaml
- disagg_config_ctxpp2_gentp2.yaml
- disagg_config_ctxtp2pp2_gentp2pp2.yaml
- disagg_config_ctxpp4_genpp4.yaml
No further changes needed here.
PR_Github #13204 [ run ] completed with state
/bot run
PR_Github #13220 [ run ] triggered by Bot
PR_Github #13220 [ run ] completed with state
@Tabrizian @Shixiaowei02 could you have a look when you have a chance? Thanks.
/bot run
PR_Github #13235 [ run ] triggered by Bot
PR_Github #13235 [ run ] completed with state
I have a commit on my branch that addresses the feedback. Despite accepting the invitation to @pcastonguay's branch, there may be some permission issue or something else, because I cannot push upstream to his branch. Perhaps the best option is for @pcastonguay to cherry-pick my commit?
I cherry-picked your changes
afa52cf to 26cf851 (Compare)
PR_Github #13505 [ run ] triggered by Bot
PR_Github #13505 [ run ] completed with state
/bot skip --comment "All tests passed after running manually"
PR_Github #13541 [ skip ] triggered by Bot
PR_Github #13541 [ skip ] completed with state
…IDIA#6369) Signed-off-by: Patrice Castonguay <[email protected]> Signed-off-by: raayandhar <[email protected]> Signed-off-by: Lizhi Zhou <[email protected]> Signed-off-by: pcastonguay <[email protected]> Co-authored-by: raayandhar <[email protected]> Co-authored-by: Lizhi Zhou <[email protected]> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: Lanyu Liao <[email protected]>
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores
Refactor
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
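For example, the flags above can be combined in a single invocation (the stage name here is only illustrative):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"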
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"
is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.