
Conversation


@MrGeva MrGeva commented Jul 29, 2025

added memory checks

changed kv cache test to be hw agnostic

added test with backend comparison

both tests pass

cleanups and refactoring

fixed llm_root issue

shrunk the model, and fixes

fixed trtllm-bench test stability

preserved the old test

set mem ratio to 0.3, fixed bug in default params

Summary by CodeRabbit

  • Tests
    • Enhanced and refactored the TensorRT-LLM benchmarking test suite for more comprehensive validation.
    • Added detailed parsing and validation of GPU memory metrics and performance results.
    • Introduced unified tests for backend and golden value performance comparisons.
    • Improved test reporting with dynamic configuration and detailed metric checks.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

Signed-off-by: Eran Geva <[email protected]>

added memory checks

Signed-off-by: Eran Geva <[email protected]>

changed kv cache test to be hw agnostic

Signed-off-by: Eran Geva <[email protected]>

added test with backend comparison

Signed-off-by: Eran Geva <[email protected]>

both tests pass

Signed-off-by: Eran Geva <[email protected]>

cleanups and refactoring

Signed-off-by: Eran Geva <[email protected]>

fixed llm_root issue

Signed-off-by: Eran Geva <[email protected]>

shrunk the model, and fixes

Signed-off-by: Eran Geva <[email protected]>

fixed trtllm-bench test stability

Signed-off-by: Eran Geva <[email protected]>

preserved the old test

Signed-off-by: Eran Geva <[email protected]>

set mem ratio to 0.3, fixed bug in default params

Signed-off-by: Eran Geva <[email protected]>

coderabbitai bot commented Jul 29, 2025

📝 Walkthrough


The TensorRT-LLM benchmarking test suite was extensively refactored. New utilities were introduced for running benchmarks as subprocesses, parsing and validating KV cache memory metrics, and comparing backend performance. The original test was updated for the new config format, and a comprehensive unified test was added for backend and golden performance comparisons. Additionally, the _TorchLLM._build_model method was modified to remove the fail_fast_on_attention_window_too_large argument from the executor configuration.

Changes

Cohort / File(s) Change Summary
Benchmark Test Refactor & Extensions
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
- Replaced simple benchmark invocation with run_benchmark to run benchmark as a subprocess and capture logs.
- Added parse_kv_cache_metrics to extract KV cache memory metrics from logs.
- Introduced calculate_expected_kv_cache_metrics for dynamic expected metric estimation using PyTorch CUDA APIs.
- Added validate_kv_cache_metrics_dynamic to validate parsed metrics against expected ranges.
- Added performance comparison utilities: compare_backends_performance, assert_performance_within_tolerance.
- Added unified test trtllm_bench_unified_comparison for backend and golden performance comparisons.
- Updated original test to write new config file extra_llm_api_options.yaml and removed max_batch_size key.
- Added new test test_trtllm_bench_backend_comparison.
- Multiple helper functions added for parsing, validation, and reporting.
Executor Configuration Update
tensorrt_llm/llmapi/llm.py
- Removed the fail_fast_on_attention_window_too_large argument from the tllm.ExecutorConfig constructor call in _TorchLLM._build_model.
- Commented out related lines that previously set this argument from self.args. No other logic changes were made.

Sequence Diagram(s)

sequenceDiagram
    participant TestRunner as Test Function
    participant TempDir as Temporary Directory
    participant Benchmark as Benchmark Subprocess
    participant Parser as Log Parser
    participant Validator as Metric Validator

    TestRunner->>TempDir: Prepare config & dataset files
    TestRunner->>Benchmark: Run benchmark subprocess with config
    Benchmark-->>TestRunner: Output logs (stdout, stderr)
    TestRunner->>Parser: Parse KV cache metrics from logs
    Parser-->>TestRunner: KV cache metrics
    TestRunner->>Validator: Calculate expected metrics (PyTorch CUDA APIs)
    Validator-->>TestRunner: Expected metric ranges
    TestRunner->>Validator: Validate parsed metrics within expected ranges
    Validator-->>TestRunner: Validation result
    TestRunner->>TestRunner: Compare performance (backend/golden)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • pcastonguay
  • litaotju
  • nv-guomingz
  • Funatiq
  • Superjomn



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4daae98 and 97d4b18.

📒 Files selected for processing (2)
  • tensorrt_llm/llmapi/llm.py (1 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without reflection.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tensorrt_llm/llmapi/llm.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tensorrt_llm/llmapi/llm.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-07-30T06:11:42.362Z
Learning: Applies to **/*.py : The code developed for TensorRT-LLM should conform to Python 3.8+.
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
Learnt from: yiqingy0
PR: NVIDIA/TensorRT-LLM#5198
File: jenkins/mergeWaiveList.py:0-0
Timestamp: 2025-07-22T08:33:49.109Z
Learning: In the TensorRT-LLM waive list merging system, removed lines are always located at the end of the merge waive lists, which is why the mergeWaiveList.py script uses reverse traversal - it's an optimization for this specific domain constraint.
tensorrt_llm/llmapi/llm.py (1)

Learnt from: moraxu
PR: #6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)

Learnt from: moraxu
PR: #6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-07-30T06:11:42.362Z
Learning: Applies to **/*.py : The code developed for TensorRT-LLM should conform to Python 3.8+.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (9)
tensorrt_llm/llmapi/llm.py (1)

935-936: Verify the impact of removing fail_fast_on_attention_window_too_large parameter.

The fail_fast_on_attention_window_too_large parameter has been commented out from the ExecutorConfig constructor. This change will cause the executor to use its default behavior instead of the explicitly configured value.

Ensure this change aligns with the PR's goal of making tests hardware agnostic and doesn't introduce unintended side effects in error handling.

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (8)

13-47: LGTM! Well-structured KV cache metrics parsing.

The function effectively uses regex patterns to extract metrics from log output and includes clear error handling for missing metrics. The new_cache_size calculation follows the documented formula from resize_kv_cache.
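
A minimal sketch of the regex-based approach described here; the log line formats are assumptions for illustration, not the real resize_kv_cache output:

import re

def parse_kv_cache_metrics(log_text):
    """Sketch: extract KV cache memory figures from captured benchmark logs."""
    # Assumed log line shapes; the real test matches the actual resize_kv_cache output.
    patterns = {
        "current_cache_size": r"current cache size[^0-9]*(\d+)",
        "free_mem_pre_mb": r"free memory before[^0-9]*(\d+)",
        "free_mem_post_mb": r"free memory after[^0-9]*(\d+)",
        "new_cache_size": r"new cache size[^0-9]*(\d+)",
    }
    metrics = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, log_text, re.IGNORECASE)
        if match is None:
            raise AssertionError(f"Required KV cache metric '{name}' not found in logs")
        metrics[name] = int(match.group(1))
    return metrics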


50-89: LGTM! Robust benchmark execution with proper error handling.

The function effectively builds command-line arguments dynamically based on backend type and includes comprehensive error handling. The subprocess execution and output capture are implemented correctly.

Also applies to: 92-135
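
A rough sketch of the subprocess pattern being praised here; the trtllm-bench flags and the backend name are assumptions for illustration rather than the exact command the test builds:

import os
import subprocess

def run_benchmark(model_name, dataset_path, work_dir, backend="_autodeploy", config_path=None):
    """Sketch: run trtllm-bench in a subprocess and capture all of its output."""
    cmd = [
        "trtllm-bench", "--model", model_name,
        "throughput", "--dataset", str(dataset_path), "--backend", backend,
    ]
    if config_path is not None:
        cmd += ["--extra_llm_api_options", str(config_path)]  # assumed flag name
    env = os.environ.copy()  # inherit CUDA-related environment variables
    result = subprocess.run(
        cmd, cwd=work_dir, env=env, capture_output=True, text=True, timeout=1800
    )
    if result.returncode != 0:
        raise RuntimeError(f"trtllm-bench failed:\n{result.stdout}\n{result.stderr}")
    return result.stdout + result.stderr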


138-227: LGTM! Well-designed performance comparison utilities.

Both functions implement robust tolerance checking with relative and absolute thresholds. The logic correctly allows performance improvements to pass unconditionally while failing only when performance degrades beyond acceptable limits.
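
A simplified sketch of that tolerance logic; the threshold values, the metric's units, and how the relative and absolute bounds are combined are illustrative assumptions:

def assert_performance_within_tolerance(measured, reference, rel_tol=0.10, abs_tol=5.0):
    """Sketch: fail only when throughput regresses beyond the allowed margins."""
    if measured >= reference:
        return  # improvements always pass
    shortfall = reference - measured
    within_abs = shortfall <= abs_tol
    within_rel = shortfall / reference <= rel_tol
    if not (within_abs or within_rel):
        raise AssertionError(
            f"Performance regression: {measured:.2f} vs reference {reference:.2f} "
            f"(drop {shortfall:.2f}, allowed {rel_tol:.0%} rel / {abs_tol} abs)"
        )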


265-367: LGTM! Comprehensive GPU memory analysis and validation.

The functions effectively use PyTorch CUDA APIs to determine expected memory ranges and validate KV cache metrics with appropriate tolerances. Error handling for missing dependencies is properly implemented.
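
A hardware-agnostic sketch of the idea: query the live GPU with PyTorch's CUDA APIs and derive expected ranges from it. The tolerance band is an assumption for illustration:

import torch

def calculate_expected_kv_cache_metrics(free_mem_ratio=0.3):
    """Sketch: compute expected KV cache memory ranges from the current GPU."""
    if not torch.cuda.is_available():
        raise RuntimeError("CUDA device required to derive expected KV cache metrics")
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    free_mb = free_bytes / (1024 ** 2)
    # The resized cache is expected to take roughly free_mem_ratio of post-load free memory.
    expected_cache_mb = free_mem_ratio * free_mb
    return {
        "total_gpu_mem_mb": total_bytes / (1024 ** 2),
        "expected_cache_mb": expected_cache_mb,
        "expected_cache_range_mb": (0.8 * expected_cache_mb, 1.2 * expected_cache_mb),  # assumed +/-20% band
    }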


369-381: LGTM! Clean metric extraction and formatting utilities.

The extract_performance_metric and print_kv_cache_metrics functions are well-implemented with clear validation and formatting logic.

Also applies to: 431-446
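
A minimal sketch of the extraction step; the report layout and the metric key are assumptions, not the exact schema trtllm-bench emits:

def extract_performance_metric(report_data, key="token_throughput"):
    """Sketch: pull a single throughput number out of the benchmark report."""
    performance = report_data.get("performance", report_data)
    value = performance.get(key)
    assert value is not None, f"Metric '{key}' missing from report; found {list(performance)}"
    assert value > 0, f"Metric '{key}' should be positive, got {value}"
    return float(value)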


448-582: LGTM! Comprehensive unified test function with flexible comparison modes.

The function effectively supports both backend and golden comparison modes with proper parameter configuration, YAML config generation, benchmark execution, and validation. The modular design makes it reusable for different testing scenarios.
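
A sketch of the config-generation step mentioned here; the option names written into extra_llm_api_options.yaml are assumptions for illustration:

import yaml
from pathlib import Path

def write_extra_llm_api_options(temp_dir, free_mem_ratio=0.3):
    """Sketch: emit the extra_llm_api_options.yaml that trtllm-bench is pointed at."""
    config = {
        "kv_cache_config": {"free_gpu_memory_fraction": free_mem_ratio},  # assumed option name
    }
    config_path = Path(temp_dir) / "extra_llm_api_options.yaml"
    with open(config_path, "w") as f:
        yaml.safe_dump(config, f)
    return config_path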


585-606: LGTM! Clean test functions with updated configuration format.

The tests properly use the new extra_llm_api_options.yaml configuration format and the new backend comparison test provides valuable additional coverage for performance validation.


1-607: Excellent comprehensive test suite for TensorRT-LLM benchmarking.

This file introduces a robust framework for:

  • KV cache memory metrics parsing and validation
  • Performance comparison between backends
  • Hardware-agnostic testing with dynamic GPU memory analysis
  • Unified test interface supporting multiple comparison modes

The implementation demonstrates good software engineering practices with proper error handling, modular design, and comprehensive validation. This aligns well with the PR objectives of enhancing testing coverage for performance and memory aspects.




MrGeva commented Jul 29, 2025

/bot run


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (3)

278-279: Track the extra memory consumption issue

Good that you've documented the extra memory consumption with a TODO. This appears to be a known issue that should be investigated.

Would you like me to open a GitHub issue to track investigating why there's 2500MB of extra memory consumption beyond the model size?


448-582: Consider refactoring this large function for better maintainability

This function is over 130 lines long and handles multiple responsibilities. Consider breaking it down into smaller, focused functions.

Suggested refactoring approach:

  1. Extract the autodeploy benchmark execution into a separate function
  2. Extract the backend comparison logic into its own function
  3. Extract the golden comparison logic into its own function

This would make the code more modular and easier to test/maintain.
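
One possible shape for that decomposition (names and signatures are hypothetical; bodies would reuse the existing helpers from the PR):

def _run_autodeploy_benchmark(model_name, dataset_path, temp_dir, free_mem_ratio):
    """Hypothetical helper: run the autodeploy benchmark and return (report_data, logs)."""
    raise NotImplementedError  # would wrap run_benchmark and report parsing

def _compare_against_backend(ad_throughput, model_name, dataset_path, temp_dir):
    """Hypothetical helper: run the reference backend and compare throughput."""
    raise NotImplementedError  # would wrap run_benchmark + compare_backends_performance

def _compare_against_golden(ad_throughput, golden_value):
    """Hypothetical helper: compare throughput against a stored golden value."""
    raise NotImplementedError  # would wrap assert_performance_within_tolerance

def trtllm_bench_unified_comparison(model_name, dataset_path, temp_dir,
                                    comparison_mode="backend", free_mem_ratio=0.3,
                                    golden_value=None):
    report_data, logs = _run_autodeploy_benchmark(model_name, dataset_path, temp_dir, free_mem_ratio)
    ad_throughput = extract_performance_metric(report_data)  # helper from the PR
    if comparison_mode == "backend":
        _compare_against_backend(ad_throughput, model_name, dataset_path, temp_dir)
    else:
        _compare_against_golden(ad_throughput, golden_value)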


585-602: Test doesn't validate benchmark results

This test only verifies that the benchmark runs without throwing exceptions. Consider adding assertions to validate the benchmark output.

Add result validation:

         dataset_path = prepare_dataset(llm_root, temp_dir, model_name)
-        run_benchmark(model_name, dataset_path, temp_dir)
+        result = run_benchmark(model_name, dataset_path, temp_dir, 
+                               report_json_path=f"{temp_dir}/report.json")
+        assert result is not None, "Benchmark should return report data"
+        assert "performance" in result, "Report should contain performance metrics"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7231134 and dd25bc0.

📒 Files selected for processing (1)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (4 hunks)

Comment on lines +90 to +91
import os


🛠️ Refactor suggestion

Move import to top of file

The os module should be imported at the top of the file with other imports, not inside the function.

Move this import to the top:

 import json
 import re
+import os
 import subprocess
 import tempfile
 from pathlib import Path

And remove it from here:

-    # Run benchmark as subprocess to capture ALL output
-    import os
-
     env = os.environ.copy()

Comment on lines +382 to +430
def validate_and_extract_kv_cache_metrics(report_data, free_mem_ratio, require_metrics=True):
    """
    Validate and extract KV cache metrics from report.

    Args:
        report_data: The benchmark report data
        free_mem_ratio: Free memory ratio for calculating expected metrics
        require_metrics: If True, fail when metrics are missing. If False, just warn.

    Returns:
        Tuple of (kv_cache_metrics, expected_metrics) or (None, None) if validation fails
    """
    required_metrics = [
        "current_cache_size",
        "free_mem_pre_mb",
        "free_mem_post_mb",
        "new_cache_size",
    ]
    result = runner.invoke(main, args, catch_exceptions=False)
    assert result.exit_code == 0

    # Extract KV cache metrics
    kv_cache_metrics = report_data.get("kv_cache_metrics", {})

    if not kv_cache_metrics:
        message = (
            "KV cache metrics not found! "
            "The autodeploy backend must log memory statistics for this test to pass. "
            f"Expected metrics: {', '.join(required_metrics)}"
        )
        if require_metrics:
            assert False, f"REQUIRED {message}"
        else:
            print(f"ℹ️ {message}")
            assert False, "KV cache metrics are missing"

    # Check for missing metrics
    missing_metrics = [metric for metric in required_metrics if metric not in kv_cache_metrics]

    if missing_metrics:
        message = (
            f"Missing required KV cache metrics: {missing_metrics}. "
            f"Found metrics: {list(kv_cache_metrics.keys())}. "
            f"All of {required_metrics} are required for the test to pass."
        )
        if require_metrics:
            assert False, message
        else:
            print(f"ℹ️ KV cache validation skipped - {message}")
            assert False, "KV cache metrics are missing"


⚠️ Potential issue

Fix inconsistent error handling logic

The require_metrics parameter suggests optional validation, but the function always fails with assert False regardless of this parameter's value. This makes the parameter misleading.

Either make the validation truly optional:

         if require_metrics:
             assert False, f"REQUIRED {message}"
         else:
             print(f"ℹ️ {message}")
-            assert False, "KV cache metrics are missing"
+            return None, None

Or remove the parameter if validation is always required:

-def validate_and_extract_kv_cache_metrics(report_data, free_mem_ratio, require_metrics=True):
+def validate_and_extract_kv_cache_metrics(report_data, free_mem_ratio):

Committable suggestion skipped: line range outside the PR's diff.


@tensorrt-cicd

PR_Github #13389 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #13389 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10014 completed with status: 'FAILURE'


MrGeva commented Jul 30, 2025

/bot run --reuse-test


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)

90-91: Duplicate issue: Move import to top of file.

This issue was already identified in previous reviews. The os module should be imported at the top of the file with other imports.


382-430: Duplicate issue: Fix inconsistent error handling logic.

This issue was already identified in previous reviews. The require_metrics parameter suggests optional validation, but the function always fails with assert False regardless of this parameter's value (lines 414 and 429).

🧹 Nitpick comments (2)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)

265-311: Well-designed dynamic metrics calculation.

The function properly queries actual GPU memory to calculate realistic expected ranges. The error handling for missing PyTorch/CUDA is appropriate.

Consider addressing the TODO comment about extra memory consumption (line 278-279) to improve accuracy of the estimates.


448-582: Consider refactoring for better modularity.

This function handles the complete benchmarking workflow effectively, but it's quite large and has multiple responsibilities. The logic is sound and error handling is appropriate.

Consider breaking this into smaller functions:

  • A function for config file preparation
  • A function for benchmark execution and report generation
  • A function for metric validation and comparison

This would improve maintainability and testability while preserving the current functionality.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dd25bc0 and 4daae98.

📒 Files selected for processing (1)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (4 hunks)
🔇 Additional comments (7)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (7)

1-11: Import structure looks good.

The imports are properly organized at the top of the file and follow standard Python conventions.


13-47: Well-implemented parsing function.

The function properly extracts KV cache metrics using regex patterns, includes appropriate error handling, and provides clear feedback through print statements. The calculation logic for new_cache_size follows the expected formula from the resize_kv_cache implementation.


138-184: Solid performance comparison logic.

The function correctly implements performance comparison with both relative and absolute tolerances. The logic properly handles cases where the autodeploy backend performs better, and provides comprehensive error messages for debugging.


187-227: Good performance validation logic.

The function correctly implements golden reference comparison, allowing performance improvements to pass unconditionally while enforcing tolerances for regressions. The error messages provide all necessary debugging information.


314-366: Comprehensive validation logic.

The function implements thorough validation of KV cache metrics with appropriate tolerances for different metric types. The validation includes both absolute values and relationships between metrics.
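
A compact sketch of the kind of relationship checks described; the metric names match those listed earlier in this review, while the tolerances and the byte/MB conversion are assumptions for illustration:

def validate_kv_cache_metrics_dynamic(metrics, expected, tolerance=0.2):
    """Sketch: check absolute ranges and relationships between KV cache metrics."""
    # Memory should be consumed between the pre- and post-allocation measurements.
    assert metrics["free_mem_post_mb"] <= metrics["free_mem_pre_mb"], (
        "Free memory should not increase after the KV cache is allocated"
    )
    # The resized cache should be at least as large as the initial cache.
    assert metrics["new_cache_size"] >= metrics["current_cache_size"], (
        "Resized KV cache should not shrink below the initial cache size"
    )
    # The resized cache should land inside the hardware-derived expected band.
    low, high = expected["expected_cache_range_mb"]
    new_cache_mb = metrics["new_cache_size"] / (1024 ** 2)  # assumes the log reports bytes
    assert low * (1 - tolerance) <= new_cache_mb <= high * (1 + tolerance), (
        f"Resized cache {new_cache_mb:.0f} MB outside expected range [{low:.0f}, {high:.0f}] MB"
    )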


585-601: Good simplification of the test function.

The updated function properly uses the new config file format and leverages the new utility functions. The changes align well with the overall refactoring effort.


604-606: Clean test implementation.

The new test function provides a clear entry point for backend comparison testing and properly delegates to the unified comparison function.

@tensorrt-cicd

PR_Github #13536 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #13536 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10145 completed with status: 'FAILURE'

Signed-off-by: Eran Geva <[email protected]>
@MrGeva MrGeva requested a review from a team as a code owner July 30, 2025 14:55
@MrGeva MrGeva requested a review from juney-nvidia July 30, 2025 14:55

MrGeva commented Jul 30, 2025

/bot run --stage-list "A30-PyTorch-2"

@tensorrt-cicd

PR_Github #13552 [ run ] triggered by Bot

@Funatiq Funatiq removed their request for review July 30, 2025 15:06
@tensorrt-cicd

PR_Github #13552 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10157 (Partly Tested) completed with status: 'FAILURE'

@MrGeva MrGeva closed this Jul 31, 2025