added AD perf and mem test #6456
Conversation
Signed-off-by: Eran Geva <[email protected]>
added memory checks (Signed-off-by: Eran Geva <[email protected]>)
changed kv cache test to be hw agnostic (Signed-off-by: Eran Geva <[email protected]>)
added test with backend comparison (Signed-off-by: Eran Geva <[email protected]>)
both tests pass (Signed-off-by: Eran Geva <[email protected]>)
cleanups and refactoring (Signed-off-by: Eran Geva <[email protected]>)
fixed llm_root issue (Signed-off-by: Eran Geva <[email protected]>)
shrunk the model, and fixes (Signed-off-by: Eran Geva <[email protected]>)
fixed trtllm-bench test stability (Signed-off-by: Eran Geva <[email protected]>)
preserved the old test (Signed-off-by: Eran Geva <[email protected]>)
set mem ratio to 0.3, fixed bug in default params (Signed-off-by: Eran Geva <[email protected]>)
📝 Walkthrough
The TensorRT-LLM benchmarking test suite was extensively refactored. New utilities were introduced for running benchmarks as subprocesses, parsing and validating KV cache memory metrics, and comparing backend performance. The original test was updated for the new config format, and a comprehensive unified test was added for backend and golden performance comparisons. Additionally, the
Changes
Sequence Diagram(s)
sequenceDiagram
participant TestRunner as Test Function
participant TempDir as Temporary Directory
participant Benchmark as Benchmark Subprocess
participant Parser as Log Parser
participant Validator as Metric Validator
TestRunner->>TempDir: Prepare config & dataset files
TestRunner->>Benchmark: Run benchmark subprocess with config
Benchmark-->>TestRunner: Output logs (stdout, stderr)
TestRunner->>Parser: Parse KV cache metrics from logs
Parser-->>TestRunner: KV cache metrics
TestRunner->>Validator: Calculate expected metrics (PyTorch CUDA APIs)
Validator-->>TestRunner: Expected metric ranges
TestRunner->>Validator: Validate parsed metrics within expected ranges
Validator-->>TestRunner: Validation result
TestRunner->>TestRunner: Compare performance (backend/golden)
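For context, here is a minimal sketch of what such a flow can look like in a test. The benchmark command, helper names, and log format below are illustrative assumptions, not the PR's actual implementation.

```python
# Illustrative sketch only: the benchmark command and the log format are assumed.
import os
import re
import subprocess
from typing import Dict, List


def run_benchmark_subprocess(cmd: List[str]) -> str:
    """Run the benchmark in a subprocess and return its combined stdout/stderr."""
    env = os.environ.copy()  # pass the current environment through to the child process
    proc = subprocess.run(cmd, env=env, capture_output=True, text=True, check=True)
    return proc.stdout + proc.stderr


def parse_kv_cache_metrics(log_text: str) -> Dict[str, float]:
    """Extract KV cache memory figures (in MB) from benchmark logs via regex."""
    metrics = {}
    for name in ("current_cache_size", "free_mem_pre_mb",
                 "free_mem_post_mb", "new_cache_size"):
        match = re.search(rf"{name}\s*[:=]\s*([\d.]+)", log_text)
        if match:
            metrics[name] = float(match.group(1))
    return metrics
```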
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested reviewers
Note: ⚡️ Unit Test Generation is now available in beta! Learn more here, or try it out under "Finishing Touches" below.
📜 Recent review details
Configuration used: .coderabbit.yaml
📒 Files selected for processing (2)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
Files:
**/*.{cpp,h,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
Files:
🧠 Learnings (3)
📓 Common learnings
tensorrt_llm/llmapi/llm.py (1)
Learnt from: moraxu
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)
Learnt from: moraxu
Learnt from: CR
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
🔇 Additional comments (9)
✨ Finishing Touches
🧪 Generate unit tests
🪧 Tips
Chat
There are 3 ways to chat with CodeRabbit:
Support
Need help? Create a ticket on our support page for assistance with any issues or questions. Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
Documentation and Community
/bot run
Actionable comments posted: 2
🧹 Nitpick comments (3)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (3)
278-279: Track the extra memory consumption issue
Good that you've documented the extra memory consumption with a TODO. This appears to be a known issue that should be investigated.
Would you like me to open a GitHub issue to track investigating why there's 2500MB of extra memory consumption beyond the model size?
448-582: Consider refactoring this large function for better maintainability
This function is over 130 lines long and handles multiple responsibilities. Consider breaking it down into smaller, focused functions.
Suggested refactoring approach:
- Extract the autodeploy benchmark execution into a separate function
- Extract the backend comparison logic into its own function
- Extract the golden comparison logic into its own function
This would make the code more modular and easier to test and maintain; a rough sketch of the split is shown below.
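A sketch of such a split, with hypothetical function names and signatures (not the PR's actual API):

```python
# Hypothetical decomposition of the large test function; names are illustrative.
def run_autodeploy_benchmark(model_name, dataset_path, temp_dir):
    """Run the autodeploy benchmark subprocess and return its parsed report."""
    raise NotImplementedError


def run_reference_benchmark(model_name, dataset_path, temp_dir):
    """Run the reference backend benchmark and return its parsed report."""
    raise NotImplementedError


def compare_backends(ad_report, ref_report):
    """Check autodeploy throughput against the reference backend within tolerances."""
    raise NotImplementedError


def compare_to_golden(ad_report):
    """Check autodeploy throughput against stored golden numbers."""
    raise NotImplementedError


def test_perf_comparison(model_name, dataset_path, temp_dir):
    """Thin orchestrator replacing the single 130+ line function."""
    ad_report = run_autodeploy_benchmark(model_name, dataset_path, temp_dir)
    ref_report = run_reference_benchmark(model_name, dataset_path, temp_dir)
    compare_backends(ad_report, ref_report)
    compare_to_golden(ad_report)
```

Each helper can then be exercised and debugged on its own, which is the maintainability gain this comment is after.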
585-602: Test doesn't validate benchmark results
This test only verifies that the benchmark runs without throwing exceptions. Consider adding assertions to validate the benchmark output.
Add result validation:
  dataset_path = prepare_dataset(llm_root, temp_dir, model_name)
- run_benchmark(model_name, dataset_path, temp_dir)
+ result = run_benchmark(model_name, dataset_path, temp_dir,
+                        report_json_path=f"{temp_dir}/report.json")
+ assert result is not None, "Benchmark should return report data"
+ assert "performance" in result, "Report should contain performance metrics"
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.py
: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile = ...).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL = ...).
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a class in the constructor in Python.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-07-29T08:25:52.877Z
Learning: Applies to **/*.py : The code developed for TensorRT-LLM should conform to Python 3.8+.
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
Learnt from: yiqingy0
PR: NVIDIA/TensorRT-LLM#5198
File: jenkins/mergeWaiveList.py:0-0
Timestamp: 2025-07-22T08:33:49.109Z
Learning: In the TensorRT-LLM waive list merging system, removed lines are always located at the end of the merge waive lists, which is why the mergeWaiveList.py script uses reverse traversal - it's an optimization for this specific domain constraint.
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)
Learnt from: moraxu
PR: #6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-07-29T08:25:52.877Z
Learning: Applies to **/*.py : The code developed for TensorRT-LLM should conform to Python 3.8+.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
import os
🛠️ Refactor suggestion
Move import to top of file
The os module should be imported at the top of the file with other imports, not inside the function.
Move this import to the top:
import json
import re
+import os
import subprocess
import tempfile
from pathlib import Path
And remove it from here:
- # Run benchmark as subprocess to capture ALL output
- import os
-
env = os.environ.copy()
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import os
# At the top of the file, update imports:
import json
import re
import os
import subprocess
import tempfile
from pathlib import Path

import os
# Inside run_benchmark (remove the mid-function import):
# Run benchmark as subprocess to capture ALL output
env = os.environ.copy()
🤖 Prompt for AI Agents
In tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
around lines 90 to 91, the import statement for the os module is inside a
function. Move the import statement to the top of the file with the other
imports and remove it from inside the function to follow standard Python import
conventions.
def validate_and_extract_kv_cache_metrics(report_data, free_mem_ratio, require_metrics=True):
    """
    Validate and extract KV cache metrics from report.

    Args:
        report_data: The benchmark report data
        free_mem_ratio: Free memory ratio for calculating expected metrics
        require_metrics: If True, fail when metrics are missing. If False, just warn.

    Returns:
        Tuple of (kv_cache_metrics, expected_metrics) or (None, None) if validation fails
    """
    required_metrics = [
        "current_cache_size",
        "free_mem_pre_mb",
        "free_mem_post_mb",
        "new_cache_size",
    ]
    result = runner.invoke(main, args, catch_exceptions=False)
    assert result.exit_code == 0

    # Extract KV cache metrics
    kv_cache_metrics = report_data.get("kv_cache_metrics", {})

    if not kv_cache_metrics:
        message = (
            "KV cache metrics not found! "
            "The autodeploy backend must log memory statistics for this test to pass. "
            f"Expected metrics: {', '.join(required_metrics)}"
        )
        if require_metrics:
            assert False, f"REQUIRED {message}"
        else:
            print(f"ℹ️ {message}")
            assert False, "KV cache metrics are missing"

    # Check for missing metrics
    missing_metrics = [metric for metric in required_metrics if metric not in kv_cache_metrics]

    if missing_metrics:
        message = (
            f"Missing required KV cache metrics: {missing_metrics}. "
            f"Found metrics: {list(kv_cache_metrics.keys())}. "
            f"All of {required_metrics} are required for the test to pass."
        )
        if require_metrics:
            assert False, message
        else:
            print(f"ℹ️ KV cache validation skipped - {message}")
            assert False, "KV cache metrics are missing"
Fix inconsistent error handling logic
The require_metrics parameter suggests optional validation, but the function always fails with assert False regardless of this parameter's value. This makes the parameter misleading.
Either make the validation truly optional:
if require_metrics:
assert False, f"REQUIRED {message}"
else:
print(f"ℹ️ {message}")
- assert False, "KV cache metrics are missing"
+ return None, None
Or remove the parameter if validation is always required:
-def validate_and_extract_kv_cache_metrics(report_data, free_mem_ratio, require_metrics=True):
+def validate_and_extract_kv_cache_metrics(report_data, free_mem_ratio):
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
around lines 382 to 430, the function validate_and_extract_kv_cache_metrics uses
the require_metrics parameter to indicate optional validation, but it always
fails with assert False regardless of this flag. To fix this, modify the error
handling so that when require_metrics is False, the function does not assert but
instead logs a warning and returns None or skips validation gracefully.
Alternatively, if validation should always be required, remove the
require_metrics parameter and keep the assert failures for missing metrics.
PR_Github #13389 [ run ] triggered by Bot
PR_Github #13389 [ run ] completed with state
Signed-off-by: Eran Geva <[email protected]>
/bot run --reuse-test
Actionable comments posted: 0
♻️ Duplicate comments (2)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)
90-91: Duplicate issue: Move import to top of file.
This issue was already identified in previous reviews. The os module should be imported at the top of the file with other imports.
382-430: Duplicate issue: Fix inconsistent error handling logic.
This issue was already identified in previous reviews. The require_metrics parameter suggests optional validation, but the function always fails with assert False regardless of this parameter's value (lines 414 and 429).
🧹 Nitpick comments (2)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)
265-311: Well-designed dynamic metrics calculation.
The function properly queries actual GPU memory to calculate realistic expected ranges. The error handling for missing PyTorch/CUDA is appropriate.
Consider addressing the TODO comment about extra memory consumption (lines 278-279) to improve the accuracy of the estimates.
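As a rough illustration of the idea, an expected range can be derived from live GPU memory along these lines; the exact formula and tolerance used by the test are not reproduced here and are assumptions.

```python
# Sketch under assumptions: the real test's formula may differ.
import torch


def estimate_expected_kv_cache_mb(free_mem_ratio, tolerance=0.2):
    """Estimate an expected KV cache size range (in MB) from current GPU free memory."""
    if not torch.cuda.is_available():
        return None  # mirrors the graceful handling of missing CUDA
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    free_mb = free_bytes / (1024 * 1024)
    # Assumed relationship: the resized cache consumes roughly free_mem_ratio of free memory.
    expected_mb = free_mb * free_mem_ratio
    return expected_mb * (1 - tolerance), expected_mb * (1 + tolerance)
```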
448-582: Consider refactoring for better modularity.
This function handles the complete benchmarking workflow effectively, but it's quite large and has multiple responsibilities. The logic is sound and error handling is appropriate.
Consider breaking this into smaller functions:
- A function for config file preparation
- A function for benchmark execution and report generation
- A function for metric validation and comparison
This would improve maintainability and testability while preserving the current functionality.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.py
: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without reflection.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
**/*.{cpp,h,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-07-30T06:11:42.362Z
Learning: Applies to **/*.py : The code developed for TensorRT-LLM should conform to Python 3.8+.
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
Learnt from: yiqingy0
PR: NVIDIA/TensorRT-LLM#5198
File: jenkins/mergeWaiveList.py:0-0
Timestamp: 2025-07-22T08:33:49.109Z
Learning: In the TensorRT-LLM waive list merging system, removed lines are always located at the end of the merge waive lists, which is why the mergeWaiveList.py script uses reverse traversal - it's an optimization for this specific domain constraint.
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)
Learnt from: moraxu
PR: #6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-07-30T06:11:42.362Z
Learning: Applies to **/*.py : The code developed for TensorRT-LLM should conform to Python 3.8+.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (7)
1-11: Import structure looks good.
The imports are properly organized at the top of the file and follow standard Python conventions.
13-47: Well-implemented parsing function.
The function properly extracts KV cache metrics using regex patterns, includes appropriate error handling, and provides clear feedback through print statements. The calculation logic for new_cache_size follows the expected formula from the resize_kv_cache implementation.
138-184: Solid performance comparison logic.
The function correctly implements performance comparison with both relative and absolute tolerances. The logic properly handles cases where the autodeploy backend performs better, and provides comprehensive error messages for debugging.
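The pattern being described is roughly the following; the tolerance names and default values are illustrative, not the actual ones in the test.

```python
# Illustrative tolerance check, not the exact code under review.
def assert_throughput_within_tolerance(ad_tps, ref_tps, rel_tol=0.10, abs_tol=5.0):
    """Pass if autodeploy is at least as fast, or slower only within the allowed tolerances."""
    if ad_tps >= ref_tps:
        return  # autodeploy performing better always passes
    rel_ok = ad_tps >= ref_tps * (1.0 - rel_tol)
    abs_ok = (ref_tps - ad_tps) <= abs_tol
    assert rel_ok or abs_ok, (
        f"autodeploy throughput {ad_tps:.2f} regressed beyond tolerance "
        f"vs reference {ref_tps:.2f} (rel_tol={rel_tol}, abs_tol={abs_tol})"
    )
```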
187-227: Good performance validation logic.
The function correctly implements golden reference comparison, allowing performance improvements to pass unconditionally while enforcing tolerances for regressions. The error messages provide all necessary debugging information.
314-366: Comprehensive validation logic.
The function implements thorough validation of KV cache metrics with appropriate tolerances for different metric types. The validation includes both absolute values and relationships between metrics.
585-601: Good simplification of the test function.
The updated function properly uses the new config file format and leverages the new utility functions. The changes align well with the overall refactoring effort.
604-606: Clean test implementation.
The new test function provides a clear entry point for backend comparison testing and properly delegates to the unified comparison function.
PR_Github #13536 [ run ] triggered by Bot
PR_Github #13536 [ run ] completed with state
Signed-off-by: Eran Geva <[email protected]>
/bot run --stage-list "A30-PyTorch-2"
PR_Github #13552 [ run ] triggered by Bot
PR_Github #13552 [ run ] completed with state
added memory checks
changed kv cache test to be hw agnostic
added test with backend comparison
both tests pass
cleanups and refactoring
fixed llm_root issue
shrunk the model, and fixes
fixed trtllm-bench test stability
preserved the old test
set mem ratio to 0.3, fixed bug in default params
Summary by CodeRabbit
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id
(OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test
(OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
--disable-fail-fast
(OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test
(OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx"
(OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe"
(OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp"
(OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test
(OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test
(OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test
(OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge
(OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"
(OPTIONAL) : Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log
(OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug
(OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
kill
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"
is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.