
Conversation

jiaganc
Collaborator

@jiaganc jiaganc commented Aug 19, 2025

Summary by CodeRabbit

  • Bug Fixes
  • For the PyTorch backend, the CUDA graph configuration now auto-sets a max batch size from settings when not provided, improving stability and preventing misconfiguration.
  • Chores
    • Internal handling adjusted to apply backend-specific defaults after processing extra options. No changes to public APIs.

Description

In 0.21.0, cuda_graph_max_batch_size is a field of TorchLlmArgs. When it is not specified by the user, trtllm-bench uses the value from get_settings, which defaults it to the runtime max_batch_size.

However, in 1.0.0, the CUDA graph configs were moved to CudaGraphConfig, and all configs coming from get_settings are overwritten by the configs from extra_llm_api_options. In the reported bug, cuda_graph_config.max_batch_size ends up as None, so validate_cuda_graph_config falls back to the default value 128 when generating the batch size list. The missing CUDA graphs for batch sizes above 128 cause the performance regression.

This PR makes trtllm-bench use the runtime max_batch_size when cuda_graph_config.max_batch_size is not provided.
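
In code terms, the fix is a small post-processing step in tensorrt_llm/bench/dataclasses/configuration.py. A simplified, illustrative sketch of the intended logic (attribute names follow the walkthrough and review thread below; the exact code is in the diff):

# Sketch of RuntimeConfig.get_llm_args() post-processing, applied after merging
# extra_llm_api_options (simplified; see the actual change for details).
if self.backend == "pytorch":
    cuda_graph_config = updated_llm_args.get("cuda_graph_config") or {}
    # Fall back to the runtime max_batch_size only when the user supplied
    # neither an explicit batch_sizes list nor a max_batch_size.
    if cuda_graph_config.get("batch_sizes") is None and \
            cuda_graph_config.get("max_batch_size") is None:
        cuda_graph_config["max_batch_size"] = self.settings_config.max_batch_size
    updated_llm_args["cuda_graph_config"] = cuda_graph_config
return updated_llm_args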

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care and validation can break the top of tree.

@jiaganc jiaganc requested a review from a team as a code owner August 19, 2025 08:57
Contributor

coderabbitai bot commented Aug 19, 2025

📝 Walkthrough


Adds PyTorch-specific post-processing in get_llm_args: after applying extra LLM API options, it adjusts cuda_graph_config for the PyTorch backend, defaulting cuda_graph_config.max_batch_size from settings_config.max_batch_size when not otherwise specified, and returns the updated args.

Changes

Cohort: PyTorch-specific cuda_graph_config handling
File(s): tensorrt_llm/bench/dataclasses/configuration.py
Summary: After merging extra options, for backend == "pytorch" it extracts/updates cuda_graph_config and sets max_batch_size from settings_config when batch_sizes and max_batch_size are unset; returns the updated dict instead of directly returning the merge result.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller
  participant Configuration as Configuration.get_llm_args
  participant Utils as update_llm_args_with_extra_options

  Caller->>Configuration: get_llm_args()
  Configuration->>Utils: merge llm_args with extra_llm_api_options
  Utils-->>Configuration: updated_llm_args

  alt backend == "pytorch"
    Note over Configuration: Extract cuda_graph_config
    Configuration->>Configuration: if no batch_sizes/max_batch_size<br/>set cuda_graph_config.max_batch_size = settings_config.max_batch_size
    Configuration->>Configuration: write back cuda_graph_config
  else backend != "pytorch"
    Note over Configuration: No additional processing
  end

  Configuration-->>Caller: return updated_llm_args

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


Suggested reviewers

  • pcastonguay
  • achartier
  • amitz-nv

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/llmapi/llm_args.py (1)

150-153: Off-by-one omits power-of-two 256 (and others) when max_batch_size isn’t a power-of-two

The intent says “Add powers of 2 up to max_batch_size,” but the current range excludes the endpoint. For example, with max_batch_size=300, 256 should be included but isn’t. Fix by making the upper bound inclusive.

Apply this diff:

-        # Add powers of 2 up to max_batch_size
-        batch_sizes += [
-            2**i for i in range(8, math.floor(math.log(max_batch_size, 2)))
-        ]
+        # Add powers of 2 up to max_batch_size (inclusive)
+        batch_sizes += [
+            2**i for i in range(8, math.floor(math.log2(max_batch_size)) + 1)
+        ]
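
A quick sanity check of the two bounds (illustrative values only; the rest of the batch_sizes construction is omitted):

import math

max_batch_size = 300
# Exclusive bound (current code): range(8, 8) -> 256 is never generated.
old = [2**i for i in range(8, math.floor(math.log(max_batch_size, 2)))]
# Inclusive bound (proposed fix): range(8, 9) -> [256].
new = [2**i for i in range(8, math.floor(math.log2(max_batch_size)) + 1)]
print(old, new)  # [] [256]
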
🧹 Nitpick comments (2)
tensorrt_llm/llmapi/llm_args.py (2)

2233-2237: Nit: clarify comment wording and avoid magic number

  • “base_llm_args.max_batch_size” can confuse readers; the field here is self.max_batch_size (runtime).
  • Consider extracting 128 into a named constant to avoid magic numbers and centralize defaults.

Apply this diff to tighten the comment:

-            # Use the max batch size from:
-            #   1. cuda_graph_config.max_batch_size, if provided,
-            #   2. base_llm_args.max_batch_size, if provided,
-            #   3. default value 128.
+            # Resolve max batch size from (in order of precedence):
+            #   1. cuda_graph_config.max_batch_size (if provided),
+            #   2. self.max_batch_size (runtime, if provided),
+            #   3. default value 128.

Optionally, define a constant once and use it here:

# Near the top-level of this module (e.g., after imports)
DEFAULT_CUDA_GRAPH_MAX_BATCH_SIZE = 128

And then:

max_batch_size = (
    config.max_batch_size
    or self.max_batch_size
    or DEFAULT_CUDA_GRAPH_MAX_BATCH_SIZE
)

2236-2236: Fix Ruff E501 (line too long) by wrapping the expression

Ruff flagged this line as exceeding 120 chars. Break it across multiple lines using parentheses or intermediate variables so it passes lint without altering behavior.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d6c2a6a and cc3beac.

📒 Files selected for processing (1)
  • tensorrt_llm/llmapi/llm_args.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/llmapi/llm_args.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/llmapi/llm_args.py
🪛 Ruff (0.12.2)
tensorrt_llm/llmapi/llm_args.py

2236-2236: Line too long (124 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/llmapi/llm_args.py (1)

2233-2242: Precedence fix restores expected behavior and propagates resolved value

Good call using the fallback chain (config.max_batch_size -> self.max_batch_size -> 128) and writing the resolved value back to config.max_batch_size. This mirrors pre-1.0 defaulting and ensures downstream consumers (e.g., get_pytorch_backend_config) see a consistent, non-zero max_batch_size and a generated batch size list.

@jiaganc
Collaborator Author

jiaganc commented Aug 19, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15757 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15757 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #212 completed with status: 'FAILURE'

@jiaganc
Collaborator Author

jiaganc commented Aug 20, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15863 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15863 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #226 completed with status: 'FAILURE'

@jiaganc
Collaborator Author

jiaganc commented Aug 20, 2025

/bot run

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (2)

2018-2018: Confirm KV cache memory fraction (0.8) won’t cause warmup/CG OOMs

Bumping free_gpu_memory_fraction to 0.8 reduces headroom for non-KV allocations and larger CUDA graph captures (now defaulting to runtime max_batch_size). Most tests here use 0.6–0.75. Please sanity-check this on smallest target GPUs that run this suite.

If you want to align with existing tests, here’s a minimal change:

-        kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.8)
+        kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.75)

2024-2026: Add a regression assertion to lock in CUDA Graph fallback precedence

Given the PR restores “use runtime max_batch_size when config.max_batch_size is None”, add a quick assertion to prevent regressions in future refactors.

Apply this diff inside the with-block before running tasks:

         with LLM(f"{llm_models_root()}/Qwen3/Qwen3-8B",
                  tensor_parallel_size=tp_size,
                  pipeline_parallel_size=pp_size,
                  moe_expert_parallel_size=ep_size,
                  kv_cache_config=kv_cache_config,
                  **pytorch_config,
                  enable_attention_dp=attention_dp) as llm:
+            # Validate: when CudaGraphConfig.max_batch_size is unspecified,
+            # it should default to the runtime (llm.args) max_batch_size.
+            assert llm.args.cuda_graph_config is not None
+            assert llm.args.cuda_graph_config.max_batch_size == llm.args.max_batch_size
             task = CnnDailymail(self.MODEL_NAME)
             task.evaluate(llm)
             task = MMLU(self.MODEL_NAME)
             task.evaluate(llm)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between cc3beac and bd35a62.

📒 Files selected for processing (1)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.859Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd
Collaborator

PR_Github #15911 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15911 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #235 completed with status: 'FAILURE'

@ruodil
Collaborator

ruodil commented Aug 21, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15974 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15974 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #244 completed with status: 'FAILURE'

@jiaganc jiaganc force-pushed the dev-jiaganc-fix-cuda-graph-max-batchsize branch from bd35a62 to 93d792d on August 21, 2025 03:54
@jiaganc
Collaborator Author

jiaganc commented Aug 21, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15989 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15989 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #246 completed with status: 'FAILURE'

@jiaganc
Collaborator Author

jiaganc commented Aug 21, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #16028 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #16028 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #249 completed with status: 'FAILURE'

@jiaganc
Collaborator Author

jiaganc commented Aug 21, 2025

The extra graphs cause lots of OOMs in CI tests. Maybe we should use max_batch_size as the default value only in trtllm-bench.

@jiaganc
Collaborator Author

jiaganc commented Aug 21, 2025

/bot run

@jiaganc jiaganc changed the title [https://nvbugs/5451342][fix] Use runtime max_batch_size when cuda_graph_config.max_batch_size is not provided [https://nvbugs/5451342][fix] Use runtime max_batch_size when cuda_graph_config.max_batch_size is not provided in trtllm-bench Aug 21, 2025
@tensorrt-cicd
Collaborator

PR_Github #16051 [ run ] triggered by Bot

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tensorrt_llm/bench/dataclasses/configuration.py (3)

83-86: Python 3.8 compatibility: replace dict union operator.

The codebase targets Python 3.8+, but the dict union operator “|” requires Python 3.9+. Use dict unpacking to remain compatible.

Apply this diff:

-        llm_args["kv_cache_config"] = backend_cache_config | kv_cache_config
+        llm_args["kv_cache_config"] = {**backend_cache_config, **kv_cache_config}

65-67: Potential AttributeError when decoding_config is None.

RuntimeConfig.decoding_config is Optional, but get_llm_args unconditionally calls self.decoding_config.get_decoding_config(). If decoding_config is None, this will crash.

Apply this diff to avoid the crash:

-            "decoding_config":
-            self.decoding_config.get_decoding_config(),
+            "decoding_config":
+            (self.decoding_config.get_decoding_config()
+             if self.decoding_config is not None else None),

1-1: Add NVIDIA copyright header.

All source files must prepend the current-year NVIDIA copyright header.

Apply this diff:

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
🧹 Nitpick comments (2)
tensorrt_llm/bench/dataclasses/configuration.py (2)

120-125: Consider capping graph cache or exposing a bench-only cap to mitigate CI OOMs.

Bench defaults set cuda_graph_cache_size to 1000, and combined with large inferred max_batch_size this can blow up memory in CI. A modest bench-only cap (e.g., 128–256) or an env-tunable ceiling could help.

If you want, I can wire an env var like TLLM_BENCH_CUDA_GRAPH_DEFAULT_MAX_CAP to clamp inferred max_batch_size=min(runtime_max, cap).
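
For reference, a rough sketch of that clamp (the env var name and the 256 default below follow the suggestion above and are hypothetical, not existing behavior):

import os

# Hypothetical bench-only ceiling, within the 128-256 range suggested above.
DEFAULT_CUDA_GRAPH_MAX_CAP = 256

def capped_cuda_graph_max_batch_size(runtime_max_batch_size: int) -> int:
    # TLLM_BENCH_CUDA_GRAPH_DEFAULT_MAX_CAP is a proposed, not existing, env var.
    cap = int(os.environ.get("TLLM_BENCH_CUDA_GRAPH_DEFAULT_MAX_CAP",
                             DEFAULT_CUDA_GRAPH_MAX_CAP))
    return min(runtime_max_batch_size, cap)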


95-101: Clarify empty batch_sizes as unset in bench configuration

To align with the LLM-API’s handling—where an empty list ([]) is treated the same as “not provided”—update the check in tensorrt_llm/bench/dataclasses/configuration.py (around lines 95–101) to use truthiness rather than is not None. For example:

             # Use runtime max_batch_size if neither field is supplied.
-            batch_sizes_set = cuda_graph_config.get("batch_sizes", None) is not None
-            max_batch_size_set = cuda_graph_config.get("max_batch_size", None) is not None
-            if not batch_sizes_set and not max_batch_size_set:
+            batch_sizes = cuda_graph_config.get("batch_sizes")
+            max_batch_size = cuda_graph_config.get("max_batch_size")
+            if not batch_sizes and not max_batch_size:
                 cuda_graph_config[
                     "max_batch_size"] = self.settings_config.max_batch_size

• This treats both None and [] as “unset,” ensuring that an explicit empty list falls through to the default-injection branch.
• No existing unit tests cover the empty-list case in the bench code—consider adding one to prevent regressions.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between bd35a62 and 6c7142f.

📒 Files selected for processing (1)
  • tensorrt_llm/bench/dataclasses/configuration.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/bench/dataclasses/configuration.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/bench/dataclasses/configuration.py
🔇 Additional comments (1)
tensorrt_llm/bench/dataclasses/configuration.py (1)

87-89: Return-path looks good after extra options merge.

The flow of updating llm_args with extra_llm_api_options, post-processing, then returning updated_llm_args is correct and aligns with the PR objective.

@tensorrt-cicd
Collaborator

PR_Github #16051 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #251 completed with status: 'FAILURE'

@jiaganc
Collaborator Author

jiaganc commented Aug 26, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #16505 [ run ] triggered by Bot

@jiaganc jiaganc enabled auto-merge (squash) August 26, 2025 05:28
@tensorrt-cicd
Collaborator

PR_Github #16505 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #308 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@jiaganc jiaganc merged commit 85b4ae2 into NVIDIA:release/1.0 Aug 26, 2025
7 checks passed
@jiaganc jiaganc deleted the dev-jiaganc-fix-cuda-graph-max-batchsize branch August 26, 2025 14:33
yuanjingx87 pushed a commit that referenced this pull request Aug 28, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 4, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 5, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 5, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 6, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 6, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 7, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 8, 2025
…aph_config.max_batch_size is not provided in trtllm-bench (NVIDIA#7031)

Signed-off-by: Jiagan Cheng <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>