Conversation

venkywonka (Collaborator) commented Aug 25, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Corrected MLP hidden size computation for DeciLM models under tensor parallelism, preventing misconfiguration and initialization errors.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
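
For example, a typical invocation composed from the flags documented above (the combination is illustrative only):

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe" --test-backend "pytorch"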

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

venkywonka self-assigned this Aug 25, 2025
coderabbitai bot (Contributor) commented Aug 25, 2025

📝 Walkthrough

Adjusted mlp_hidden_size computation in get_bindings_model_config for the "DeciLMForCausalLM" architecture: now divides the inferred FFN multiplier by tensor-parallel size. No other control flow or public API changes.
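
A minimal standalone sketch of the arithmetic being fixed (the helper below is illustrative, not the repo's code; only the method names in the walkthrough are taken from the PR):

def per_rank_mlp_hidden_size(ffn_hidden_size: int, tp_size: int) -> int:
    """Per-TP-rank MLP hidden size: the globally inferred FFN size divided
    evenly across tensor-parallel ranks (floor division, as in the fix)."""
    return ffn_hidden_size // tp_size

# E.g. a 14336-wide FFN split across 4 TP ranks:
print(per_rank_mlp_hidden_size(14336, 4))  # 3584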

Changes

Cohort / File(s): DeciLM MLP size computation — tensorrt_llm/_torch/model_config.py
Summary: Updated mlp_hidden_size for the single-architecture "DeciLMForCausalLM" case to use self._infer_nemotron_ffn_mult() // self.mapping.tp_size instead of the previous undivided value.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • tijyojwad
  • Naveassaf
  • shaharmor98
  • yilin-void

Tip

🔌 Remote MCP (Model Context Protocol) integration is now available!

Pro plan users can now connect to remote MCP servers from the Integrations page. Connect with popular remote MCPs such as Notion and Linear to add more context to your reviews and chats.


📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 163e57e and 147a01d.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/model_config.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tensorrt_llm/_torch/model_config.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR/Issue comments)

Type @coderabbitai help to get the list of available commands.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai or @coderabbitai title anywhere in the PR title to generate the title automatically.

Status, Documentation and Community

  • Visit our Status Page to check the current availability of CodeRabbit.
  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai bot changed the title from "[nvbugs/5463720] @coderabbitai title" to "[nvbugs/5463720] feat: Nemotron NAS detection, MLP hidden-size inference, LoRA updates" Aug 25, 2025
venkywonka (Collaborator, Author) commented:

/bot run

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tensorrt_llm/lora_manager.py (1)

256-266: Dataclass field order bug: non-default field follows a default field (will raise at import time)

mlp_hidden_size: int is declared after a field with a default (swap_gate_up_proj_lora_b_weight). Dataclasses require all non-default fields to precede any fields with defaults. This will raise TypeError: non-default argument 'mlp_hidden_size' follows default argument.

Move mlp_hidden_size before defaulted fields or give it a default (e.g., 0). Example fix:

 @dataclass
 class LoraModelConfig:
     lora_target_modules: list[str]
     trtllm_modules_to_hf_modules: dict[str, str]
     hidden_size: int
     dtype: str
-    swap_gate_up_proj_lora_b_weight: bool = True
-    mlp_hidden_size: int
+    mlp_hidden_size: int  # per-rank value; see callers
+    swap_gate_up_proj_lora_b_weight: bool = True
     # True if the model has a variable FFN architecture (e.g., Nemotron-NAS)
     is_variable_ffn: bool = False
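
For reference, a self-contained repro of the dataclass ordering rule (independent of the repo's classes):

from dataclasses import dataclass

# The broken ordering raises at class-creation time, so trigger it under try/except:
try:
    @dataclass
    class Broken:
        a: int = 0
        b: int  # non-default field after a defaulted one
except TypeError as err:
    print(err)  # non-default argument 'b' follows default argument

@dataclass
class Fixed:
    b: int       # non-default fields come first
    a: int = 0

print(Fixed(b=42))  # Fixed(b=42, a=0)
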
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

1-1: Add missing NVIDIA copyright header

The file tensorrt_llm/_torch/pyexecutor/model_engine.py is missing the required NVIDIA license header (per CODING_GUIDELINES.md). All production source files (*.py, *.cpp, *.cu, etc.) must begin with the current‐year NVIDIA copyright notice.

• File requiring update:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py

Suggested diff at the very top of the file:

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import bisect

Please prepend the exact header text as specified in CODING_GUIDELINES.md.

🧹 Nitpick comments (3)
tensorrt_llm/_torch/pyexecutor/config_utils.py (1)

16-21: Make Nemotron-NAS detection more robust and add a brief docstring

The current check requires architectures == ["DeciLMForCausalLM"], which is brittle (e.g., configs that keep the primary class name but alter list contents/order would fail). Recommend accepting any list containing the known class name and documenting the predicate.

Also, per repository guidelines, prepend the NVIDIA copyright header (2025) to this source file.

 def is_nemotron_nas(config) -> bool:
-    if hasattr(config, "architectures") and config.architectures is not None:
-        architectures = config.architectures
-        return len(
-            architectures) == 1 and architectures[0] == "DeciLMForCausalLM"
-    return False
+    """Return True if the config corresponds to a DeciLM-based Nemotron-NAS model."""
+    arch = getattr(config, "architectures", None)
+    if not arch:
+        return False
+    # Be tolerant to lists with >1 element and preserve exact match on the class name.
+    return any(a == "DeciLMForCausalLM" for a in arch)
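
As a quick sanity check of the relaxed predicate (sketch only; SimpleNamespace stands in for a real HF config object):

from types import SimpleNamespace

def is_nemotron_nas(config) -> bool:
    """Relaxed variant from the suggestion above."""
    arch = getattr(config, "architectures", None)
    if not arch:
        return False
    return any(a == "DeciLMForCausalLM" for a in arch)

assert is_nemotron_nas(SimpleNamespace(architectures=["DeciLMForCausalLM"]))
assert is_nemotron_nas(SimpleNamespace(architectures=["Wrapper", "DeciLMForCausalLM"]))
assert not is_nemotron_nas(SimpleNamespace(architectures=None))
assert not is_nemotron_nas(SimpleNamespace())  # no attribute at all
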
tensorrt_llm/_torch/model_config.py (1)

345-347: Wrap long error message to satisfy E501 and improve readability

The line exceeds 120 chars. Wrap and keep the core detail.

-            raise ValueError(
-                f"Inferring mlp hidden size for model architecture: {self.pretrained_config.architectures} isn't supported yet"
-            )
+            raise ValueError(
+                "Inferring mlp hidden size is not supported for architectures: "
+                f"{self.pretrained_config.architectures}"
+            )
tests/integration/defs/examples/test_nemotron_nas.py (1)

178-181: Optional: broadcast a single LoRARequest instead of duplicating it

If generate() supports broadcasting a single LoRARequest across the batch, you can pass lora_request=lora_request instead of a list. If not, current approach is fine.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d010b20 and ba64089.

📒 Files selected for processing (7)
  • tensorrt_llm/_torch/model_config.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/config_utils.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py (2 hunks)
  • tensorrt_llm/lora_manager.py (3 hunks)
  • tests/integration/defs/examples/test_nemotron_nas.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures

Files:

  • tensorrt_llm/_torch/pyexecutor/config_utils.py
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
  • tensorrt_llm/_torch/model_config.py
  • tests/integration/defs/examples/test_nemotron_nas.py
  • tensorrt_llm/lora_manager.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tensorrt_llm/_torch/pyexecutor/config_utils.py
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
  • tensorrt_llm/_torch/model_config.py
  • tests/integration/defs/examples/test_nemotron_nas.py
  • tensorrt_llm/lora_manager.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/examples/test_nemotron_nas.py
🧬 Code graph analysis (5)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (2)
tensorrt_llm/lora_manager.py (1)
  • LoraModelConfig (257-265)
tensorrt_llm/_utils.py (1)
  • binding_to_str_dtype (198-201)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
tensorrt_llm/_torch/pyexecutor/config_utils.py (1)
  • is_nemotron_nas (16-21)
tensorrt_llm/lora_manager.py (1)
  • LoraModelConfig (257-265)
tensorrt_llm/_torch/pyexecutor/_util.py (2)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
  • set_lora_model_config (441-456)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (1)
  • PeftCacheManager (1170-1284)
tensorrt_llm/_torch/model_config.py (1)
tensorrt_llm/_torch/pyexecutor/config_utils.py (2)
  • is_nemotron_hybrid (1-4)
  • is_nemotron_nas (16-21)
tests/integration/defs/examples/test_nemotron_nas.py (4)
tests/integration/defs/conftest.py (4)
  • get_device_memory (1946-1977)
  • get_sm_version (1857-1860)
  • llm_models_root (77-83)
  • nemotron_nas_model_root (1266-1277)
tensorrt_llm/executor/request.py (1)
  • LoRARequest (24-53)
tensorrt_llm/lora_manager.py (1)
  • LoraConfig (236-253)
tensorrt_llm/sampling_params.py (1)
  • SamplingParams (125-483)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/model_config.py

346-346: Line too long (127 > 120)

(E501)

tests/integration/defs/examples/test_nemotron_nas.py

149-149: Line too long (125 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (6)
tensorrt_llm/lora_manager.py (1)

1185-1190: Config layout change (adds is_dora flag): ensure C++ side supports both Nemo (3-int) and HF (4-int) rows

HF path now pushes 4-int entries [module_id, layer_idx, rank, is_dora] while NeMo path still pushes 3 ints [module_id, layer_idx, rank]. Please confirm PeftCacheManager and downstream kernels accept both shapes or have a versioned interpretation. If not uniform, NeMo should also populate a 4th column (0) for consistency.

Would you like me to scan the repo for the C++ reader to confirm the expected row width and propose updates?

tensorrt_llm/_torch/pyexecutor/_util.py (1)

510-516: Confirm PEFT config row schema alignment with C++

With the earlier HF config change to include is_dora, verify that peft_cache_manager.impl expects the same row width in lora_config. If not, we should also append a 4th column in NeMo path or strip the flag for HF until C++ is updated.

Would you like me to add a defensive adapter (pad/truncate config rows) before unsqueeze(0) to keep shapes uniform across sources?
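
One possible shape for that defensive adapter (a hypothetical helper, not existing repo code):

import torch

def pad_config_rows(rows: torch.Tensor, width: int = 4) -> torch.Tensor:
    """Pad 3-int NeMo rows [module_id, layer_idx, rank] with a zero is_dora
    column so every config row matches the 4-int HF layout."""
    if rows.shape[-1] >= width:
        return rows
    pad = torch.zeros(*rows.shape[:-1], width - rows.shape[-1], dtype=rows.dtype)
    return torch.cat([rows, pad], dim=-1)

nemo_rows = torch.tensor([[3, 0, 8], [3, 1, 8]])
print(pad_config_rows(nemo_rows))  # appends is_dora=0 to each row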

tests/integration/defs/examples/test_nemotron_nas.py (3)

182-187: LoRARequest may raise on missing path; guard before constructing

LoRARequest.post_init validates lora_path exists and will raise ValueError. With the proposed guard above, this is safe; without it, the test can error out instead of being skipped.

If you keep any fallback path logic, ensure the guard executes before constructing LoRARequest.


146-146: Docstring is helpful—nice touch

Short, descriptive docstring aligns with our test style.


4-4: Good: integrates ci_profiler for timing

Consistent with other integration tests and aids perf triage.

tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

56-56: Importing is_nemotron_nas where used is fine

Brings the NAS predicate close to its use; matches existing is_mla import.

tensorrt-cicd (Collaborator) commented:

PR_Github #16483 [ run ] triggered by Bot

tensorrt-cicd (Collaborator) commented:

PR_Github #16483 [ run ] completed with state FAILURE
/LLM/release-1.0/L0_MergeRequest_PR pipeline #304 completed with status: 'FAILURE'

shaharmor98 (Collaborator) left a comment

Great work!!
Left some comments

venkywonka marked this pull request as ready for review August 26, 2025 21:42
venkywonka requested a review from a team as a code owner August 26, 2025 21:42
venkywonka changed the title from "[nvbugs/5463720] feat: Nemotron NAS detection, MLP hidden-size inference, LoRA updates" to "[nvbugs/5463720][fix] Nemotron NAS detection, MLP hidden-size inference, LoRA updates" Aug 26, 2025
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

441-468: API safety: return the config and avoid positional-arg breakage; make mlp_hidden_size keyword-only

  • The method is annotated (by usage) to construct a LoraModelConfig but does not return it.
  • Introducing mlp_hidden_size as a positional parameter can silently shift existing positional call sites, misassigning the boolean to mlp_hidden_size.

Apply this diff to return the config and make mlp_hidden_size keyword-only with a safe contract:

-    def set_lora_model_config(
-        self,
-        lora_target_modules: list[str],
-        trtllm_modules_to_hf_modules: dict[str, str],
-        mlp_hidden_size: int,
-        swap_gate_up_proj_lora_b_weight: bool = True,
-    ):
+    def set_lora_model_config(
+        self,
+        lora_target_modules: list[str],
+        trtllm_modules_to_hf_modules: dict[str, str],
+        swap_gate_up_proj_lora_b_weight: bool = True,
+        *,
+        mlp_hidden_size: Optional[int] = None,
+    ) -> LoraModelConfig:
@@
-        """
+        """
         Create LoraModelConfig for the current model.
@@
-            mlp_hidden_size: The TP-split intermediate size. For Nemotron-NAS, this is the upper-bound intermediate size (of the largest FFN layer)
+            mlp_hidden_size: TP-split intermediate size. For variable-FFN models (e.g., Nemotron-NAS),
+                this must be the per-TP upper bound intermediate size (largest FFN layer).
             swap_gate_up_proj_lora_b_weight: Swap behavior for gate/up-proj B weights.
         """
-        # TODO: Update this to detect more models with variable FFN architecture in the future.
-        # Currently, only Nemotron-NAS has variable FFN architecture.
-        is_var_ffn = is_nemotron_nas(
-            self.model.config)  # self.model.config is the HF pretrained config
+        # TODO: generalize variable-FFN detection beyond Nemotron-NAS when more models are added.
+        is_var_ffn = is_nemotron_nas(self.model.config)  # HF pretrained config
+        if mlp_hidden_size is None and is_var_ffn:
+            raise ValueError("mlp_hidden_size is required for variable-FFN models (per-TP value).")
@@
         self.lora_model_config = LoraModelConfig(
             lora_target_modules=lora_target_modules,
             trtllm_modules_to_hf_modules=trtllm_modules_to_hf_modules,
-            hidden_size=self.model.config.hidden_size,  # global, unsplit
+            hidden_size=self.model.config.hidden_size,  # global, unsplit
             dtype=torch_dtype_to_str(self.model.config.torch_dtype),
-            mlp_hidden_size=mlp_hidden_size * self.mapping.tp_size,  # global
+            mlp_hidden_size=mlp_hidden_size if mlp_hidden_size is not None else 0,  # per-TP
             swap_gate_up_proj_lora_b_weight=swap_gate_up_proj_lora_b_weight,
             is_variable_ffn=is_var_ffn)
+        return self.lora_model_config

Follow-ups:

  • Update call sites to use explicit keywords (see verification script below).
  • Keep mlp_hidden_size per-TP; do not multiply by tp_size (a signature sketch follows below). See next comment for the bug fix.
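
To make the keyword-only contract concrete, here is a standalone sketch of the proposed signature (body elided relative to the real method; argument values are illustrative):

from typing import Optional

def set_lora_model_config(
    lora_target_modules: list,
    trtllm_modules_to_hf_modules: dict,
    swap_gate_up_proj_lora_b_weight: bool = True,
    *,
    mlp_hidden_size: Optional[int] = None,  # keyword-only: cannot be filled positionally
):
    if mlp_hidden_size is None:
        raise ValueError("mlp_hidden_size is required for variable-FFN models")
    return mlp_hidden_size

# A boolean can no longer land in mlp_hidden_size by positional accident:
set_lora_model_config(["attn_q"], {"attn_q": "q_proj"}, mlp_hidden_size=2048)
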
🧹 Nitpick comments (9)
tensorrt_llm/_torch/model_config.py (1)

1-1: Add mandatory NVIDIA copyright header

Per repository guidelines, prepend the current-year NVIDIA copyright header to all source files.

tensorrt_llm/_torch/pyexecutor/model_engine.py (3)

1-1: Add mandatory NVIDIA copyright header

Per repository guidelines, prepend the current-year NVIDIA copyright header to all source files.


450-455: Wrap long docstring line to satisfy Ruff E501 (120 cols)

Line length exceeds 120 chars. Break it across lines.

Apply this diff:

-            mlp_hidden_size: The TP-split intermediate size. For Nemotron-NAS, this is the upper-bound intermediate size (of the largest FFN layer)
+            mlp_hidden_size: TP-split intermediate size. For Nemotron-NAS, this is the
+                upper-bound intermediate size (of the largest FFN layer).

458-459: Generalize variable-FFN detection (future-proofing)

Right now, variability is keyed strictly to Nemotron-NAS. Consider introducing a generic has_variable_ffn(config) in config_utils and using that here to de-couple architecture from behavior.

I can draft a small helper and wire it up across the call sites if you want.

tensorrt_llm/lora_manager.py (5)

1-1: Add mandatory NVIDIA copyright header

Per repository guidelines, prepend the current-year NVIDIA copyright header to all source files.


262-266: LoraModelConfig: clarify semantics and consider a safe default

  • mlp_hidden_size is now required; ensure all instantiations pass it.
  • Semantics: it must be per-TP for variable-FFN packaging.

Apply this minor change to add a safe default and a clarifying comment:

-    mlp_hidden_size: int
+    mlp_hidden_size: int = 0  # per-TP MLP hidden size (upper bound for variable-FFN); 0 if not used

Optionally add a short class docstring noting per-TP expectation.


170-203: iterate_hf_lora: error message UX

Minor: The KeyError for “unsupported key … from HF LoRA weights” is helpful. Consider including the known valid pattern in the message to aid debugging.


943-1211: Edge-case: empty module selection can break max() padding

If no modules are ultimately appended to _cpp_lora_weights[uid], max() will raise ValueError. Not new, but more likely with stricter filtering. Guard before computing max_weight_size; if empty, raise a clear error for that adapter UID.
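
A minimal guard matching that suggestion (sketch; the function name is illustrative):

def guarded_max_weight_size(weight_sizes: list, uid: int) -> int:
    """max() with a clear error for adapters whose module selection came up empty."""
    if not weight_sizes:
        raise ValueError(f"No LoRA modules selected for adapter uid={uid}; "
                         "cannot compute max_weight_size")
    return max(weight_sizes)

print(guarded_max_weight_size([128, 256, 64], uid=0))  # 256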


256-266: Type hints present; consider docstring for LoraModelConfig fields

Add a short docstring to the dataclass to document per-TP vs global semantics for hidden_size and mlp_hidden_size to prevent misuse across TP paths.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ba64089 and 163e57e.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/model_config.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py (2 hunks)
  • tensorrt_llm/lora_manager.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures

Files:

  • tensorrt_llm/lora_manager.py
  • tensorrt_llm/_torch/model_config.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tensorrt_llm/lora_manager.py
  • tensorrt_llm/_torch/model_config.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: shaharmor98
PR: NVIDIA/TensorRT-LLM#7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.137Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.
📚 Learning: 2025-08-26T06:07:02.137Z
Learnt from: shaharmor98
PR: NVIDIA/TensorRT-LLM#7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.137Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.

Applied to files:

  • tensorrt_llm/lora_manager.py
  • tensorrt_llm/_torch/model_config.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/model_config.py
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/_torch/model_config.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/model_config.py (1)
tensorrt_llm/_torch/pyexecutor/config_utils.py (2)
  • is_nemotron_hybrid (1-4)
  • is_nemotron_nas (16-21)
tensorrt_llm/_torch/pyexecutor/model_engine.py (5)
tensorrt_llm/_torch/pyexecutor/config_utils.py (1)
  • is_nemotron_nas (16-21)
tensorrt_llm/_torch/models/checkpoints/base_weight_mapper.py (3)
  • model (162-165)
  • config (156-159)
  • mapping (152-153)
tensorrt_llm/_torch/models/modeling_utils.py (1)
  • config (494-495)
tensorrt_llm/lora_manager.py (1)
  • LoraModelConfig (257-265)
tensorrt_llm/_torch/distributed/communicator.py (1)
  • tp_size (46-47)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/model_engine.py

453-453: Line too long (147 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tensorrt_llm/_torch/model_config.py (1)

10-11: Importing is_nemotron_nas is appropriate here

Pulling the NAS predicate into model_config is consistent with usage below. No issues.

tensorrt_llm/_torch/pyexecutor/model_engine.py (2)

56-56: OK to import is_nemotron_nas here

Localizing the predicate improves readability where FFN variability is needed.


441-468: Call-site safety confirmed: The single call to set_lora_model_config in tensorrt_llm/_torch/pyexecutor/_util.py (line 504) already uses keyword arguments and does not multiply mlp_hidden_size by tp_size. No further updates are required.

tensorrt_llm/lora_manager.py (1)

1185-1189: Good refactor: centralize per-module flattening

Replacing ad-hoc concatenation with _flatten_lora_weights_per_module improves consistency and reduces duplication.
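
For intuition, a toy version of such a per-module flattening helper (hypothetical; the repo's actual implementation may differ):

import torch

def flatten_lora_weights_per_module(weights: dict) -> torch.Tensor:
    """Flatten each module's weight tensors and concatenate into one buffer,
    iterating modules in a deterministic order."""
    parts = []
    for name in sorted(weights):
        for w in weights[name]:  # e.g. [lora_A, lora_B]
            parts.append(w.contiguous().flatten())
    return torch.cat(parts)

demo = {"attn_q": [torch.ones(2, 3), torch.ones(3, 2)]}
print(flatten_lora_weights_per_module(demo).shape)  # torch.Size([12])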

Signed-off-by: Venky Ganesh <[email protected]>
venkywonka force-pushed the user/venky/lora-for-nemotron-nas branch from d0f098d to 147a01d on August 26, 2025 22:33
venkywonka changed the title from "[nvbugs/5463720][fix] Nemotron NAS detection, MLP hidden-size inference, LoRA updates" to "[nvbugs/5463720][fix] tp-split the inferred mlp_hidden_size for nemotron-nas" Aug 26, 2025
venkywonka changed the title from "[nvbugs/5463720][fix] tp-split the inferred mlp_hidden_size for nemotron-nas" to "[https://nvbugs/5463720][fix] tp-split the inferred mlp_hidden_size for nemotron-nas" Aug 26, 2025
venkywonka (Collaborator, Author) commented:

/bot run --post-merge

tensorrt-cicd (Collaborator) commented:

PR_Github #16603 [ run ] triggered by Bot

venkywonka (Collaborator, Author) commented:

/bot run

tensorrt-cicd (Collaborator) commented:

PR_Github #16648 [ run ] triggered by Bot

tensorrt-cicd (Collaborator) commented:

PR_Github #16603 [ run ] completed with state ABORTED

MartinMarciniszyn enabled auto-merge (squash) August 27, 2025 07:39
amitz-nv disabled auto-merge August 27, 2025 07:58
tensorrt-cicd (Collaborator) commented:

PR_Github #16648 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #321 completed with status: 'SUCCESS'

shaharmor98 (Collaborator) left a comment

Absolutely lovely, looks much better :)
LGTM

shaharmor98 merged commit 6cc168a into NVIDIA:release/1.0 on Aug 27, 2025
4 of 9 checks passed
yuanjingx87 pushed a commit that referenced this pull request Aug 28, 2025
dominicshanshan pushed commits to dominicshanshan/TensorRT-LLM that referenced this pull request, Sep 4–8, 2025
… for nemotron-nas (NVIDIA#7231)

Signed-off-by: Venky Ganesh <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>