[None][fix] Refactoring to avoid circular import when importing torch models #6720
Conversation
Signed-off-by: Rakib Hasan <[email protected]>
📝 Walkthrough

Refactors LoRA utilities by adding a new `lora_helper` module that hosts `LoraConfig`, `use_lora`, and related helpers, with model files and `lora_manager` updated to import from it.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Model
    participant lora_helper
    participant lora_manager
    Model->>lora_helper: import LoraConfig, use_lora
    Model->>lora_helper: cfg = LoraConfig(...)
    Model->>lora_helper: use_lora(model, cfg)
    lora_helper->>lora_manager: call loader (load_torch_lora / load_nemo_lora)
    lora_manager-->>lora_helper: loaded LoRA params
    lora_helper-->>Model: LoRA applied to model
```
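The flow above boils down to one pattern: `lora_helper` only imports `lora_manager` inside the function body, so neither module depends on the other at import time. A minimal sketch of that pattern, assuming the loader names from the diagram (the real signatures and fields may differ):

```python
# lora_helper.py (sketch, not the actual file)
from dataclasses import dataclass, field
from typing import List


@dataclass
class LoraConfig:
    lora_dir: List[str] = field(default_factory=list)
    lora_ckpt_source: str = "hf"  # "hf" or "nemo"


def use_lora(model, lora_config: LoraConfig) -> None:
    """Apply LoRA checkpoints to `model`, picking the loader by source."""
    # Importing lora_manager here, not at module level, is what breaks the
    # circular-import chain: lora_manager can import lora_helper freely.
    if lora_config.lora_ckpt_source == "nemo":
        from .lora_manager import load_nemo_lora
        load_nemo_lora(model, lora_config)
    else:
        from .lora_manager import load_torch_lora
        load_torch_lora(model, lora_config)
```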
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~15–20 minutes
Actionable comments posted: 2
🔭 Outside diff range comments (3)
tensorrt_llm/llmapi/build_cache.py (3)
1-3: Add NVIDIA copyright header — mandatory for production source files

Per project guidelines, every `.py` file under `tensorrt_llm/**` must start with the standard NVIDIA copyright / Apache-2.0 header. Please prepend the header before any imports.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
```
183-189: Incorrect variable referenced in log message

`dir` is undefined here (and shadows the builtin). The intended variable is `file_or_dir`. Note that the replacement should use lazy %-formatting rather than an f-string, so the placeholder is actually substituted:

```diff
-            logger.info(f"Removing invalid cache directory {dir}")
+            logger.info("Removing invalid cache directory %s", file_or_dir)
```
262-270: Replace bare `except` with explicit exception handling

Catching all exceptions masks unexpected errors and violates the coding guidelines. Limit the scope to JSON/IO parsing failures and log the reason for easier debugging.

```diff
-        except:
-            pass
+        except (json.JSONDecodeError, FileNotFoundError, KeyError) as err:
+            logger.debug("Failed to validate cache metadata: %s", err)
```
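For context, a runnable sketch of the narrow-except shape the guidelines ask for, applied to cache-metadata validation (the function name, file layout, and "version" key are illustrative assumptions, not the actual build_cache.py code):

```python
import json
import logging

logger = logging.getLogger(__name__)


def cache_metadata_is_valid(metadata_path: str) -> bool:
    """Return True if the cache metadata file parses and has a version field."""
    try:
        # Keep the try body as small as possible: just the risky operations.
        with open(metadata_path) as f:
            version = json.load(f)["version"]
    except (json.JSONDecodeError, FileNotFoundError, KeyError) as err:
        # Only the failures we expect from parsing; anything else propagates.
        logger.debug("Failed to validate cache metadata: %s", err)
        return False
    else:
        # Per the guidelines, the success-path logic lives in the else block.
        return version is not None
```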
♻️ Duplicate comments (4)
tensorrt_llm/models/phi/model.py (1)

23-23: Same import-style issue as noted in GPT model

See earlier comment – switch to `from ... import lora_helper` and reference symbols through the module namespace for consistency.

tensorrt_llm/models/gemma/model.py (1)

31-31: Same import-style issue as noted in GPT model

Adopt a module-level import to keep the namespace intact.

tensorrt_llm/models/grok/model.py (1)

21-21: Same import-style issue as noted in GPT model

Please switch to a module import (`from ... import lora_helper`).

tensorrt_llm/models/llama/model.py (1)

28-28: Same import-style issue as noted in GPT model

Use a module-level import to comply with project guidelines.
🧹 Nitpick comments (2)
tensorrt_llm/top_model_mixin.py (1)

1-1: Consider updating copyright year.

Since modifications are being made to this file, consider updating the copyright year to include 2024 to match other modified files in this PR.

```diff
-# SPDX-FileCopyrightText: Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
```

tensorrt_llm/models/gpt/model.py (1)

24-24: Import breaks the "keep-namespace" guideline

Coding guidelines mandate importing the module and keeping the namespace instead of pulling symbols directly. Recommend switching to a module import (and adjusting usages) to avoid symbol pollution and improve readability.

```diff
-from ...lora_helper import LoraConfig, use_lora
+from ... import lora_helper  # access via lora_helper.LoraConfig / lora_helper.use_lora
```

Downstream references (e.g. lines 416-417) would become `lora_helper.use_lora(self, …)` and `lora_helper.LoraConfig`.
Same refactor applies to sibling model files touched in this PR.
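As an illustration of the suggested end state, a hypothetical model method after the switch (the class name and absolute-import form are stand-ins, not the real GPT code):

```python
from tensorrt_llm import lora_helper  # module import keeps the namespace


class SomeDecoderModel:
    def use_lora(self, lora_config: "lora_helper.LoraConfig") -> None:
        # Referencing symbols through the module makes their origin obvious
        # at every call site and avoids polluting this file's namespace.
        lora_helper.use_lora(self, lora_config)
```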
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- tensorrt_llm/__init__.py (2 hunks)
- tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
- tensorrt_llm/llmapi/build_cache.py (1 hunks)
- tensorrt_llm/llmapi/llm_args.py (1 hunks)
- tensorrt_llm/lora_helper.py (1 hunks)
- tensorrt_llm/lora_manager.py (3 hunks)
- tensorrt_llm/models/enc_dec/model.py (1 hunks)
- tensorrt_llm/models/gemma/model.py (1 hunks)
- tensorrt_llm/models/gpt/model.py (1 hunks)
- tensorrt_llm/models/grok/model.py (1 hunks)
- tensorrt_llm/models/llama/model.py (1 hunks)
- tensorrt_llm/models/mllama/model.py (1 hunks)
- tensorrt_llm/models/phi/model.py (1 hunks)
- tensorrt_llm/models/phi3/model.py (1 hunks)
- tensorrt_llm/models/qwen/model.py (1 hunks)
- tensorrt_llm/top_model_mixin.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the class docstring.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.
Files:
tensorrt_llm/models/phi3/model.py
tensorrt_llm/llmapi/build_cache.py
tensorrt_llm/models/grok/model.py
tensorrt_llm/models/gemma/model.py
tensorrt_llm/models/llama/model.py
tensorrt_llm/models/mllama/model.py
tensorrt_llm/models/gpt/model.py
tensorrt_llm/top_model_mixin.py
tensorrt_llm/models/phi/model.py
tensorrt_llm/models/qwen/model.py
tensorrt_llm/llmapi/llm_args.py
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/__init__.py
tensorrt_llm/models/enc_dec/model.py
tensorrt_llm/lora_manager.py
tensorrt_llm/lora_helper.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
tensorrt_llm/models/phi3/model.py
tensorrt_llm/llmapi/build_cache.py
tensorrt_llm/models/grok/model.py
tensorrt_llm/models/gemma/model.py
tensorrt_llm/models/llama/model.py
tensorrt_llm/models/mllama/model.py
tensorrt_llm/models/gpt/model.py
tensorrt_llm/top_model_mixin.py
tensorrt_llm/models/phi/model.py
tensorrt_llm/models/qwen/model.py
tensorrt_llm/llmapi/llm_args.py
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/__init__.py
tensorrt_llm/models/enc_dec/model.py
tensorrt_llm/lora_manager.py
tensorrt_llm/lora_helper.py
🧠 Learnings (5)
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tensorrt_llm/models/phi3/model.py
tensorrt_llm/llmapi/build_cache.py
tensorrt_llm/models/grok/model.py
tensorrt_llm/models/gemma/model.py
tensorrt_llm/models/llama/model.py
tensorrt_llm/models/mllama/model.py
tensorrt_llm/models/gpt/model.py
tensorrt_llm/top_model_mixin.py
tensorrt_llm/models/phi/model.py
tensorrt_llm/models/qwen/model.py
tensorrt_llm/llmapi/llm_args.py
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/__init__.py
tensorrt_llm/models/enc_dec/model.py
tensorrt_llm/lora_manager.py
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
tensorrt_llm/llmapi/build_cache.py
tensorrt_llm/models/llama/model.py
tensorrt_llm/models/mllama/model.py
tensorrt_llm/models/qwen/model.py
tensorrt_llm/llmapi/llm_args.py
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/__init__.py
tensorrt_llm/models/enc_dec/model.py
tensorrt_llm/lora_manager.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tensorrt_llm/llmapi/build_cache.py
tensorrt_llm/models/mllama/model.py
tensorrt_llm/llmapi/llm_args.py
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/__init__.py
tensorrt_llm/models/enc_dec/model.py
tensorrt_llm/lora_manager.py
📚 Learning: 2025-07-17T09:01:27.402Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
Applied to files:
tensorrt_llm/models/mllama/model.py
tensorrt_llm/top_model_mixin.py
tensorrt_llm/models/qwen/model.py
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/models/enc_dec/model.py
tensorrt_llm/lora_manager.py
tensorrt_llm/lora_helper.py
📚 Learning: 2025-07-22T09:22:14.726Z
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
Applied to files:
tensorrt_llm/__init__.py
tensorrt_llm/lora_manager.py
🪛 Ruff (0.12.2)
tensorrt_llm/__init__.py
36-36: Module level import not at top of file
(E402)
🔇 Additional comments (11)
tensorrt_llm/llmapi/build_cache.py (1)
15-15: Import path fix looks good

Importing `BuildConfig` directly from `tensorrt_llm.builder` decouples `build_cache.py` from the package root and avoids the circular-import chain that previously triggered when `tensorrt_llm/__init__.py` eagerly imported models. Nice catch. No further action needed.

tensorrt_llm/__init__.py (1)
36-36: Confirm import placement for circular dependency avoidance.

The `import tensorrt_llm._torch.models as torch_models` at line 36 is intentionally deferred past `_add_trt_llm_dll_directory()` and the xgrammar import, but `_torch` submodules also pull in top-level symbols, which can introduce circular imports. Please verify that this placement is required to break the cycle and add an inline comment explaining why it must come here. You may also consider moving it to the very end of the file or using a lazy import to make the dependency order explicit.

• File: tensorrt_llm/__init__.py, line 36
• Concern: potential circular import between `__init__.py` and `_torch.models`
• Action: confirm necessity of this import location and document the rationale in code
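One way to make the lazy-import option concrete is a module-level `__getattr__` (PEP 562, Python 3.7+). A sketch of how the deferral could look in `tensorrt_llm/__init__.py`, offered as an illustration under stated assumptions rather than the actual file:

```python
# tensorrt_llm/__init__.py (sketch)
import importlib
from typing import Any


def __getattr__(name: str) -> Any:
    """Resolve heavy optional attributes lazily (PEP 562)."""
    # The torch models are only imported when someone actually touches
    # tensorrt_llm.torch_models, so package import order no longer matters.
    if name == "torch_models":
        return importlib.import_module("tensorrt_llm._torch.models")
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```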
tensorrt_llm/lora_manager.py (2)

19-25: Clean refactoring of LoRA utilities.

The extraction of LoRA configuration and utilities to the dedicated `lora_helper` module is well-executed. The imports are properly organized and the delegation pattern is correctly implemented.

720-721: Proper delegation to helper function.

The static method now correctly delegates to the imported `get_missing_qkv_modules` function from `lora_helper`, maintaining the same interface while improving code organization.
tensorrt_llm/lora_helper.py (1)

60-79: Ignore circular dependency concern.

The `use_lora` function only imports `lora_manager` at runtime, and `lora_manager`'s module-level import of `lora_helper` does not trigger a loop—imports resolve cleanly at load time. No additional comments or documentation are needed for this pattern.

Likely an incorrect or invalid review comment.
tensorrt_llm/top_model_mixin.py (1)
18-18: Import update consistent with refactoring.

The import change from `.lora_manager` to `.lora_helper` is correct and consistent with the module reorganization.

tensorrt_llm/models/phi3/model.py (1)
11-11: Import update properly implemented.

The import change to use `lora_helper` for `LoraConfig` and `use_lora` is correct and maintains the same functionality while supporting the refactoring to avoid circular imports.

tensorrt_llm/models/mllama/model.py (1)
35-37: Import migration to lora_helper looks correct

Imports align with the refactor; local usage of LoraConfig/use_lora remains consistent. No logic change.
tensorrt_llm/llmapi/llm_args.py (1)
22-24: I can't access that file directly without cloning; could you please confirm the import and `__all__` section at the top of tensorrt_llm/lora_manager.py? Specifically:

- Whether you import from `.lora_helper` (relative) or from the full `tensorrt_llm.lora_helper`.
- The exact contents of the `__all__ = [...]` list—does it include `use_lora` alongside `LoraConfig` and `get_default_trtllm_modules_to_hf_modules`?

With that, I can verify the backward-compat shim fully covers all three symbols.
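For reference, the shim being asked about would look something like this at the top of `lora_manager.py` (a sketch; the actual import form and `__all__` contents are exactly what the question above seeks to confirm):

```python
# lora_manager.py (sketch of a backward-compat re-export shim)
from .lora_helper import (  # noqa: F401  (re-exported for compatibility)
    LoraConfig,
    get_default_trtllm_modules_to_hf_modules,
    use_lora,
)

__all__ = [
    "LoraConfig",
    "get_default_trtllm_modules_to_hf_modules",
    "use_lora",
]
```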
tensorrt_llm/models/enc_dec/model.py (1)
38-40: LGTM: LoRA imports migrated to lora_helper

Consistent with the refactor; EncoderModel/DecoderModel.use_lora still call use_lora correctly.
tensorrt_llm/models/qwen/model.py (1)
29-31: LGTM: LoRA imports migrated to lora_helper

Mapping construction and use_lora integration remain intact; no functional changes.
Signed-off-by: Rakib Hasan <[email protected]>
/bot run
/bot run
PR_Github #14518 [ run ] triggered by Bot
PR_Github #14518 [ run ] completed with state
/bot run
PR_Github #14560 [ run ] triggered by Bot
Signed-off-by: Rakib Hasan <[email protected]>
/bot run
Actionable comments posted: 3
🧹 Nitpick comments (6)
tensorrt_llm/lora_helper.py (6)
17-17: Minor: add Any to typing imports (used by improved type hints).

If you adopt the tightened type hints for use_lora, import Any.

```diff
-from typing import Dict, List, Optional
+from typing import Any, Dict, List, Optional
```
43-45: Add return type for get_default_trtllm_modules_to_hf_modules.

Improves readability and static checking.

```diff
-def get_default_trtllm_modules_to_hf_modules():
+def get_default_trtllm_modules_to_hf_modules() -> Dict[str, str]:
```
24-29: Polish docstring to Google style (Args/Returns) for public API.

Keeps docs consistent and Sphinx-friendly.

```diff
-    """Get missing QKV modules from LoRA target modules.
-
-    In current design, q_lora_params, k_lora_params and v_lora_params should be all enabled or
-    all disabled at the same time. However, some lora checkpoints (e.g. BART) only contain two of them,
-    so we use zero tensor to fill the missing ones.
-    """
+    """Get missing QKV modules from LoRA target modules.
+
+    In the current design, q_lora_params, k_lora_params and v_lora_params should be all
+    enabled or all disabled at the same time. However, some LoRA checkpoints (e.g., BART)
+    only contain two of them, so we use zero tensors to fill the missing ones.
+
+    Args:
+        lora_target_modules: LoRA-enabled module names present in the checkpoint.
+
+    Returns:
+        A list of missing module names among {attn_q, attn_k, attn_v} and
+        {cross_attn_q, cross_attn_k, cross_attn_v}, if applicable.
+    """
```
82-91: Add a class-level docstring for LoraConfig (public API).

Briefly documenting fields improves usability and aligns with guidelines.

```diff
 @dataclass
 class LoraConfig(DictConversion):
+    """Configuration for loading LoRA adapters.
+
+    Attributes:
+        lora_dir: One or more directories with LoRA checkpoints.
+        lora_ckpt_source: Source format of checkpoints ("hf" or "nemo").
+        max_lora_rank: Maximum LoRA rank to load.
+        lora_target_modules: Module names to LoRA-ize (e.g., attn_q/attn_k/...).
+        trtllm_modules_to_hf_modules: Optional mapping from TRT-LLM to HF names.
+        max_loras: Maximum number of concurrently loaded LoRAs (GPU).
+        max_cpu_loras: Maximum number of adapters cached on CPU.
+    """
```
98-101: Optional: add a docstring to missing_qkv_modules property.

Clarifies the computed value without reading the implementation.

```diff
 @property
 def missing_qkv_modules(self) -> List[str]:
+    """List of Q/K/V module names expected but absent in lora_target_modules."""
     return get_missing_qkv_modules_from_lora_modules(
         self.lora_target_modules)
```
16-19: Imports: prefer namespaced imports per repo guideline.

Guidelines say "Always maintain the namespace when importing in Python." Consider switching to namespaced imports in a follow-up to keep consistency (dataclasses, typing, and _utils). Not blocking, given this pattern is common in the repo.

Example:

```python
import dataclasses
import typing

from . import _utils


@dataclasses.dataclass
class LoraConfig(_utils.DictConversion):
    lora_dir: typing.List[str] = dataclasses.field(default_factory=list)
    ...
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tensorrt_llm/disaggregated_params.py (1 hunks)
- tensorrt_llm/lora_helper.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tensorrt_llm/disaggregated_params.py
🔇 Additional comments (2)
tensorrt_llm/lora_helper.py (2)
71-77: LGTM on dynamic imports within use_lora — avoids circular imports.

Importing loaders inside the function body is the right call to break cycles with lora_manager.

61-77: Repo-level import audit passed, except residual torch_models export needs removal

All LoRA APIs have been centralized in lora_helper, and there are no improper imports or cycles between lora_helper and lora_manager. However, the root package still exposes torch_models, re-introducing a cycle:

• tensorrt_llm/__init__.py line 36: `import tensorrt_llm._torch.models as torch_models`
• tensorrt_llm/__init__.py line 86: `'torch_models',`

Please remove (or relocate) these entries to break the import cycle.
PR_Github #14650 [ run ] triggered by Bot
PR_Github #14650 [ run ] completed with state
Signed-off-by: Rakib Hasan <[email protected]>
/bot run
PR_Github #14667 [ run ] triggered by Bot
Signed-off-by: rakib-hasan <[email protected]>
/bot run
PR_Github #14668 [ run ] triggered by Bot
PR_Github #14667 [ run ] completed with state
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tensorrt_llm/_torch/modules/fused_moe/quantization.py (1 hunks)
- tensorrt_llm/executor/worker.py (1 hunks)
- tensorrt_llm/llmapi/llm_args.py (1 hunks)
- tests/unittest/llmapi/test_llm_pytorch.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tensorrt_llm/llmapi/llm_args.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/unittest/llmapi/test_llm_pytorch.py
- tensorrt_llm/executor/worker.py
🔇 Additional comments (1)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (1)
1-1: Missing NVIDIA copyright header.

Production source files must include the NVIDIA copyright header with the current year. If this isn't auto-inserted by tooling, prepend the standard header. Example:

```python
# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
```

Confirm whether the repo enforces this via lint/checks; if not, please add it.
PR_Github #14668 [ run ] completed with state
Signed-off-by: Rakib Hasan <[email protected]>
/bot reuse-pipeline
PR_Github #14852 [ reuse-pipeline ] triggered by Bot
PR_Github #14852 [ reuse-pipeline ] completed with state
…ng torch models (NVIDIA#6720)" This reverts commit 7ab8112.
Importing torch models in the root `__init__.py` causes a circular import. This PR refactors some code to avoid such circular imports.
Summary by CodeRabbit
New Features
Refactor
Chores