Conversation

@Shunkangz Shunkangz (Collaborator) commented Aug 15, 2025

Summary by CodeRabbit

  • New Features

    • Per-request error reporting: RequestSpecificException with request_id and error_code, exposed to Python via a new exceptions submodule.
  • Bug Fixes

    • Map network/transfer failures to request-scoped coded exceptions; centralized error routing; safer handling of KV-cache transfers and future results.
    • Added explicit request error state for transfer failures.
  • Tests

    • Unit tests validating Python exception attributes, chaining, equality and C++→Python translation.
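
A minimal Python sketch of how the new per-request exception might surface to callers. The module path tensorrt_llm.bindings.exceptions and the request_id/error_code attributes follow the bindings and unit tests described in this PR; run_transfer and transfer_call are illustrative placeholders, not code from the change.

from tensorrt_llm.bindings.exceptions import RequestSpecificException

def run_transfer(transfer_call):
    """Invoke a bound KV-cache transfer API that may raise the new exception."""
    try:
        transfer_call()
    except RequestSpecificException as e:
        # request_id and error_code are attached by the C++ -> Python translator.
        print(f"request {e.request_id} failed with error code {e.error_code}: {e}")
        raise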

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
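
For example, /bot run --stage-list "A10-PyTorch-1" --disable-fail-fast launches a pipeline that runs only the listed stage with fail-fast disabled (stage name taken from the examples above).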

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of the tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of the tree.

@Shunkangz Shunkangz requested review from a team as code owners August 15, 2025 05:10

coderabbitai bot commented Aug 15, 2025

📝 Walkthrough

Walkthrough

Adds a per-request exception type and error codes in tensorrt_llm::common, maps network/std exceptions to that type in transfer code, exposes enum/exception and translators to Python (pybind + nanobind), routes Python-side KV-cache transfer errors to a centralized handler, and adds unit tests and build wiring.

Changes

Cohort / File(s) Summary
Common exceptions API
cpp/include/tensorrt_llm/common/tllmException.h, cpp/tensorrt_llm/common/tllmException.cpp
Add RequestErrorCode enum, kUNKNOWN_REQUEST_ID, RequestSpecificException class (ctor/dtor/getters) and TLLM_REQUEST_EXCEPTION(...) macro; include fixed-width integer headers and use PRIu64 for formatting.
Batch manager transceiver
cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp, cpp/tensorrt_llm/batch_manager/dataTransceiver.h
Include exceptions header; wrap TransferSession::send/recv calls with try/catch to convert std::exception to RequestSpecificException(..., kNETWORK_ERROR); catch RequestSpecificException earlier in requester/responder paths, log and translate to TLLM_REQUEST_EXCEPTION(...), then propagate via promises.
Cache transceiver (futures handling)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
Surround future.get() with try/catch in checkContextTransferStatus and checkGenTransferStatus; on exception, log and set the request state to DISAGG_TRANS_ERROR; on success, continue the existing state updates and perform the bandwidth update.
Executor UCX connection
cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp
Include tllmException.h and add local aliases for RequestSpecificException and RequestErrorCode; no public signature changes.
LlmRequest state
cpp/include/tensorrt_llm/batch_manager/llmRequest.h
Add kDISAGG_TRANS_ERROR = -1 to LlmRequestState enum to represent KV-cache transmission errors.
Python pybind bindings
cpp/tensorrt_llm/pybind/CMakeLists.txt, cpp/tensorrt_llm/pybind/bindings.cpp, cpp/tensorrt_llm/pybind/common/tllmExceptions.h, cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp
Add exceptions submodule and initExceptionsBindings; bind RequestErrorCode enum and RequestSpecificException; register translator to convert C++ RequestSpecificException into a Python exception instance and attach request_id/error_code; add new source to module SRCS.
Python nanobind bindings
cpp/tensorrt_llm/nanobind/CMakeLists.txt, cpp/tensorrt_llm/nanobind/bindings.cpp, cpp/tensorrt_llm/nanobind/common/tllmExceptions.h, cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
Add exceptions submodule and initExceptionsBindings; bind RequestErrorCode and RequestSpecificException; register translator to populate Python exception attributes and raise the bound Python exception; add source to nanobind SRCS.
PyExecutor (Python) error routing
tensorrt_llm/_torch/pyexecutor/py_executor.py
Add private wrappers _check_disagg_ctx_cache_transfer_status / _check_disagg_gen_cache_transfer_status that call transceiver status APIs and detect requests in DISAGG_TRANS_ERROR, routing them to _handle_errors; replace direct status calls with wrappers.
Tests
tests/unittest/others/test_exception.py
Add unit tests validating Python RequestSpecificException behavior: message, attributes (request_id, error_code), traceback, chaining and instance distinctions.
Build entries / wiring
cpp/tensorrt_llm/pybind/CMakeLists.txt, cpp/tensorrt_llm/nanobind/CMakeLists.txt
Add new exception-binding sources to respective module SRCS so bindings and translators are compiled into Python modules.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant PY as PyExecutor (Python)
  participant BND as Binding translator (pybind/nanobind)
  participant BM as DataRequester/Responder (C++)
  participant TS as TransferSession (send/recv)

  PY->>BND: call cache-transfer API
  BND->>BM: invoke requester/responder
  BM->>TS: perform send/recv
  TS-->>BM: throws std::exception (network)
  BM->>BM: convert to RequestSpecificException(request_id, NETWORK_ERROR)
  BM->>BND: propagate RequestSpecificException
  BND-->>PY: raise Python RequestSpecificException (request_id, error_code)
  PY-->PY: _handle_errors(error_msg, request_id)
sequenceDiagram
  autonumber
  participant PY as PyExecutor
  participant KT as kv_cache_transceiver

  PY->>PY: _check_disagg_*_cache_transfer_status()
  PY->>KT: check_*_transfer_status(atLeastNum)
  alt success
    KT-->>PY: status (ok)
  else RequestSpecificException
    KT-->>PY: RequestSpecificException (request_id, error_code)
    PY->>PY: _handle_errors(error_msg, request_id)
  else other Exception
    KT-->>PY: Exception
    PY->>PY: _handle_errors(error_msg)
  end
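
The flow above can be summarized in a small Python sketch. RequestSpecificException, the request_id attribute, and the _handle_errors(error_msg, request_ids) signature come from the diffs in this thread; check_transfer_status, executor, and status_call are hypothetical names used only for illustration.

from tensorrt_llm.bindings.exceptions import RequestSpecificException

def check_transfer_status(executor, status_call, at_least_num=0):
    """Route per-request transfer failures separately from system-level ones."""
    try:
        status_call(at_least_num)
    except RequestSpecificException as e:
        # Per-request failure: terminate only the offending request.
        executor._handle_errors(str(e), [e.request_id])
    except Exception as e:
        # Any other failure falls back to the global error path.
        executor._handle_errors(str(e))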

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

Community want to contribute

Suggested reviewers

  • chuangz0
  • DomBrown
  • chzblych
  • nv-guomingz


@Shunkangz Shunkangz removed the request for review from schetlur-nv August 15, 2025 05:17

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🔭 Outside diff range comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1158-1160: Fix broken _handle_errors call in _fetch_and_activate_new_requests

_handle_errors no longer accepts the requests= kwarg — that call will raise TypeError at runtime. Replace it to pass request_ids (the codebase uses request.request_id elsewhere).

  • File: tensorrt_llm/_torch/pyexecutor/py_executor.py — around line 1159

Apply:

-            except Exception as e:
-                self._handle_errors(str(e), requests=[request])
+            except Exception as e:
+                self._handle_errors(str(e), request_ids=[request.request_id])
                 return True
🧹 Nitpick comments (17)
cpp/tensorrt_llm/batch_manager/dataTransceiver.h (1)

154-162: Consistency: consider using NEW_TLLM_REQUEST_SPECIFIC_EXCEPTION_WITH_ERROR_CODE macro

For consistency with the rest of the codebase and to auto-capture file/line, prefer the macro wrapper over directly invoking the constructor. It also standardizes message formatting.

Also applies to: 167-175

cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp (1)

23-23: Remove unused include and aliases

tllmException.h and the RequestSpecificException/RequestErrorCode aliases are not used in this TU. Trim them to reduce compile overhead and avoid symbol pollution within the namespace.

Apply this diff:

-#include "tensorrt_llm/common/tllmException.h"
@@
-// Using declarations to shorten the code
-using RequestSpecificException = tensorrt_llm::common::RequestSpecificException;
-using RequestErrorCode = tensorrt_llm::common::RequestErrorCode;

Also applies to: 29-32

cpp/tensorrt_llm/pybind/common/tllmExceptions.h (2)

2-2: Update copyright year

Per coding guidelines, use the current year. Suggest updating 2022-2024 to 2025.

Apply this diff:

- * Copyright (c) 2022-2024, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

22-24: Remove unused namespace alias

The tc alias is unused in this header and leaks a global alias into every including TU. Drop it.

Apply this diff:

-namespace py = pybind11;
-namespace tc = tensorrt_llm::common;
+namespace py = pybind11;
tests/unittest/others/test_exception.py (3)

1-3: Add NVIDIA copyright header (current year)

All Python sources should include the NVIDIA header.

Apply this diff:

+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 import sys
 
-from tensorrt_llm.bindings.exceptions import RequestSpecificException
+from tensorrt_llm.bindings.exceptions import RequestSpecificException

16-23: Fix unreachable except block by ordering specific before base

Catching Exception first makes the RequestSpecificException except clause unreachable. Reverse the order to test both semantics.

Apply this diff:

 def test_exception_inheritance():
     """Test that exception properly inherits from base Exception."""
     try:
         raise RequestSpecificException("Test inheritance")
-    except Exception as e:  # Should catch base Exception
-        assert isinstance(e, RequestSpecificException)
-    except RequestSpecificException as e:  # Should also catch specific type
-        assert isinstance(e, RequestSpecificException)
+    except RequestSpecificException as e:  # Catch specific type
+        assert isinstance(e, RequestSpecificException)
+    except Exception as e:  # And also as base Exception
+        assert isinstance(e, Exception)

25-33: Optionally assert default attributes more strongly

If the binding sets defaults, consider asserting request_id and error_code are present and not None. Import RequestErrorCode if you plan to validate its type/value explicitly.

Apply this diff:

-from tensorrt_llm.bindings.exceptions import RequestSpecificException
+from tensorrt_llm.bindings.exceptions import RequestSpecificException, RequestErrorCode
@@
     except RequestSpecificException as e:
         # Check that attributes exist (they might be None if not set by C++)
         assert hasattr(e, 'request_id'), "request_id attribute missing"
         assert hasattr(e, 'error_code'), "error_code attribute missing"
+        # Optional stronger checks if translator sets defaults
+        # assert e.error_code in (None, RequestErrorCode.UNKNOWN_ERROR)
cpp/tensorrt_llm/common/tllmException.cpp (1)

1-3: Update copyright year.

Guidelines require current year. Please update 2022-2024 to 2022-2025.

cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp (1)

194-200: Consider releasing the session on failure to avoid leaks/stale state.

On exception, mSender->release(id) is not called. Verify whether release is required even on failure to prevent resource leaks or dangling counterpart state.

If release is safe in failure paths, add it guarded with try/catch:

         catch (tensorrt_llm::common::RequestSpecificException const& e)
         {
             TLLM_LOG_ERROR("Exception in sendAndRemoveResponse: %s ", e.what());
             auto new_exception
                 = NEW_TLLM_REQUEST_SPECIFIC_EXCEPTION_WITH_ERROR_CODE(id, e.getErrorCode(), "%s", e.what());
             resp.mPromise.set_exception(std::make_exception_ptr(new_exception));
+            try {
+                mSender->release(id);
+            } catch (std::exception const& ex) {
+                TLLM_LOG_WARNING("Release after failure also failed for request %zu: %s", static_cast<size_t>(id), ex.what());
+            }
         }
cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp (2)

33-41: Align Python exception’s fully-qualified name with its module.

You’re creating the type as "tensorrt_llm.RequestSpecificException" but adding it to the exceptions submodule. This leads to a mismatch between the qualified name and where it lives (tensorrt_llm.exceptions.RequestSpecificException).

Use the submodule’s name to build the exception’s FQN:

-    static PyObject* request_specific_exc
-        = PyErr_NewException("tensorrt_llm.RequestSpecificException", nullptr, nullptr);
+    const std::string exc_fqn = (std::string) py::str(m.attr("__name__")) + ".RequestSpecificException";
+    static PyObject* request_specific_exc = PyErr_NewException(exc_fqn.c_str(), nullptr, nullptr);

1-15: Update copyright year.

Please update to include 2025 to align with other files and guidelines.

cpp/include/tensorrt_llm/common/tllmException.h (2)

1-3: Update copyright year.

Bring the header to 2025 per repository guidelines.


17-24: Consider adding an include guard as per guidelines.

Guidelines prefer include guards named TRTLLM_TLLMEXCEPTION_H. The file currently uses #pragma once. If the codebase standardizes on guards, consider adding them.

Example:

#ifndef TRTLLM_TLLMEXCEPTION_H
#define TRTLLM_TLLMEXCEPTION_H
// ... contents ...
#endif // TRTLLM_TLLMEXCEPTION_H
tensorrt_llm/_torch/pyexecutor/py_executor.py (4)

1-1: Missing NVIDIA copyright header

Per coding guidelines, prepend the NVIDIA copyright header (current year) to all source files.

Apply at the top of the file:

+ # Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.

890-895: Assertion on empty schedule in non-disaggregated mode: verify this cannot trigger under valid conditions

The new assert will hard-fail if scheduled_batch.batch_size == 0 without a transceiver. This may legitimately happen (e.g., temporary resource contention or attention-DP balancing/pausing edge cases). Consider gating the assert to avoid false positives or converting it to a warning with telemetry.

Suggested change:

-        else:
-            assert scheduled_batch.batch_size > 0, (
-                "fail to schedule any pending request, "
-                "probably run out of resource.")
+        else:
+            # In non-disaggregated mode, assert we made forward progress;
+            # avoid false positives when attention DP is balancing.
+            if not self.enable_attention_dp:
+                assert scheduled_batch.batch_size > 0, (
+                    "fail to schedule any pending request, "
+                    "probably run out of resource."
+                )

1398-1417: Helper is solid; tighten annotations, adopt snake_case, and handle unknown request_id

  • Add a precise callable type annotation and return type.
  • Use snake_case for at_least_num to match Python conventions.
  • If an exception carries an unknown/untracked request_id, fall back to a global error (prevents stuck active requests).
-def _check_cache_transfer_status_helper(self,
-                                        method_name: str,
-                                        method_call,
-                                        atLeastNum: int = 0):
-    """Helper method to handle cache transfer status checking with error handling."""
-    try:
-        method_call(atLeastNum)
+def _check_cache_transfer_status_helper(self,
+                                        method_name: str,
+                                        method_call: "Callable[[int], None]",
+                                        at_least_num: int = 0) -> None:
+    """Handle KV cache transfer status checking with per-request error propagation."""
+    try:
+        method_call(at_least_num)
     except RequestSpecificException as e:
-        error_msg = str(e)
-        logger.error(
-            f"Encountered a request-specific error in {method_name}: {error_msg}"
-        )
-        request_ids = [e.request_id]
-        self._handle_errors(error_msg, request_ids)
+        error_msg = str(e)
+        logger.error(f"Encountered a request-specific error in {method_name}: {error_msg}")
+        # If the id is not known locally, treat as a global error to avoid a stuck state.
+        known_ids = {r.py_request_id for r in self.active_requests}
+        request_ids = [e.request_id] if getattr(e, "request_id", None) in known_ids else None
+        self._handle_errors(error_msg, request_ids)
     except Exception as e:
         error_msg = str(e)
         logger.error(
             f"Encountered a system error in {method_name}: {error_msg}")
         self._handle_errors(error_msg)

Note: using a quoted type for Callable avoids adding new imports; feel free to import Callable if preferred.


1418-1429: Nit: adopt snake_case for atLeastNum across wrappers

Aligns with Python naming conventions. Semantics unchanged.

-@nvtx_range("_check_disagg_ctx_cache_transfer_status")
-def _check_disagg_ctx_cache_transfer_status(self, atLeastNum: int = 0):
-    self._check_cache_transfer_status_helper(
-        "checking context transfer status",
-        self.kv_cache_transceiver.check_context_transfer_status, atLeastNum)
+@nvtx_range("_check_disagg_ctx_cache_transfer_status")
+def _check_disagg_ctx_cache_transfer_status(self, at_least_num: int = 0):
+    self._check_cache_transfer_status_helper(
+        "checking context transfer status",
+        self.kv_cache_transceiver.check_context_transfer_status, at_least_num)
 
-@nvtx_range("_check_disagg_gen_cache_transfer_status")
-def _check_disagg_gen_cache_transfer_status(self, atLeastNum: int = 0):
-    self._check_cache_transfer_status_helper(
-        "checking generation transfer status",
-        self.kv_cache_transceiver.check_gen_transfer_status, atLeastNum)
+@nvtx_range("_check_disagg_gen_cache_transfer_status")
+def _check_disagg_gen_cache_transfer_status(self, at_least_num: int = 0):
+    self._check_cache_transfer_status_helper(
+        "checking generation transfer status",
+        self.kv_cache_transceiver.check_gen_transfer_status, at_least_num)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 8e25225 and aa6d2b6.

📒 Files selected for processing (12)
  • cpp/include/tensorrt_llm/common/tllmException.h (2 hunks)
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (0 hunks)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp (3 hunks)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h (1 hunks)
  • cpp/tensorrt_llm/common/tllmException.cpp (1 hunks)
  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp (1 hunks)
  • cpp/tensorrt_llm/pybind/CMakeLists.txt (1 hunks)
  • cpp/tensorrt_llm/pybind/bindings.cpp (2 hunks)
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp (1 hunks)
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.h (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (8 hunks)
  • tests/unittest/others/test_exception.py (1 hunks)
💤 Files with no reviewable changes (1)
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/tensorrt_llm/pybind/common/tllmExceptions.h
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/common/tllmException.cpp
  • cpp/include/tensorrt_llm/common/tllmException.h
  • cpp/tensorrt_llm/pybind/bindings.cpp
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/tensorrt_llm/pybind/common/tllmExceptions.h
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/common/tllmException.cpp
  • cpp/include/tensorrt_llm/common/tllmException.h
  • cpp/tensorrt_llm/pybind/bindings.cpp
**/*.{h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Header files must use include guards named TRTLLM_<FILENAME>_H without underscores prefix/suffix (e.g., TRTLLM_FOO_BAR_HELLO_H)

Files:

  • cpp/tensorrt_llm/pybind/common/tllmExceptions.h
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • cpp/tensorrt_llm/pybind/common/tllmExceptions.h
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • tests/unittest/others/test_exception.py
  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/common/tllmException.cpp
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • cpp/include/tensorrt_llm/common/tllmException.h
  • cpp/tensorrt_llm/pybind/bindings.cpp
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/unittest/others/test_exception.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
**/*.{cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu}: Avoid literal values except for 0, nullptr, true, false; use named constexpr for other literals
Place semicolon of empty for/while loop on a new line
Always use brace-delimited bodies for switch/while/do-for/if/else
Use inline C comments in argument lists when parameter meaning is unclear (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) ... is forbidden)
Switch on enums should enumerate all values and omit default to catch new values at compile time
Structure switch statements; prohibit fallthrough except between empty cases; each case ends with break or throw; return at end of case not allowed; put break inside braces for compound case
Prefer anonymous namespaces over static for internal linkage of functions
Every defined function must be called at least once (no unused methods)

Files:

  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/common/tllmException.cpp
  • cpp/tensorrt_llm/pybind/bindings.cpp
🧠 Learnings (1)
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
PR: NVIDIA/TensorRT-LLM#6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
🔇 Additional comments (12)
cpp/tensorrt_llm/pybind/CMakeLists.txt (1)

17-17: LGTM: exceptions binding added to pybind SRCS

The addition of common/tllmExceptions.cpp to SRCS looks correct and integrates the new exceptions submodule into the build.

cpp/tensorrt_llm/pybind/bindings.cpp (3)

33-33: LGTM: exception bindings header inclusion is appropriate.

The new include is correctly placed alongside other pybind includes.


122-122: LGTM: dedicated exceptions submodule is a good separation.

Creating a dedicated "exceptions" submodule keeps the surface organized.


126-126: LGTM: exceptions bindings wired in correctly.

Initialization via tensorrt_llm::pybind::common::initExceptionsBindings(mExceptions) cleanly scopes the bindings.

cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp (1)

25-25: LGTM: include per-request exception types.

This inclusion is necessary for the new exception handling.

cpp/include/tensorrt_llm/common/tllmException.h (2)

35-47: LGTM: error code enum and unknown request ID.

Enum values and constant follow naming conventions and are scoped correctly.


66-81: LGTM: request-specific exception API.

Constructor, accessors, and member naming align with guidelines (mPrefix, UpperCamelCase types).

tensorrt_llm/_torch/pyexecutor/py_executor.py (5)

25-25: Good: importing RequestSpecificException for per-request error handling

This enables granular error propagation for KV-cache transfer errors.


690-691: Good: route context transfer checks through centralized wrapper

Switching to _check_disagg_ctx_cache_transfer_status(1) consolidates error handling and keeps KV-cache transfer exceptions localized.


1265-1266: Good: centralized generation transfer status check

Delegating to _check_disagg_gen_cache_transfer_status(at_least_num) keeps the transfer error handling uniform.


1368-1369: Good: consistent wrapper usage for gen transfer check

This aligns gen-transfer status checks with the new helper path.


1388-1389: Good: consistent wrapper usage for ctx transfer check

Same benefit as above; keeps error handling centralized.

@Shixiaowei02 Shixiaowei02 (Collaborator) left a comment


Thanks for the hard work. I have no further comments. I think the robot's checks above are correct. Please make sure to apply the changes.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (3)
cpp/tensorrt_llm/batch_manager/dataTransceiver.h (2)

154-162: Missing include for RequestSpecificException

The code uses common::RequestSpecificException and common::RequestErrorCode::kNETWORK_ERROR, but the header doesn't include the required tensorrt_llm/common/tllmException.h. Relying on transitive includes can lead to build failures.

Add the missing include near the other includes (outside the selected range):

#include "tensorrt_llm/common/tllmException.h"

167-175: Missing include for RequestSpecificException

Same issue as above - the required header is not included.

cpp/include/tensorrt_llm/common/tllmException.h (1)

29-31: Consider using a shorter macro name

As suggested by Shixiaowei02 in a previous review, the macro name NEW_TLLM_REQUEST_SPECIFIC_EXCEPTION_WITH_ERROR_CODE is quite long. Consider a more concise name for better readability.

-#define NEW_TLLM_REQUEST_SPECIFIC_EXCEPTION_WITH_ERROR_CODE(requestID, errorCode, ...)                                 \
+#define TLLM_REQUEST_EXCEPTION(requestID, errorCode, ...)                                                             \
     tensorrt_llm::common::RequestSpecificException(                                                                    \
         __FILE__, __LINE__, tensorrt_llm::common::fmtstr(__VA_ARGS__).c_str(), requestID, errorCode)

Note: If you apply this change, you'll need to update all usages of this macro throughout the codebase.

🧹 Nitpick comments (2)
cpp/tensorrt_llm/nanobind/common/tllmExceptions.h (1)

17-17: Consider using include guards instead of #pragma once

The coding guidelines specify that header files must use include guards named TRTLLM_<FILENAME>_H without underscores prefix/suffix. Replace #pragma once with traditional include guards.

-#pragma once
+#ifndef TRTLLM_TLLMEXCEPTIONS_H
+#define TRTLLM_TLLMEXCEPTIONS_H
+
 #include <nanobind/nanobind.h>

 namespace nb = nanobind;

 namespace tensorrt_llm::nanobind::common
 {

 void initExceptionsBindings(nb::module_& m);

 } // namespace tensorrt_llm::nanobind::common
+
+#endif // TRTLLM_TLLMEXCEPTIONS_H
cpp/include/tensorrt_llm/common/tllmException.h (1)

37-44: Consider more specific error code categories

The current error code categories are quite broad. Consider adding more specific error codes for better diagnostics and error handling granularity. For example:

  • Connection timeout vs. connection refused
  • Data corruption vs. protocol mismatch
  • Authentication failures

This would help users and developers better understand and handle specific error scenarios.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between aa6d2b6 and c42b0c8.

📒 Files selected for processing (6)
  • cpp/include/tensorrt_llm/common/tllmException.h (2 hunks)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h (1 hunks)
  • cpp/tensorrt_llm/common/tllmException.cpp (1 hunks)
  • cpp/tensorrt_llm/nanobind/bindings.cpp (2 hunks)
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp (1 hunks)
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/tensorrt_llm/common/tllmException.cpp
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/nanobind/bindings.cpp
  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/nanobind/bindings.cpp
  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Header files must use include guards named TRTLLM_<FILENAME>_H without underscores prefix/suffix (e.g., TRTLLM_FOO_BAR_HELLO_H)

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/nanobind/bindings.cpp
  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu}: Avoid literal values except for 0, nullptr, true, false; use named constexpr for other literals
Place semicolon of empty for/while loop on a new line
Always use brace-delimited bodies for switch/while/do-for/if/else
Use inline C comments in argument lists when parameter meaning is unclear (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) ... is forbidden)
Switch on enums should enumerate all values and omit default to catch new values at compile time
Structure switch statements; prohibit fallthrough except between empty cases; each case ends with break or throw; return at end of case not allowed; put break inside braces for compound case
Prefer anonymous namespaces over static for internal linkage of functions
Every defined function must be called at least once (no unused methods)

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
  • cpp/tensorrt_llm/nanobind/bindings.cpp
🧠 Learnings (1)
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
PR: NVIDIA/TensorRT-LLM#6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
🔇 Additional comments (2)
cpp/tensorrt_llm/nanobind/bindings.cpp (1)

39-39: LGTM! Exception bindings properly integrated

The exception module is correctly created and initialized in the appropriate sequence after runtime initialization.

Also applies to: 130-130, 134-134

cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp (1)

36-37: No change needed — static exception object is safe

Function-local static initialization is thread-safe in C++11+, and this binding is created as part of module initialization (under Python import/GIL). The repository uses the same pattern elsewhere and tests exercise the exception translation, so I don't see a thread-safety issue requiring changes.

Files to note:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp — static nb::object request_specific_exc (line ~36)
  • cpp/tensorrt_llm/nanobind/batch_manager/bindings.cpp — static nb::object peft_exc / lora_exc (lines ~58–59)
  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp — PyErr_NewException usage (line ~35)
  • tests/unittest/others/test_exception.py — tests that exercise RequestSpecificException

No code diff required.

@Shunkangz (Collaborator, Author) commented:

/bot run

@Shunkangz Shunkangz force-pushed the request_specific_exception branch from ee1be11 to bd14b1d on August 15, 2025 07:57
@Shunkangz (Collaborator, Author) commented:

/bot run


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
cpp/include/tensorrt_llm/common/tllmException.h (1)

29-31: Provide a shorter alias for the macro (addresses prior feedback)

Echoing the earlier comment about macro length, consider adding a concise alias while keeping the current name for backward compatibility.

Apply this diff to add a short alias:

 #define TLLM_REQUEST_EXCEPTION(requestID, errorCode, ...)                                                              \
     tensorrt_llm::common::RequestSpecificException(                                                                    \
         __FILE__, __LINE__, tensorrt_llm::common::fmtstr(__VA_ARGS__).c_str(), requestID, errorCode)
+
+// Short alias to reduce verbosity at call sites, keeps the original for compatibility.
+#define TLLM_REQ_EXCEPTION(requestID, errorCode, ...)                                                                  \
+    TLLM_REQUEST_EXCEPTION(requestID, errorCode, __VA_ARGS__)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1539-1561: Fix: _handle_errors builds wrong structure and mutates active_requests during iteration

  • It constructs a dict but _enqueue_responses expects an iterable of (req_id, LlmResponse) tuples. This will raise “cannot unpack non-iterable int object.”
  • It removes from self.active_requests while iterating over it, causing skipped elements and inconsistent state.

This was raised previously and is still present. Also, the AI summary claims call sites were updated to use request_ids but _respond_if_invalid still passes requests=[request].

Apply this diff to fix structure and removal logic:

-    def _handle_errors(self,
-                       error_msg: Optional[str] = None,
-                       request_ids: Optional[List[int]] = None):
-        error_responses = {}
-        error_msg = error_msg or "error"
-        for request in self.active_requests:
-            if request_ids is not None and request.py_request_id not in request_ids:
-                continue
-            req_id = request.py_request_id
-            request.state = LlmRequestState.GENERATION_COMPLETE
-            self._terminate_request(request)
-            error_responses[req_id] = LlmResponse(
-                request_id=req_id,
-                error_msg=error_msg,
-                client_id=request.py_client_id)
-
-        if request_ids is not None:
-            req_id_set = set(request_ids)
-            for request in self.active_requests:
-                if request.py_request_id in req_id_set:
-                    self.active_requests.remove(request)
-        else:
-            self.active_requests.clear()
-        self._enqueue_responses(error_responses)
+    def _handle_errors(
+        self,
+        error_msg: Optional[str] = None,
+        request_ids: Optional[List[int]] = None,
+    ) -> None:
+        error_msg = error_msg or "error"
+        error_responses: List[Tuple[int, LlmResponse]] = []
+        matched_requests: List[LlmRequest] = []
+        for request in self.active_requests:
+            if request_ids is not None and request.py_request_id not in request_ids:
+                continue
+            matched_requests.append(request)
+            req_id = request.py_request_id
+            request.state = LlmRequestState.GENERATION_COMPLETE
+            self._terminate_request(request)
+            error_responses.append((
+                req_id,
+                LlmResponse(
+                    request_id=req_id,
+                    error_msg=error_msg,
+                    client_id=request.py_client_id,
+                ),
+            ))
+
+        if request_ids is not None:
+            req_id_set = set(request_ids)
+            self.active_requests = [
+                req for req in self.active_requests
+                if req.py_request_id not in req_id_set
+            ]
+        else:
+            self.active_requests.clear()
+        self._enqueue_responses(error_responses)

Additionally, update call sites still using the old parameter name. For example, in _respond_if_invalid:

# before
self._handle_errors(str(e), requests=[request])

# after
self._handle_errors(str(e), request_ids=[request.py_request_id])

I can send a targeted patch for other call sites if you prefer. Do you want me to sweep the repo for any remaining uses of the old “requests=” keyword and open a follow-up PR?

🧹 Nitpick comments (6)
cpp/include/tensorrt_llm/common/tllmException.h (3)

36-44: Enum design looks good; optional: consider signed underlying type

The reserved ranges and explicit underlying type are clear. If you want to strictly follow the “prefer signed integers” guideline, you could use int32_t as the underlying type. If the bindings or formatting pathways expect unsigned, keeping uint32_t is fine as-is.


67-83: Make members const and align identifier casing (Id vs ID); tighten pointer constness

Minor polish to improve const-correctness and naming consistency with lowerCamelCase:

  • Make mRequestID and mErrorCode const (they are immutable after construction).
  • Use requestId for parameter/member naming consistency with lowerCamelCase.
  • Use char const* const for the message pointer (pointer itself is not modified).
-class RequestSpecificException : public std::runtime_error
+class RequestSpecificException : public std::runtime_error
 {
 public:
     explicit RequestSpecificException(
-        std::string const& file, std::size_t line, char const* msg, uint64_t requestID, RequestErrorCode errorCode);
+        std::string const& file, std::size_t line, char const* const msg, uint64_t requestId, RequestErrorCode errorCode);
 
     ~RequestSpecificException() noexcept override;
 
     [[nodiscard]] uint64_t getRequestId() const noexcept;
 
     [[nodiscard]] RequestErrorCode getErrorCode() const noexcept;
 
 private:
-    uint64_t mRequestID;
-    RequestErrorCode mErrorCode;
+    uint64_t const mRequestId;
+    RequestErrorCode const mErrorCode;
 };

Note: Update the corresponding definitions in cpp/tensorrt_llm/common/tllmException.cpp and any call sites to use requestId/mRequestId.


67-83: Consider deriving from TllmException for stack-trace parity

If you want per-request exceptions to carry the same stack trace/debugging features as TllmException, consider inheriting from TllmException instead of std::runtime_error. This keeps a consistent exception hierarchy and tooling.

Illustrative header diff:

-class RequestSpecificException : public std::runtime_error
+class RequestSpecificException : public TllmException
 {
 public:
     explicit RequestSpecificException(
-        std::string const& file, std::size_t line, char const* msg, uint64_t requestID, RequestErrorCode errorCode);
+        std::string const& file, std::size_t line, char const* const msg, uint64_t requestId, RequestErrorCode errorCode);

The cpp ctor would then call TllmException(file.c_str(), line, composedMsg) as the base, while still storing requestId and errorCode. If minimizing overhead is a goal, keeping std::runtime_error is reasonable—this is an optional alignment decision.

tensorrt_llm/_torch/pyexecutor/py_executor.py (3)

11-11: Add missing typing imports (Callable, Any) and ensure file has NVIDIA copyright header

  • You’ll likely want Callable/Any for the helper’s type hints below.
  • Per coding guidelines, prepend the NVIDIA copyright header (current year) to this file.

Apply this diff to extend typing imports:

-from typing import Iterable, List, Optional, Tuple, Union
+from typing import Iterable, List, Optional, Tuple, Union, Callable, Any

891-895: Prefer explicit exception over assert for production behavior

assert can be stripped with Python optimizations (-O), silently removing the check. Throw a RuntimeError (or a domain-specific exception) instead.

-        else:
-            assert scheduled_batch.batch_size > 0, (
-                "fail to schedule any pending request, "
-                "probably run out of resource.")
+        else:
+            if scheduled_batch.batch_size <= 0:
+                raise RuntimeError(
+                    "fail to schedule any pending request, probably run out of resource."
+                )

1398-1429: Solid centralization; align naming with snake_case and add minimal typing/logging enhancements

  • Rename atLeastNum to at_least_num (PEP8).
  • Type method_call to Callable[[int], Any] and add a return type.
  • Optionally log error_code if present on the exception to aid triage.
-    def _check_cache_transfer_status_helper(self,
-                                            method_name: str,
-                                            method_call,
-                                            atLeastNum: int = 0):
-        """Helper method to handle cache transfer status checking with error handling."""
+    def _check_cache_transfer_status_helper(
+        self,
+        method_name: str,
+        method_call: Callable[[int], Any],
+        at_least_num: int = 0,
+    ) -> None:
+        """Helper to check cache-transfer status with per-request and system error handling."""
         try:
-            method_call(atLeastNum)
+            method_call(at_least_num)
         except RequestSpecificException as e:
             error_msg = str(e)
-            logger.error(
-                f"Encountered a request-specific error in {method_name}: {error_msg}"
-            )
-            request_ids = [e.request_id]
-            self._handle_errors(error_msg, request_ids)
+            # Log error_code if exposed by the binding (best-effort)
+            err_code = getattr(e, "error_code", None)
+            rid = getattr(e, "request_id", None)
+            extra = f" [request_id={rid}, error_code={err_code}]" if rid is not None else ""
+            logger.error(f"Encountered a request-specific error in {method_name}: {error_msg}{extra}")
+            self._handle_errors(error_msg, [rid] if rid is not None else None)
         except Exception as e:
             error_msg = str(e)
             logger.error(
                 f"Encountered a system error in {method_name}: {error_msg}")
             self._handle_errors(error_msg)
 
-    @nvtx_range("_check_disagg_ctx_cache_transfer_status")
-    def _check_disagg_ctx_cache_transfer_status(self, atLeastNum: int = 0):
+    @nvtx_range("_check_disagg_ctx_cache_transfer_status")
+    def _check_disagg_ctx_cache_transfer_status(self, at_least_num: int = 0) -> None:
         self._check_cache_transfer_status_helper(
             "checking context transfer status",
-            self.kv_cache_transceiver.check_context_transfer_status, atLeastNum)
+            self.kv_cache_transceiver.check_context_transfer_status, at_least_num)
 
-    @nvtx_range("_check_disagg_gen_cache_transfer_status")
-    def _check_disagg_gen_cache_transfer_status(self, atLeastNum: int = 0):
+    @nvtx_range("_check_disagg_gen_cache_transfer_status")
+    def _check_disagg_gen_cache_transfer_status(self, at_least_num: int = 0) -> None:
         self._check_cache_transfer_status_helper(
             "checking generation transfer status",
-            self.kv_cache_transceiver.check_gen_transfer_status, atLeastNum)
+            self.kv_cache_transceiver.check_gen_transfer_status, at_least_num)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c42b0c8 and ee1be11.

📒 Files selected for processing (4)
  • cpp/include/tensorrt_llm/common/tllmException.h (2 hunks)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp (3 hunks)
  • cpp/tensorrt_llm/common/tllmException.cpp (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
  • cpp/tensorrt_llm/common/tllmException.cpp
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/_torch/pyexecutor/py_executor.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Header files must use include guards named TRTLLM_<FILE_NAME>_H without underscores prefix/suffix (e.g., TRTLLM_FOO_BAR_HELLO_H)

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
🧬 Code Graph Analysis (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (5)
cpp/include/tensorrt_llm/common/tllmException.h (2)
  • tensorrt_llm (33-84)
  • RequestSpecificException (67-82)
tensorrt_llm/logger.py (1)
  • error (125-126)
tensorrt_llm/_utils.py (1)
  • nvtx_range (843-862)
tensorrt_llm/_torch/pyexecutor/kv_cache_transceiver.py (4)
  • check_context_transfer_status (81-82)
  • check_context_transfer_status (119-120)
  • check_gen_transfer_status (85-86)
  • check_gen_transfer_status (122-123)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
  • LlmResponse (261-268)
cpp/include/tensorrt_llm/common/tllmException.h (3)
cpp/tensorrt_llm/batch_manager/dataTransceiver.h (3)
  • tensorrt_llm (34-163)
  • nodiscard (220-248)
  • `` (64-67)
cpp/tensorrt_llm/pybind/common/tllmExceptions.h (1)
  • tensorrt_llm (25-32)
cpp/tensorrt_llm/common/tllmException.cpp (6)
  • RequestSpecificException (110-117)
  • RequestSpecificException (119-119)
  • getRequestId (121-124)
  • getRequestId (121-121)
  • getErrorCode (126-129)
  • getErrorCode (126-126)
🔇 Additional comments (7)
cpp/include/tensorrt_llm/common/tllmException.h (2)

21-22: Include additions are correct and necessary

Using <cstdint> (not <stdint.h>) and <limits> aligns with the guidelines and supports the new enum and kUNKNOWN_REQUEST_ID.


46-47: LGTM: clear sentinel for unknown request ID

Using std::numeric_limits<uint64_t>::max() is a robust sentinel choice and fits the naming/style guidelines for constants.

tensorrt_llm/_torch/pyexecutor/py_executor.py (5)

690-691: LGTM: switched to centralized context-transfer status check

Using _check_disagg_ctx_cache_transfer_status(1) here is consistent with the new centralized wrapper.


1265-1266: LGTM: generation-transfer status routed through helper

This aligns with the abstraction and avoids duplicating try/except/logging.


1368-1369: LGTM: disagg gen status check with conditional at_least_num consolidation

The helper usage reads clearly and matches the intent.


1388-1389: LGTM: centralized context-transfer status check after send

Good placement to surface request-specific errors immediately after initiating context transfer.


25-25: RequestSpecificException bindings expose request_id and error_code — optional log suggestion

Verified: both pybind and nanobind bindings add request_id and error_code (class attrs) and the exception translators set instance attrs from getRequestId() / getErrorCode(); tests also check the attributes exist.

Relevant files:

  • cpp/tensorrt_llm/pybind/common/tllmExceptions.cpp — sets request_id/error_code on the Python exception class and sets instance attrs in the translator.
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp — same for nanobind.
  • cpp/tensorrt_llm/common/tllmException.cpp — getRequestId() / getErrorCode() implementations.
  • tests/unittest/others/test_exception.py — asserts hasattr(e, 'request_id') / hasattr(e, 'error_code').
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (catch at ~line 1405) — current catch logs the message; consider adding error_code for diagnosability.

Optional logging suggestion (safe when attributes may be None):
logger.error(
    f"RequestSpecificException: {error_msg} "
    f"(request_id={getattr(e, 'request_id', None)}, "
    f"error_code={getattr(e, 'error_code', None)})"
)

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15419 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15419 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11621 completed with status: 'FAILURE'

@Shunkangz Shunkangz force-pushed the request_specific_exception branch from bd14b1d to fd0e36e Compare August 15, 2025 09:03
@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15426 [ run ] triggered by Bot

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
cpp/include/tensorrt_llm/common/tllmException.h (1)

40-43: Add a short alias and a throw-helper for request exceptions (echoing prior feedback on macro length)

A shorter alias improves ergonomics, and a throw-helper matches TLLM_THROW. Keep the existing macro for compatibility and add non-breaking helpers.

 #define TLLM_REQUEST_EXCEPTION(requestID, errorCode, ...)                                                              \
     tensorrt_llm::common::RequestSpecificException(                                                                    \
         __FILE__, __LINE__, tensorrt_llm::common::fmtstr(__VA_ARGS__).c_str(), requestID, errorCode)
+
+// Short alias and throw-helper for request-scoped exceptions.
+#define TLLM_REQ_EXCEPTION(requestId, errorCode, ...)                                                                  \
+    TLLM_REQUEST_EXCEPTION(requestId, errorCode, __VA_ARGS__)
+#define TLLM_THROW_REQUEST(requestId, errorCode, ...)                                                                  \
+    do                                                                                                                 \
+    {                                                                                                                  \
+        throw TLLM_REQ_EXCEPTION(requestId, errorCode, __VA_ARGS__);                                                   \
+    } while (0)
🧹 Nitpick comments (3)
cpp/include/tensorrt_llm/common/tllmException.h (3)

17-17: Guideline check: include guards vs. pragma once

The project guideline mandates header include guards named TRTLLM_<FILE_NAME>_H. This file uses only #pragma once. If the codebase standardizes on #pragma once, ignore this; otherwise, consider adding guards for consistency and portability.

Example guard for this file:

#ifndef TRTLLM_TLLM_EXCEPTION_H
#define TRTLLM_TLLM_EXCEPTION_H
// ... file content ...
#endif // TRTLLM_TLLM_EXCEPTION_H

47-47: Use Doxygen //! per project style

Switch /// @brief comments to //! to match documented style.

-/// @brief Enumeration of different error codes for request-specific exceptions
+//! Enumeration of different error codes for request-specific exceptions
@@
-/// @brief Constant for unknown request ID
+//! Constant for unknown request ID

Also applies to: 57-57


88-104: Align “Id” casing and constructor signature; consider consistency with TllmException

  • Naming: method is getRequestId() but members/params use requestID/mRequestID. Prefer lowerCamelCase Id everywhere.
  • Signature: file is char const* in TllmException but std::string const& here. Using char const* avoids constructing a temporary std::string for __FILE__ and keeps APIs consistent.

Apply in-header changes:

 class RequestSpecificException : public std::runtime_error
 {
 public:
-    explicit RequestSpecificException(
-        std::string const& file, std::size_t line, char const* msg, uint64_t requestID, RequestErrorCode errorCode);
+    explicit RequestSpecificException(
+        char const* file, std::size_t line, char const* msg, uint64_t requestId, RequestErrorCode errorCode);
@@
-    [[nodiscard]] uint64_t getRequestId() const noexcept;
+    [[nodiscard]] uint64_t getRequestId() const noexcept;
@@
 private:
-    uint64_t mRequestID;
+    uint64_t mRequestId;
     RequestErrorCode mErrorCode;
 };

And update the implementation accordingly (outside this header):

// cpp/tensorrt_llm/common/tllmException.cpp
// Constructor signature change (file: char const*, requestId name) and member rename.
RequestSpecificException::RequestSpecificException(
    char const* file, std::size_t line, char const* msg, uint64_t requestId, RequestErrorCode errorCode)
    : std::runtime_error{fmtstr("%s (Request ID: %" PRIu64 ", Error Code: %u) (%s:%zu)",
          msg, requestId, static_cast<uint32_t>(errorCode), file, line)}
    , mRequestId{requestId}
    , mErrorCode{errorCode}
{}

Optional design note: if much of the codebase catches TllmException to leverage backtraces, consider deriving RequestSpecificException from TllmException instead of std::runtime_error for consistency. If consumers already catch std::exception or you wish to avoid callstack capture overhead, the current base is fine—just confirm catch sites.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ee1be11 and fd0e36e.

📒 Files selected for processing (5)
  • cpp/include/tensorrt_llm/common/tllmException.h (3 hunks)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp (3 hunks)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h (1 hunks)
  • cpp/tensorrt_llm/common/tllmException.cpp (2 hunks)
  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.h
  • cpp/tensorrt_llm/common/tllmException.cpp
  • cpp/tensorrt_llm/executor/cache_transmission/ucx_utils/connection.cpp
  • cpp/tensorrt_llm/batch_manager/dataTransceiver.cpp
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Header files must use include guards named TRTLLM_<FILE_NAME>_H without underscores prefix/suffix (e.g., TRTLLM_FOO_BAR_HELLO_H)

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • cpp/include/tensorrt_llm/common/tllmException.h
🧬 Code Graph Analysis (1)
cpp/include/tensorrt_llm/common/tllmException.h (3)
cpp/tensorrt_llm/batch_manager/dataTransceiver.h (2)
  • tensorrt_llm (34-163)
  • `` (64-67)
cpp/tensorrt_llm/pybind/common/tllmExceptions.h (1)
  • tensorrt_llm (25-32)
cpp/tensorrt_llm/common/tllmException.cpp (6)
  • RequestSpecificException (110-117)
  • RequestSpecificException (119-119)
  • getRequestId (121-124)
  • getRequestId (121-121)
  • getErrorCode (126-129)
  • getErrorCode (126-126)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
cpp/include/tensorrt_llm/common/tllmException.h (2)

23-24: Good: required standard headers added

<cstdint> and <limits> are appropriate for fixed-width types and std::numeric_limits used below.


47-56: LGTM: strongly-typed request error codes

Scoped enum with explicit underlying type is good; values and documented ranges are clear.

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15426 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11627 completed with status: 'FAILURE'

@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp (1)

54-56: Cast error_code to the bound Python enum, not uint32_t.

This preserves enum semantics on the Python side and matches how RequestErrorCode is exported.

-                nb::object error_code = nb::cast(static_cast<uint32_t>(e.getErrorCode()));
+                nb::object error_code = nb::cast(e.getErrorCode());
🧹 Nitpick comments (3)
cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp (3)

1-15: Update copyright year to current year (2025).

Coding guidelines require prepending the NVIDIA copyright header with the current year. Please update 2022-2024 → 2022-2025.

- * Copyright (c) 2022-2024, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2022-2025, NVIDIA CORPORATION.  All rights reserved.

36-41: Avoid function-local static nb::object; use lowerCamelCase and remove the lifetime hazard.

  • Function-local static nb::object can have teardown ordering issues at interpreter shutdown (nb::object dtor after Python finalization). It’s safer not to keep it static.
  • Local variable naming should use lowerCamelCase (coding guidelines).
  • You can avoid holding the type at all by using the instance’s type in PyErr_SetObject.

Apply this diff:

-    // Create the RequestSpecificException Python exception class
-    static nb::object request_specific_exc = nb::exception<tc::RequestSpecificException>(m, "RequestSpecificException");
+    // Create the RequestSpecificException Python exception class
+    nb::object requestSpecificExc = nb::exception<tc::RequestSpecificException>(m, "RequestSpecificException");

-    // Add attributes to the Python exception class
-    request_specific_exc.attr("request_id") = nb::none();
-    request_specific_exc.attr("error_code") = nb::none();
+    // Add attributes to the Python exception class
+    requestSpecificExc.attr("request_id") = nb::none();
+    requestSpecificExc.attr("error_code") = nb::none();

-                PyErr_SetObject(request_specific_exc.ptr(), py_exc.ptr());
+                // Use the actual exception instance type to raise
+                PyErr_SetObject(reinterpret_cast<PyObject*>(Py_TYPE(py_exc.ptr())), py_exc.ptr());

Also applies to: 62-62


42-65: Optional: simplify translator signature by removing the unused void* argument.

If nanobind in this repo supports the single-argument translator overload, dropping the unused parameter clarifies intent. Otherwise, ignore.

-    nb::register_exception_translator(
-        [](std::exception_ptr const& p, void*)
+    nb::register_exception_translator(
+        [](std::exception_ptr const& p)
         {
             try
             {
                 if (p)
                     std::rethrow_exception(p);
             }
             catch (const tc::RequestSpecificException& e)
             {
                 // ...
             }
-        });
+        });
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between fd0e36e and 59ffd57.

📒 Files selected for processing (3)
  • cpp/tensorrt_llm/nanobind/CMakeLists.txt (1 hunks)
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp (1 hunks)
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.h
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
**/*.{cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu}: Avoid literal values except for 0, nullptr, true, false; use named constexpr for other literals
Place semicolon of empty for/while loop on a new line
Always use brace-delimited bodies for switch/while/do-for/if/else
Use inline C comments in argument lists when parameter meaning is unclear (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) ... is forbidden)
Switch on enums should enumerate all values and omit default to catch new values at compile time
Structure switch statements; prohibit fallthrough except between empty cases; each case ends with break or throw; return at end of case not allowed; put break inside braces for compound case
Prefer anonymous namespaces over static for internal linkage of functions
Every defined function must be called at least once (no unused methods)

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
cpp/tensorrt_llm/nanobind/common/tllmExceptions.cpp (1)

30-33: Verify completeness of RequestErrorCode exposure to Python.

Only UNKNOWN_ERROR and NETWORK_ERROR are exported. If new enum values are added on the C++ side, Python won’t see them and behavior may silently diverge. Consider:

  • Ensuring tests cover roundtripping all RequestErrorCode values to/from Python.
  • If available, binding all enum values (or asserting the count matches an expected sentinel like kNUM_*); a hedged test sketch follows below.

No changes required if the enum is intentionally limited at this time, but please confirm.
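
As a sketch of what such coverage could look like (the tensorrt_llm.bindings module path and the attribute location are assumptions, not verified against this PR; the real coverage belongs in tests/unittest/others/test_exception.py):

import pytest

# Names expected to be exported today; tighten this set whenever RequestErrorCode grows.
EXPECTED_ERROR_CODES = {"UNKNOWN_ERROR", "NETWORK_ERROR"}

def test_request_error_code_exports_are_complete():
    bindings = pytest.importorskip("tensorrt_llm.bindings")  # skip when bindings are unavailable
    enum_cls = getattr(bindings, "RequestErrorCode", None)   # hypothetical attribute location
    if enum_cls is None:
        pytest.skip("RequestErrorCode is not exposed in this build")
    exported = {name for name in dir(enum_cls) if name.isupper()}
    # A C++ value that was never bound will be missing from `exported` and fail here.
    assert EXPECTED_ERROR_CODES <= exported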

cpp/tensorrt_llm/nanobind/CMakeLists.txt (1)

12-12: LGTM: new binding source added to module.

The addition of common/tllmExceptions.cpp to SRCS looks correct and ensures the translator is compiled into the nanobind module.

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15503 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15503 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11672 completed with status: 'FAILURE'

@Shunkangz Shunkangz force-pushed the request_specific_exception branch from 59ffd57 to 6f78366 Compare August 16, 2025 10:01
@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15513 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #15513 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11681 completed with status: 'FAILURE'

@Shunkangz Shunkangz force-pushed the request_specific_exception branch from 6f78366 to 23ba9c2 Compare August 18, 2025 02:23
@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17289 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12995 completed with status: 'FAILURE'

@Shunkangz Shunkangz force-pushed the request_specific_exception branch from 7264ff9 to 40a6180 Compare September 2, 2025 14:30
@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17373 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17373 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13058 completed with status: 'FAILURE'

@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17434 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17434 [ run ] completed with state ABORTED

Shunkang added 14 commits September 4, 2025 07:26
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: Shunkang <[email protected]>
@Shunkangz Shunkangz force-pushed the request_specific_exception branch from 40a6180 to 0e37973 Compare September 4, 2025 07:31
@Shunkangz
Copy link
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17648 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #17648 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13267 completed with status: 'SUCCESS'

@Shunkangz Shunkangz merged commit bddf183 into NVIDIA:main Sep 4, 2025
5 checks passed
Wong4j pushed a commit to Wong4j/TensorRT-LLM that referenced this pull request Sep 20, 2025