[None][chore] Remove onboard block switch for KV cache manager #7469
base: main
Conversation
📝 Walkthrough

Removes the onboardBlocks parameter and related logic from KV cache components across headers, implementations, bindings, serialization, and tests. Constructor signatures and parameter ordering are updated accordingly. Offload/onboard gating tied to onboardBlocks is eliminated. Python bindings and serialization schemas drop the onboard_blocks field. Tests adjusted to new APIs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant KVCacheManager
    participant BlockManager
    participant PrimaryPool
    participant SecondaryPool
    Client->>KVCacheManager: requestBlock()
    KVCacheManager->>BlockManager: getFreeBlock()
    alt primary has free block
        BlockManager->>PrimaryPool: allocate()
        PrimaryPool-->>BlockManager: block
    else primary needs space
        BlockManager->>SecondaryPool: offload eligible blocks
        SecondaryPool-->>BlockManager: offloaded
        BlockManager->>PrimaryPool: allocate()
        PrimaryPool-->>BlockManager: block
    end
    BlockManager-->>KVCacheManager: block
    KVCacheManager-->>Client: block
    note over BlockManager,SecondaryPool: Onboarding/offloading no longer gated by onboardBlocks flag
```
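The allocation fallback in the diagram can be sketched in a few lines; this is a minimal illustration under assumed names (`PoolPair`, `get_free_block` are invented here, not the TensorRT-LLM API):

```python
# Sketch of the two-tier block allocation shown above: try the primary pool
# first; if it is full, offload the least-recently-used primary block to the
# secondary pool (when one has capacity), then allocate. Illustrative only.
from collections import deque

class PoolPair:
    def __init__(self, primary_capacity, secondary_capacity):
        self.primary_free = primary_capacity
        self.secondary_free = secondary_capacity
        self.lru = deque()          # primary blocks, least recently used first
        self.offloaded = []         # blocks moved to the secondary pool

    def get_free_block(self, block_id):
        if self.primary_free == 0:
            # Offloading is gated only by secondary capacity,
            # not by a separate onboardBlocks flag.
            if self.secondary_free > 0 and self.lru:
                victim = self.lru.popleft()
                self.offloaded.append(victim)
                self.secondary_free -= 1
                self.primary_free += 1
            else:
                raise MemoryError("no free blocks in either pool")
        self.primary_free -= 1
        self.lru.append(block_id)
        return block_id

pools = PoolPair(primary_capacity=2, secondary_capacity=2)
for i in range(4):                  # 3rd and 4th allocations trigger offloads
    pools.get_free_block(i)
```

After the loop, blocks 0 and 1 have been offloaded to make room for blocks 2 and 3.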
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
cpp/tensorrt_llm/executor/serialization.cpp (1)
1173-1193: maxGpuTotalBytes is never serialized; the value is lost across process boundaries. KvCacheConfig exposes getMaxGpuTotalBytes() and a ctor param, but serialize/deserialize/serializedSize omit it, so it silently resets to 0 after (de)serialization.

Apply this diff to persist the field:

```diff
 @@ KvCacheConfig Serialization::deserializeKvCacheConfig(std::istream& is)
-    auto attentionDpEventsGatherPeriodMs = su::deserialize<SizeType32>(is);
+    auto attentionDpEventsGatherPeriodMs = su::deserialize<SizeType32>(is);
+    auto maxGpuTotalBytes = su::deserialize<uint64_t>(is);
 @@
-    return KvCacheConfig{enableBlockReuse, maxTokens, maxAttentionWindowVec, sinkTokenLength, freeGpuMemoryFraction,
-        hostCacheSize, crossKvCacheFraction, secondaryOffloadMinPriority, eventBufferMaxSize, enablePartialReuse,
-        copyOnPartialReuse, useUvm, attentionDpEventsGatherPeriodMs};
+    return KvCacheConfig{enableBlockReuse, maxTokens, maxAttentionWindowVec, sinkTokenLength, freeGpuMemoryFraction,
+        hostCacheSize, crossKvCacheFraction, secondaryOffloadMinPriority, eventBufferMaxSize, enablePartialReuse,
+        copyOnPartialReuse, useUvm, attentionDpEventsGatherPeriodMs, std::nullopt, maxGpuTotalBytes};
 @@ void Serialization::serialize(KvCacheConfig const& kvCacheConfig, std::ostream& os)
     su::serialize(kvCacheConfig.getSecondaryOffloadMinPriority(), os);
     su::serialize(kvCacheConfig.getEventBufferMaxSize(), os);
     su::serialize(kvCacheConfig.getUseUvm(), os);
     su::serialize(kvCacheConfig.getAttentionDpEventsGatherPeriodMs(), os);
+    su::serialize(kvCacheConfig.getMaxGpuTotalBytes(), os);
 @@ size_t Serialization::serializedSize(KvCacheConfig const& kvCacheConfig)
     totalSize += su::serializedSize(kvCacheConfig.getEventBufferMaxSize());
     totalSize += su::serializedSize(kvCacheConfig.getUseUvm());
     totalSize += su::serializedSize(kvCacheConfig.getAttentionDpEventsGatherPeriodMs());
+    totalSize += su::serializedSize(kvCacheConfig.getMaxGpuTotalBytes());
     return totalSize;
```
Note: adding this also changes the wire format; consider coupling with the versioning suggestion above.
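One way to make such wire-format changes safe is the version tag the comment suggests. A minimal sketch of the idea, using Python's `struct` purely for illustration (the actual code uses the C++ serializeUtils helpers, and `CONFIG_VERSION` and the field subset here are invented):

```python
import struct

CONFIG_VERSION = 2  # hypothetical: bumped when maxGpuTotalBytes was added

def serialize_config(enable_block_reuse, max_gpu_total_bytes):
    # Version tag first, then fields in a fixed order; "<" = little-endian, no padding.
    return struct.pack("<I?Q", CONFIG_VERSION, enable_block_reuse, max_gpu_total_bytes)

def deserialize_config(data):
    version, = struct.unpack_from("<I", data, 0)
    enable_block_reuse, = struct.unpack_from("<?", data, 4)
    if version >= 2:
        max_gpu_total_bytes, = struct.unpack_from("<Q", data, 5)
    else:
        max_gpu_total_bytes = 0  # older producers never wrote the field
    return enable_block_reuse, max_gpu_total_bytes

blob = serialize_config(True, 1 << 30)
roundtrip = deserialize_config(blob)
```

A reader can branch on the version when fields are added or removed, instead of silently shifting the byte layout as the onboardBlocks removal does.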
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp (3)
109-121: Pickle schema mismatch: __getstate__ emits 14 fields, but __setstate__ still requires 15. This breaks round-trip pickling and will throw at runtime. It also keeps the removed onboard_blocks in the tuple layout.

Apply a backward-compatible fix (accept 14 or 15; ignore the deprecated onboard_blocks at index 6 when present):

```diff
-    auto kvCacheConfigSetstate = [](py::tuple const& state)
+    auto kvCacheConfigSetstate = [](py::tuple const& state)
     {
-        if (state.size() != 15)
+        if (state.size() != 14 && state.size() != 15)
         {
             throw std::runtime_error("Invalid state!");
         }
-        return tle::KvCacheConfig(state[0].cast<bool>(), state[1].cast<std::optional<SizeType32>>(),
-            state[2].cast<std::optional<std::vector<SizeType32>>>(), state[3].cast<std::optional<SizeType32>>(),
-            state[4].cast<std::optional<float>>(), state[5].cast<std::optional<size_t>>(), state[6].cast<bool>(),
-            state[7].cast<std::optional<float>>(), state[8].cast<std::optional<tle::RetentionPriority>>(),
-            state[9].cast<size_t>(), state[10].cast<bool>(), state[11].cast<bool>(), state[12].cast<bool>(),
-            state[13].cast<SizeType32>(), std::nullopt, state[14].cast<uint64_t>());
+        auto const shift = (state.size() == 15) ? 1 : 0; // ignore deprecated onboard_blocks at state[6]
+        return tle::KvCacheConfig(
+            state[0].cast<bool>(),
+            state[1].cast<std::optional<SizeType32>>(),
+            state[2].cast<std::optional<std::vector<SizeType32>>>(),
+            state[3].cast<std::optional<SizeType32>>(),
+            state[4].cast<std::optional<float>>(),
+            state[5].cast<std::optional<size_t>>(),
+            state[6 + shift].cast<std::optional<float>>(),
+            state[7 + shift].cast<std::optional<tle::RetentionPriority>>(),
+            state[8 + shift].cast<size_t>(),
+            state[9 + shift].cast<bool>(),
+            state[10 + shift].cast<bool>(),
+            state[11 + shift].cast<bool>(),
+            state[12 + shift].cast<SizeType32>(),
+            std::nullopt,
+            state[13 + shift].cast<uint64_t>());
     };
```
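The shift trick above can be illustrated in plain Python pickle terms. The `Config` class and its three-field subset are hypothetical; the real binding does this in C++ via py::pickle:

```python
class Config:
    # Illustrative subset of fields around the removed onboard_blocks flag.
    def __init__(self, enable_block_reuse, cross_kv_cache_fraction, event_buffer_max_size):
        self.enable_block_reuse = enable_block_reuse
        self.cross_kv_cache_fraction = cross_kv_cache_fraction
        self.event_buffer_max_size = event_buffer_max_size

    def __getstate__(self):
        # New layout: 3 fields; onboard_blocks is no longer emitted.
        return (self.enable_block_reuse, self.cross_kv_cache_fraction,
                self.event_buffer_max_size)

    def __setstate__(self, state):
        if len(state) not in (3, 4):
            raise RuntimeError("Invalid state!")
        # Old pickles carried a deprecated onboard_blocks bool at index 1;
        # skip it so the remaining fields land on the right attributes.
        shift = 1 if len(state) == 4 else 0
        self.enable_block_reuse = state[0]
        self.cross_kv_cache_fraction = state[1 + shift]
        self.event_buffer_max_size = state[2 + shift]

old_state = (True, False, 0.5, 1024)   # legacy tuple with the deprecated flag
legacy = Config.__new__(Config)
legacy.__setstate__(old_state)
```

Accepting both lengths keeps old pickles loadable while new pickles use the trimmed layout.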
123-135: The constructor binding still exposes the removed onboard_blocks parameter. This contradicts the PR goal and likely won't compile against the updated C++ API.

Remove the boolean from the ctor signature and the corresponding arg:

```diff
-        .def(py::init<bool, std::optional<SizeType32> const&, std::optional<std::vector<SizeType32>> const&,
-                 std::optional<SizeType32> const&, std::optional<float> const&, std::optional<size_t> const&, bool,
+        .def(py::init<bool, std::optional<SizeType32> const&, std::optional<std::vector<SizeType32>> const&,
+                 std::optional<SizeType32> const&, std::optional<float> const&, std::optional<size_t> const&,
                  std::optional<float> const&, std::optional<tle::RetentionPriority>, size_t const&, bool, bool, bool,
                  SizeType32, std::optional<RuntimeDefaults> const&, uint64_t const&>(),
 @@
-            py::arg("free_gpu_memory_fraction") = py::none(), py::arg("host_cache_size") = py::none(),
-            py::arg("onboard_blocks") = true, py::arg("cross_kv_cache_fraction") = py::none(),
+            py::arg("free_gpu_memory_fraction") = py::none(), py::arg("host_cache_size") = py::none(),
+            py::arg("cross_kv_cache_fraction") = py::none(),
```
101-108: Align kvCacheConfig pickle getstate/setstate with the ctor signature.

- kvCacheConfigGetstate returns 14 elements, but kvCacheConfigSetstate still checks for 15 (`if (state.size() != 15)`), so unpickling always fails.
- setstate casts `state[6]` to `bool` (presumably for the removed `onboard_blocks`) and passes it into the C++ ctor's `crossKvCacheFraction` parameter, mismapping both types and positions.
- Update `kvCacheConfigSetstate` to expect 14 fields, correct the index offsets after removing `onboard_blocks`, and adjust the `state.size()` check and argument order to match the 15-parameter C++ ctor (with `runtimeDefaults` defaulted) exactly.

cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp (3)
117-131: Pickle schema mismatch (14 vs 15) and lingering onboard flag in setstate. Same issue as pybind: runtime error on unpickle and stale flag handling.

```diff
-    auto kvCacheConfigSetstate = [](tle::KvCacheConfig& self, nb::tuple const& state)
+    auto kvCacheConfigSetstate = [](tle::KvCacheConfig& self, nb::tuple const& state)
     {
-        if (state.size() != 15)
+        if (state.size() != 14 && state.size() != 15)
         {
             throw std::runtime_error("Invalid state!");
         }
-        new (&self) tle::KvCacheConfig(nb::cast<bool>(state[0]), nb::cast<std::optional<SizeType32>>(state[1]),
-            nb::cast<std::optional<std::vector<SizeType32>>>(state[2]), nb::cast<std::optional<SizeType32>>(state[3]),
-            nb::cast<std::optional<float>>(state[4]), nb::cast<std::optional<size_t>>(state[5]),
-            nb::cast<bool>(state[6]), nb::cast<std::optional<float>>(state[7]),
-            nb::cast<std::optional<tle::RetentionPriority>>(state[8]), nb::cast<size_t>(state[9]),
-            nb::cast<bool>(state[10]), nb::cast<bool>(state[11]), nb::cast<bool>(state[12]),
-            nb::cast<SizeType32>(state[13]), std::nullopt, nb::cast<uint64_t>(state[14]));
+        int const shift = (state.size() == 15) ? 1 : 0; // ignore deprecated onboard_blocks
+        new (&self) tle::KvCacheConfig(
+            nb::cast<bool>(state[0]),
+            nb::cast<std::optional<SizeType32>>(state[1]),
+            nb::cast<std::optional<std::vector<SizeType32>>>(state[2]),
+            nb::cast<std::optional<SizeType32>>(state[3]),
+            nb::cast<std::optional<float>>(state[4]),
+            nb::cast<std::optional<size_t>>(state[5]),
+            nb::cast<std::optional<float>>(state[6 + shift]),
+            nb::cast<std::optional<tle::RetentionPriority>>(state[7 + shift]),
+            nb::cast<size_t>(state[8 + shift]),
+            nb::cast<bool>(state[9 + shift]),
+            nb::cast<bool>(state[10 + shift]),
+            nb::cast<bool>(state[11 + shift]),
+            nb::cast<SizeType32>(state[12 + shift]),
+            std::nullopt,
+            nb::cast<uint64_t>(state[13 + shift]));
     };
```
131-144: The constructor binding still includes onboard_blocks. Remove the boolean and the nb::arg to reflect the C++ API.

```diff
-        .def(nb::init<bool, std::optional<SizeType32> const&, std::optional<std::vector<SizeType32>> const&,
-                 std::optional<SizeType32> const&, std::optional<float> const&, std::optional<size_t> const&, bool,
+        .def(nb::init<bool, std::optional<SizeType32> const&, std::optional<std::vector<SizeType32>> const&,
+                 std::optional<SizeType32> const&, std::optional<float> const&, std::optional<size_t> const&,
                  std::optional<float> const&, std::optional<tle::RetentionPriority>, size_t const&, bool, bool, bool,
                  SizeType32, std::optional<RuntimeDefaults> const&, uint64_t const&>(),
 @@
-            nb::arg("free_gpu_memory_fraction") = nb::none(), nb::arg("host_cache_size") = nb::none(),
-            nb::arg("onboard_blocks") = true, nb::arg("cross_kv_cache_fraction") = nb::none(),
+            nb::arg("free_gpu_memory_fraction") = nb::none(), nb::arg("host_cache_size") = nb::none(),
+            nb::arg("cross_kv_cache_fraction") = nb::none(),
```
109-116: Align the `__getstate__` tuple with the updated KvCacheConfig constructor signature. In cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp (lines 109-116), `__getstate__` currently returns 14 elements but the nanobind `__init__` and C++ ctor expect 15 parameters (including the new `runtime_defaults`). Update the tuple (and adjust the `__setstate__` size check) so it emits, and consumes, all fields in the exact constructor order to prevent mis-serialization.

cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (1)
455-470: Fix the pybind KVCacheManager constructor type list (stream/max_sequence_length types are wrong). The py::init<> template uses bool, int64_t where stream and optional max_sequence_length are expected. This will break the bindings.

```diff
-        .def(py::init<std::vector<SizeType32> const&, SizeType32, SizeType32,
-                 std::map<SizeType32, std::tuple<SizeType32, SizeType32>> const&, SizeType32, SizeType32,
-                 std::vector<SizeType32> const&, std::optional<tbk::TempAttentionWindowInputs> const&,
-                 nvinfer1::DataType, SizeType32, bool, int64_t, bool, tbk::CacheType,
+        .def(py::init<std::vector<SizeType32> const&, SizeType32, SizeType32,
+                 std::map<SizeType32, std::tuple<SizeType32, SizeType32>> const&, SizeType32, SizeType32,
+                 std::vector<SizeType32> const&, std::optional<tbk::TempAttentionWindowInputs> const&,
+                 nvinfer1::DataType, SizeType32, CudaStreamPtr, std::optional<SizeType32>, bool, tbk::CacheType,
                  std::optional<tensorrt_llm::executor::RetentionPriority>, std::shared_ptr<tbk::KVCacheEventManager>,
                  bool, bool, std::shared_ptr<tbc::KvCacheConnectorManager>>(),
 @@
-            py::arg("sink_token_length"), py::arg("stream"), py::arg("max_sequence_length"),
+            py::arg("sink_token_length"), py::arg("stream"), py::arg("max_sequence_length"),
```

Also re-run all call sites in Python to ensure "onboard_blocks" kwargs are removed (see _torch.resource_manager).
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (1)

1386-1391: Fix parameter name casing: `copyOnpartialReuse` → `copyOnPartialReuse`. Rename the parameter in all `KVCacheManager` overloads (cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h) at lines 1389, 1400, 1411, and 1420 to match the `.cpp` implementation and avoid inconsistencies in generated/binding code.

Apply:

```diff
 --- a/cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
 +++ b/cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
 @@ -1386,7 +1386,7 @@
-        bool copyOnpartialReuse = true,
+        bool copyOnPartialReuse = true,
 @@ -1397,7 +1397,7 @@
-        bool copyOnpartialReuse = true,
+        bool copyOnPartialReuse = true,
 @@ -1409,7 +1409,7 @@
-        bool copyOnpartialReuse = true,
+        bool copyOnPartialReuse = true,
 @@ -1419,7 +1419,7 @@
-        bool copyOnpartialReuse = true);
+        bool copyOnPartialReuse = true);
```
🧹 Nitpick comments (6)
cpp/include/tensorrt_llm/executor/executor.h (1)

1-15: Update the copyright year range to include 2025. The header shows 2022-2024; the repository guideline asks for the current year on touched files.

```diff
- * Copyright (c) 2022-2024, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2022-2025, NVIDIA CORPORATION. All rights reserved.
```

cpp/tensorrt_llm/executor/kvCacheConfig.cpp (1)
197-201: Optional setter ergonomics. getHostCacheSize() returns an optional, but only a size_t setter exists. Consider an overload to clear the value.

Example (header + impl):

```cpp
void setHostCacheSize(std::optional<size_t> hostCacheSize);
```
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp (1)
2-2: Update the SPDX year range to include 2025, keeping headers consistent with other updated files.

```diff
- * SPDX-FileCopyrightText: Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
```

cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp (1)
548-551: Use 0 (SizeType32) instead of false for sinkTokenLength. Passing a bool where SizeType32 is expected is confusing and risks overload mismatches; use 0 for clarity and to match other sites.

```diff
-        beamWidth, std::vector<BlockManager::SizeType32>{maxAttentionWindow}, std::nullopt, nvinfer1::DataType::kFP4,
-        false, stream, true);
+        beamWidth, std::vector<BlockManager::SizeType32>{maxAttentionWindow}, std::nullopt, nvinfer1::DataType::kFP4,
+        0, stream, true);
 @@
-        beamWidth, std::vector<BlockManager::SizeType32>{maxAttentionWindow}, std::nullopt, nvinfer1::DataType::kHALF,
-        false, stream, true);
+        beamWidth, std::vector<BlockManager::SizeType32>{maxAttentionWindow}, std::nullopt, nvinfer1::DataType::kHALF,
+        0, stream, true);
 @@
-        beamWidth, std::vector<BlockManager::SizeType32>{maxAttentionWindow}, std::nullopt, nvinfer1::DataType::kHALF,
-        false, stream, true);
+        beamWidth, std::vector<BlockManager::SizeType32>{maxAttentionWindow}, std::nullopt, nvinfer1::DataType::kHALF,
+        0, stream, true);
```

Also applies to: 2101-2104, 2175-2178
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (2)
934-952: Onboarding event emission: align the condition with the offload path. The offload path checks mEventManager && blockInRadixTree(block); the onboard path checks only mEventManager. For consistency, and to avoid spurious events for non-radix nodes, gate on blockInRadixTree as well.

```diff
-    if (mEventManager)
+    if (mEventManager && blockInRadixTree(offloadBlock))
     {
         mEventManager->enqueueUpdatedEvent(
             tle::KVCacheUpdatedData(offloadBlock->getHash()).cacheLevelUpdated(kSecondaryLevel, kPrimaryLevel),
             mWindowSize);
     }
```
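The intended gating can be mimicked in a small Python sketch. All names here are illustrative stand-ins; the real check lives in the C++ WindowBlockManager:

```python
# Emit a cache-level-updated event only when the block is tracked in the
# radix tree, mirroring the offload path's condition.
events = []

def enqueue_updated_event(block_hash):
    events.append(block_hash)

def on_block_onboarded(event_manager_enabled, block_in_radix_tree, block_hash):
    # Gate on both: an active event manager AND radix-tree membership,
    # so non-radix blocks do not produce spurious events.
    if event_manager_enabled and block_in_radix_tree(block_hash):
        enqueue_updated_event(block_hash)

in_tree = {101, 102}.__contains__       # pretend these hashes are in the tree
on_block_onboarded(True, in_tree, 101)  # tracked: event emitted
on_block_onboarded(True, in_tree, 999)  # untracked: suppressed
on_block_onboarded(False, in_tree, 102) # event manager off: suppressed
```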
2262-2271: Nit: log message spelling, "secondayBlocks" → "secondaryBlocks".

```diff
-    TLLM_LOG_INFO(
-        "[windowSize=%d] {.primaryBlocks=%d, .secondayBlocks=%d}", windowSize, primaryBlocks, secondayBlocks);
+    TLLM_LOG_INFO(
+        "[windowSize=%d] {.primaryBlocks=%d, .secondaryBlocks=%d}", windowSize, primaryBlocks, secondayBlocks);
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (11)

- cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (6 hunks)
- cpp/include/tensorrt_llm/executor/executor.h (1 hunks)
- cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (12 hunks)
- cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1 hunks)
- cpp/tensorrt_llm/executor/kvCacheConfig.cpp (1 hunks)
- cpp/tensorrt_llm/executor/serialization.cpp (1 hunks)
- cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp (1 hunks)
- cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp (1 hunks)
- cpp/tensorrt_llm/pybind/executor/executorConfig.cpp (1 hunks)
- cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp (30 hunks)
- cpp/tests/unit_tests/executor/serializeUtilsTest.cpp (0 hunks)
💤 Files with no reviewable changes (1)
- cpp/tests/unit_tests/executor/serializeUtilsTest.cpp
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh}
: Closing braces of C++ namespaces must include a comment naming the namespace (e.g., } // namespace foo)
Avoid using literals (except 0, nullptr, true, false) directly in logic; use named constants for comparisons
Use Allman brace style in C++
Place semicolon of empty for/while loop on its own line
Use brace-delimited statements for bodies of switch/while/do/for and always brace if/else bodies
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Non-static, externally visible globals use g prefix with lowerCamelCase (e.g., gDontUseGlobalFoos)
Static or anonymous-namespace globals use s prefix with lowerCamelCase (e.g., sMutableStaticGlobal)
Locally visible static variables use s prefix (e.g., static std::once_flag sFlag)
Member variables use m prefix with CamelCase (public may omit but encouraged)
Constants (enums, globals, static consts, function-scope magic numbers) use k prefix with UPPER_SNAKE (e.g., kDIGIT_NUM)
Function-scope non-literal, non-magic constants use normal non-const naming (e.g., const bool pass)
If macros are necessary, name them in UPPER_SNAKE_CASE
Avoid Hungarian notation except allowed app’s hungarian like nb for counts
Constructor parameters conflicting with member names get a trailing underscore (e.g., foo_)
Use uppercase literal suffixes (e.g., 1234L not 1234l)
Format C++ with clang-format (LLVM style), max line length 120; justify any exceptions with clang-format off/on blocks
Use C++-style comments; C comments not allowed except special inline cases; single-line comments use //
Use inline parameter comments in calls when arguments aren’t obvious (e.g., /* checkForErrors = / false)
Disable code with #if/#endif (optionally mnemonic conditions or no-op macros); do not comment out code; avoid dead code
Use the least forceful C++ cast; avoid removing const/volatile; avoid C-style and functional casts (except explicit constructors); cast void to T* with static_cas...
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/executor/executor.h
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
**/*.{cc,cpp,cxx,cu}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cc,cpp,cxx,cu}
: Prefer const or constexpr variables over #define for constants in C++
Declare variables const if not modified after initialization
Use smart pointers for heap allocation; prefer unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only exceptionally; avoid deprecated smart pointers
Avoid declaring large functions inline unless there’s a quantifiable benefit; remember in-class definitions are implicitly inline
Every defined function must be referenced at least once; avoid unused methods
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
**/*
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Filenames compiled into a target must be case-insensitively unique
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/executor/executor.h
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use spaces, not tabs; indent 4 spaces
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/executor/executor.h
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
**/*.{cpp,cc,cxx,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/executor/executor.h
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
**/*.{h,hpp,hh,hxx}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{h,hpp,hh,hxx}
: Prefer const or constexpr over #define for constants in C++ headers
Use Doxygen for documenting interfaces; use //! for comments and //!< for member annotations in C++
Use include guards in headers with symbol format TRTLLM__H (no underscores prefix/suffix; filename only)
Files:
cpp/include/tensorrt_llm/executor/executor.h
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
🧠 Learnings (6)
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tensorrt_llm/pybind/executor/executorConfig.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/nanobind/executor/executorConfig.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/tensorrt_llm/executor/serialization.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/tensorrt_llm/pybind/batch_manager/kvCacheManager.cpp
cpp/tensorrt_llm/executor/kvCacheConfig.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp
cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: There is a planned refactoring to move cache block bookkeeping utilities from BlockManager/WindowBlockManager into the GenerationRequest class itself to improve code organization and make responsibilities clearer.
Applied to files:
cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
🧬 Code graph analysis (1)

cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp (2)

- tensorrt_llm/_torch/pyexecutor/resource_manager.py (1): KVCacheManager (142-1011)
- cpp/tests/unit_tests/batch_manager/cacheTransBufferTest.cpp (4): numLayers (32-69), numLayers (32-33), numLayers (77-88), numLayers (77-77)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (14)
cpp/include/tensorrt_llm/executor/executor.h (1)

1000-1011: KvCacheConfig ctor signature change looks consistent; verify downstream/bindings. The order is now: enableBlockReuse, maxTokens, maxAttentionWindowVec, sinkTokenLength, freeGpuMemoryFraction, hostCacheSize, crossKvCacheFraction, secondaryOffloadMinPriority, eventBufferMaxSize, enablePartialReuse, copyOnPartialReuse, useUvm, attentionDpEventsGatherPeriodMs, [runtimeDefaults], [maxGpuTotalBytes]. This matches serialization.cpp's construction. Please confirm all callers (pybind/nanobind/tests) are updated.
cpp/tensorrt_llm/executor/serialization.cpp (1)

1156-1176: Preserve serialization compatibility for KvCacheConfig. Either introduce a version tag for `KvCacheConfig` serialization or keep a reserved `bool` in place of the removed `onboardBlocks` (read and ignore) to maintain the original byte layout. At minimum, document this breaking change in the release notes.

cpp/tensorrt_llm/executor/kvCacheConfig.cpp (2)
24-31: Ctor reorder/removal of the onboard flag is clean; validations preserved. Parameter order matches the header and serialization; runtimeDefaults/maxGpuTotalBytes handling remains intact.

69-71: Good: guard against a non-positive gather period. The runtime check keeps invalid configs from propagating.

cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1)

683-691: Approve KVCacheManager ctor update: the onboard flag has been removed and the argument sequence (enableBlockReuse, cacheType, secondaryOffloadMinPriority, eventManager, enablePartialReuse, copyOnPartialReuse) aligns with the updated constructors.

cpp/tests/unit_tests/batch_manager/kvCacheManagerTest.cpp (2)
126-131: BlockManager ctor updates look consistent. The added maxAttentionWindowVec/temp inputs/dtype/sinkTokenLength arguments match other call sites. No issues spotted.

Also applies to: 211-215, 584-588, 861-865, 1057-1061, 1750-1753

1696-1699: KVCacheManager constructor signatures verified; no misplaced boolean args found. All KVCacheManager ctor calls align with the updated signature, and no residual `onboardBlocks` or stray boolean args were detected.

cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h (2)
856-866: BlockManager instantiation updated correctly. KVCacheManager is the sole consumer of BlockManager and now passes `sinkBubbleLength` before `cacheType` per the revised signature; no callers still pass the removed `onboardBlocks` flag.

535-542: No external WindowBlockManager usages; no constructor call sites to update. Ripgrep across the repository found no invocations of the updated constructor outside its own definition.

cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp (5)
535-541: The call to WindowBlockManager reflects the new signature. The try_emplace argument list aligns with the header reordering and removed onboard flag.

575-581: WindowBlockManager ctor definition: signature matches the header. No functional concerns; constructor order aligns with the public declaration.

871-889: Guarding secondary-pool usage in getFreeBlock is correct. The added check for available secondary blocks prevents invalid offloads during primary reclamation. Good fix.

500-509: No BlockManager call sites found. A search across the codebase returned no invocations of BlockManager, so there are no call sites needing inline parameter comments.

959-979: Guard offloadBlock when no free secondary blocks remain. Add a check in WindowBlockManager::offloadBlock before calling getFreeBlock(kSecondaryLevel) to avoid underflow when the secondary pool is exhausted:

```cpp
if (mEvictionPolicy->getNumFreeBlocks(kSecondaryLevel) == 0)
{
    return;
}
auto offloadBlock = std::get<0>(mEvictionPolicy->getFreeBlock(kSecondaryLevel));
```
4a1fda3 to 1b9163a (Compare)

/bot run --disable-fail-fast

PR_Github #17338 [ run ] triggered by Bot

PR_Github #17338 [ run ] completed with state
1b9163a to 3dcc00a (Compare)

/bot run --disable-fail-fast

PR_Github #17361 [ run ] triggered by Bot

PR_Github #17361 [ run ] completed with state
3dcc00a to 229b106 (Compare)

/bot run

PR_Github #17484 [ run ] triggered by Bot

PR_Github #17484 [ run ] completed with state
229b106 to a9e68ba (Compare)

/bot run

1 similar comment

/bot run

PR_Github #17540 [ run ] triggered by Bot

PR_Github #17540 [ run ] completed with state
…k switch

Dead code elimination. The secondary block pool is created when kv_cache_config::host_cache_size is specified, so whether we onboard/offload a KV cache block can be inferred from whether the manager has a secondary block pool. The `onboardBlocks` toggle itself only adds complication. This commit removes it.

Signed-off-by: eopXD <[email protected]>
a9e68ba to 299ca54 (Compare)
Description

This MR has no functional change intended.

Dead code elimination. The secondary block pool is created when kv_cache_config::host_cache_size is specified, so whether we onboard/offload a KV cache block can be inferred from whether the manager has a secondary block pool. The `onboardBlocks` toggle itself only adds complication. This commit removes it.

Test Coverage

Since no functional change is intended, no test change is needed.
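The rationale above, deriving offload behavior from configuration rather than a separate flag, can be sketched as follows (hypothetical names; the real logic lives in the C++ BlockManager):

```python
def secondary_pool_blocks(host_cache_size, bytes_per_block):
    # A secondary (host) pool exists exactly when host_cache_size is set,
    # so "can offload/onboard" is implied by the pool size being nonzero;
    # no separate onboard_blocks toggle is needed.
    if host_cache_size is None:
        return 0
    return host_cache_size // bytes_per_block

def can_offload(host_cache_size, bytes_per_block=4096):
    return secondary_pool_blocks(host_cache_size, bytes_per_block) > 0

with_host_cache = can_offload(host_cache_size=1 << 20)
without_host_cache = can_offload(host_cache_size=None)
```

With this shape, the removed flag could never disagree with the configuration: the pool's existence is the single source of truth.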
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.