
Conversation

chuangz0 (Collaborator) commented Aug 8, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Stricter compatibility checks for pipeline parallelism and layer counts; configurations with matching pipeline-parallel sizes are now accepted immediately, otherwise divisibility is enforced with clearer warnings.
  • Tests
    • Updated integration tests to use TinyLlama 1.1B Chat instead of Llama 3.1 8B so the test inputs align with the updated compatibility checks.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without careful validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without careful validation can break the top of tree.
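
For example, here are a few illustrative invocations composed only from the flags documented above:

/bot run
/bot run --disable-fail-fast --add-multi-gpu-test
/bot run --stage-list "A10-PyTorch-1"
/bot kill
/bot skip --comment "Docs-only change"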

chuangz0 requested a review from Shixiaowei02 August 8, 2025 03:54
chuangz0 requested a review from a team as a code owner August 8, 2025 03:54
chuangz0 requested a review from Tabrizian August 8, 2025 03:54
coderabbitai bot (Contributor) commented Aug 8, 2025

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Walkthrough

inquireSupport in CacheFormatter and MLACacheFormatter now early-returns true when source and destination pipeline parallelism sizes match; otherwise it verifies each config's layer count is divisible by its pipeline parallelism and logs warnings on failures. A disaggregated test was updated to use TinyLlama-1.1B-Chat-v1.0.
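
For readers skimming the diff, here is a minimal C++ sketch of the check the walkthrough describes (the function and parameter names are illustrative, not the actual signatures in cacheFormatter.cpp):

#include <cstdio>

// Illustrative sketch of the inquireSupport logic described above; names are
// hypothetical and the real implementation may differ.
bool inquireSupportSketch(int selfPPSize, int selfNumLayers, int destPPSize, int destNumLayers)
{
    if (selfPPSize == destPPSize)
    {
        // Matching PP sizes: both sides partition layers identically, so the
        // transfer is supported even if the per-rank layer counts are uneven.
        return true;
    }
    if (selfNumLayers % selfPPSize != 0)
    {
        std::fprintf(stderr, "selfNumLayers (%d) is not divisible by selfPPSize (%d)\n", selfNumLayers, selfPPSize);
        return false;
    }
    if (destNumLayers % destPPSize != 0)
    {
        std::fprintf(stderr, "destNumLayers (%d) is not divisible by destPPSize (%d)\n", destNumLayers, destPPSize);
        return false;
    }
    return true;
}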

Changes

Cohort / File(s) Change Summary
CacheFormatter inquireSupport
cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
Added early return when pipeline parallelism sizes match; obtain destPPSize/destNumLayers; improved divisibility warnings; reordered checks.
MLACacheFormatter inquireSupport
cpp/tensorrt_llm/batch_manager/mlaCacheFormatter.cpp
Same logic updates as CacheFormatter: early return on matching PP size, divisibility checks, and enhanced warnings.
Disaggregated test model update
tests/integration/defs/disaggregated/test_disaggregated.py, tests/integration/test_lists/qa/llm_function_full.txt, tests/integration/test_lists/test-db/l0_dgx_h200.yml
Changed test parameter/model path from llama-3.1-8b to TinyLlama-1.1B-Chat-v1.0.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant Formatter as CacheFormatter/MLACacheFormatter
    Caller->>Formatter: inquireSupport(selfConfig, destConfig)
    Formatter->>Formatter: Read selfPPSize, destPPSize, selfNumLayers, destNumLayers
    alt selfPPSize == destPPSize
        Formatter-->>Caller: return true
    else
        alt selfNumLayers % selfPPSize != 0 or destNumLayers % destPPSize != 0
            Formatter->>Formatter: Log warning
            Formatter-->>Caller: return false
        else
            Formatter-->>Caller: return true
        end
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes


Suggested reviewers

  • Shixiaowei02
  • pcastonguay

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5db90ad and 32f1012.

📒 Files selected for processing (2)
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1 hunks)
  • cpp/tensorrt_llm/batch_manager/mlaCacheFormatter.cpp (1 hunks)


chuangz0 (Collaborator, Author) commented Aug 8, 2025

/bot run --add-multi-gpu-test

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1cf6694 and 17e5f58.

📒 Files selected for processing (2)
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1 hunks)
  • cpp/tensorrt_llm/batch_manager/mlaCacheFormatter.cpp (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{cpp,h,hpp,cc,cxx}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,h,hpp,cc,cxx}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo).
Prefer const or constexpr variables over #defines whenever possible.
A variable that is not modified after its initialization should be declared as const.
Except 0 (used for checking signness/existence/emptiness), nullptr, true, false, all other literals should only be used for variable initialization.
Use the Allman indentation style for braces in C++ code.
Put the semicolon for an empty for or while loop in a new line.
The statement forming the body of a switch, while, do..while, or for statement shall be a compound statement (use brace-delimited statements).
If and else should always be followed by brace-delimited statements, even if empty or a single statement.
C++ filenames should use camel case with the first letter lowercase (e.g., thisIsAFilename.cpp), and all files involved in a compilation target must have case-insensitive unique filenames.
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass).
Local variables, methods, and namespaces should use camel case with first letter lowercase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by 's' (e.g., sMutableStaticGlobal).
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter (e.g., static std::once_flag sFlag;).
Class member variables should use camel case prefixed with 'm' (e.g., mNbFooValues). Public member variables do not require the 'm' prefix but it is encouraged for clarity.
Enumerations, global constants, static constants at class-scope, and function-scope magic-number/literal constants should be uppercase snake case with prefix...

Files:

  • cpp/tensorrt_llm/batch_manager/mlaCacheFormatter.cpp
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • cpp/tensorrt_llm/batch_manager/mlaCacheFormatter.cpp
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
🧠 Learnings (1)
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
PR: NVIDIA/TensorRT-LLM#6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/mlaCacheFormatter.cpp
  • cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
cpp/tensorrt_llm/batch_manager/cacheFormatter.cpp (1)

853-856: LGTM! Good optimization for equal pipeline parallelism sizes.

The early return when pipeline parallelism sizes match is a sensible optimization that avoids unnecessary divisibility checks when configurations are already compatible.

tensorrt-cicd (Collaborator) commented

PR_Github #14557 [ run ] triggered by Bot

Signed-off-by: Chuang Zhu <[email protected]>
Signed-off-by: Chuang Zhu <[email protected]>
chuangz0 force-pushed the pp_disagg_cache_fix branch from 0a1a42d to 5db90ad August 8, 2025 09:07
chuangz0 (Collaborator, Author) commented Aug 8, 2025

@raayandhar if we use llama-3.1-8b, we need to change the model in /tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp4_genpp4.yaml.

We support the case where the number of layers per PP rank differs across ranks when ctxPP == genPP.
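
To make the arithmetic concrete, here is an illustrative C++ sketch (assuming TinyLlama-1.1B's 22 decoder layers; the front-loaded layer split is an assumption about the distribution policy, the arithmetic is the point):

#include <cstdio>

int main()
{
    // 22 layers (TinyLlama-1.1B) over 4 PP ranks: 22 % 4 != 0, so the split
    // is uneven -> 6, 6, 5, 5. This is fine when ctxPP == genPP, since both
    // sides produce the same layout; mismatched PP sizes would be rejected
    // by the divisibility check.
    int const numLayers = 22;
    int const ppSize = 4;
    for (int rank = 0; rank < ppSize; ++rank)
    {
        int const layersOnRank = numLayers / ppSize + (rank < numLayers % ppSize ? 1 : 0);
        std::printf("rank %d: %d layers\n", rank, layersOnRank);
    }
    return 0;
}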

raayandhar (Contributor) commented

@raayandhar if we use llama-3.1-8b, we need to change the model in /tests/integration/defs/disaggregated/test_configs/disagg_config_ctxpp4_genpp4.yaml.

We support the case where the number of layers per PP rank differs across ranks when ctxPP == genPP.

Oh, I did not know that, thanks for noticing and making the change.

tensorrt-cicd (Collaborator) commented

PR_Github #14557 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10997 completed with status: 'SUCCESS'

Signed-off-by: Chuang Zhu <[email protected]>
chuangz0 (Collaborator, Author) commented

/bot skip --comment "all test has passed"

chuangz0 enabled auto-merge (squash) August 11, 2025 02:23
tensorrt-cicd (Collaborator) commented

PR_Github #14728 [ skip ] triggered by Bot

tensorrt-cicd (Collaborator) commented

PR_Github #14728 [ skip ] completed with state SUCCESS
Skipping testing for commit 32f1012
