
Conversation

pengbowang-nv
Collaborator

@pengbowang-nv pengbowang-nv commented Aug 13, 2025

Summary by CodeRabbit

  • New Features

    • None
  • Bug Fixes

    • Adjusted AllReduce backend initialization so the MNNVL backend is instantiated only when explicitly selected; the AUTO strategy no longer instantiates MNNVL, while runtime fallback behavior is unchanged. No public API changes.
  • Tests

    • Re-enabled two DeepSeek R1 multi-GPU latency tests by removing waivers:
      • test_nvfp4_multi_gpus[latency]
      • test_nvfp4_multi_gpus[latency_trtllmgen]

Description

Fix the DeepSeek R1 hang issue. The hang was caused by the MNNVL buffer initialization process.
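
Below is a minimal, self-contained sketch of the init-time gating this change introduces. The real logic lives in tensorrt_llm/_torch/distributed/ops.py and uses MNNVLAllReduce and AllReduceStrategy; the class and helper names here are simplified stand-ins, not the actual API.

from enum import Enum, auto


class Strategy(Enum):
    AUTO = auto()
    MNNVL = auto()


class AllReduceSketch:
    def __init__(self, strategy, dtype=None):
        self.strategy = strategy
        self.mnnvl_allreduce = None
        # Only the explicit MNNVL strategy triggers the buffer setup that
        # caused the DeepSeek R1 hang; AUTO no longer instantiates it.
        if strategy == Strategy.MNNVL and dtype is not None:
            try:
                self.mnnvl_allreduce = self._create_mnnvl(dtype)
            except Exception:
                # Any failure simply disables MNNVL; the regular path is used.
                self.mnnvl_allreduce = None

    def _create_mnnvl(self, dtype):
        # Stand-in for MNNVLAllReduce(mapping, dtype) in the real code.
        raise NotImplementedError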

Test Coverage

accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_nvfp4_multi_gpus[latency]
accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_nvfp4_multi_gpus[latency_trtllmgen]

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
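
For example, a typical invocation that forces the multi-GPU tests and disables fail-fast (using only the flags documented above) would be:

/bot run --add-multi-gpu-test --disable-fail-fast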

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care and validation can break top of tree.

@pengbowang-nv pengbowang-nv requested a review from a team as a code owner August 13, 2025 09:02
Contributor

coderabbitai bot commented Aug 13, 2025

📝 Walkthrough

Narrowed MNNVL AllReduce initialization to only the explicit MNNVL strategy and left forward-path fallback logic unchanged. Removed two DeepSeekR1 latency waivers from tests/integration/test_lists/waives.txt.

Changes

  • Distributed AllReduce init — tensorrt_llm/_torch/distributed/ops.py: Instantiate MNNVLAllReduce only when strategy == MNNVL (not AUTO). Forward path: attempt MNNVL if available; if it returns None or errors, fall back to the regular AllReduce and set the strategy back to AUTO if the original was MNNVL. No public API changes.
  • Integration waivers — tests/integration/test_lists/waives.txt: Removed two SKIP entries for DeepSeekR1 nvfp4 multi-GPU latency tests (standard and trtllmgen).

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant AllReduceOp
    participant MNNVL
    participant DefaultAR

    Caller->>AllReduceOp: forward(tensor)
    alt MNNVL instance available
        AllReduceOp->>MNNVL: allreduce(tensor)
        alt MNNVL returns result
            MNNVL-->>AllReduceOp: result
            AllReduceOp-->>Caller: result
        else MNNVL returns None/error
            AllReduceOp->>DefaultAR: allreduce(tensor) + set strategy=AUTO if was MNNVL
            DefaultAR-->>AllReduceOp: result
            AllReduceOp-->>Caller: result
        end
    else No MNNVL instance (AUTO init no longer creates)
        AllReduceOp->>DefaultAR: allreduce(tensor)
        DefaultAR-->>AllReduceOp: result
        AllReduceOp-->>Caller: result
    end
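
The forward-path behavior in the diagram can be summarized with a small, self-contained Python sketch; mnnvl_allreduce and fallback_allreduce stand in for the MNNVL kernel call and the regular torch.ops.trtllm.allreduce path in ops.py, and the attribute names are illustrative rather than the actual API.

from enum import Enum, auto


class Strategy(Enum):
    AUTO = auto()
    MNNVL = auto()


def forward(op, tensor):
    # Try MNNVL first when an instance exists (with this change, that is only
    # the case when MNNVL was explicitly selected at init time).
    if op.mnnvl_allreduce is not None:
        try:
            result = op.mnnvl_allreduce(tensor)  # may return None for unsupported cases
        except Exception:
            result = None
        if result is not None:
            return result
        # MNNVL declined or failed: remap the strategy so the fallback op never
        # receives MNNVL, then take the regular AllReduce path.
        if op.strategy == Strategy.MNNVL:
            op.strategy = Strategy.AUTO
    return op.fallback_allreduce(tensor)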

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

Suggested reviewers

  • brb-nv
  • yiqingy0
  • chzblych
  • yilin-void
  • kaiyux

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6ed8cfe and ad9be7e.

📒 Files selected for processing (1)
  • tests/integration/test_lists/waives.txt (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/waives.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR/Issue comments)

Type @coderabbitai help to get the list of available commands.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai or @coderabbitai title anywhere in the PR title to generate the title automatically.

Status, Documentation and Community

  • Visit our Status Page to check the current availability of CodeRabbit.
  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@pengbowang-nv
Collaborator Author

/bot run

@litaotju litaotju requested a review from timlee0212 August 13, 2025 09:06
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tensorrt_llm/_torch/distributed/ops.py (3)

458-476: Log a clear reason when dtype is None and avoid noisy multi-rank logs.

If users request MNNVL but forget to pass dtype, the current debug message blames the is_mnnvl check. Emit an explicit warning for dtype=None and only log once (rank 0) to reduce noise.

Apply this diff:

@@
-            if self.strategy == AllReduceStrategy.MNNVL:
-                if MNNVLAllReduce.is_mnnvl(self.mapping, dtype):
-                    try:
-                        self.mnnvl_allreduce = MNNVLAllReduce(
-                            self.mapping, dtype) if dtype else None
-                        if self.mnnvl_allreduce:
-                            logger.debug(f"MNNVLAllReduce is enabled")
-                        else:
-                            logger.debug(f"MNNVLAllReduce is disabled")
-                    except Exception as e:
-                        logger.debug(
-                            f"MNNVL AllReduce can't be enabled due to {e}.")
-                        self.mnnvl_allreduce = None
-                else:
-                    logger.debug(
-                        f"MNNVLAllReduce can't be enabled due to failing the is_mnnvl check."
-                    )
-                    self.mnnvl_allreduce = None
+            if self.strategy == AllReduceStrategy.MNNVL:
+                if dtype is None:
+                    if getattr(self.mapping, "tp_rank", 0) == 0:
+                        logger.warning("MNNVL strategy requested but dtype is None; disabling MNNVL.")
+                    self.mnnvl_allreduce = None
+                elif MNNVLAllReduce.is_mnnvl(self.mapping, dtype):
+                    try:
+                        self.mnnvl_allreduce = MNNVLAllReduce(self.mapping, dtype)
+                        if getattr(self.mapping, "tp_rank", 0) == 0:
+                            logger.debug("MNNVLAllReduce is enabled")
+                    except Exception as e:
+                        if getattr(self.mapping, "tp_rank", 0) == 0:
+                            logger.debug(f"MNNVL AllReduce can't be enabled due to {e}.")
+                        self.mnnvl_allreduce = None
+                else:
+                    if getattr(self.mapping, "tp_rank", 0) == 0:
+                        logger.debug("MNNVLAllReduce can't be enabled due to failing the is_mnnvl check.")
+                    self.mnnvl_allreduce = None

409-444: Docstring omission: include MNNVL strategy and its new init behavior.

The constructor docs enumerate strategies but omit MNNVL. Add it and clarify it’s only instantiated when explicitly selected.

Apply this doc update:

@@
-                - LOWPRECISION: AllReduce quantizes data to lower precision for transmission.
+                - LOWPRECISION: AllReduce quantizes data to lower precision for transmission.
                   Should only be used on topologies with PCIe switches and without NVLink.
                   This strategy may result in some precision loss but can improve performance
                   on specific hardware configurations.
+
+                - MNNVL: Use Multi-Node NVLink-optimized AllReduce for supported platforms and dtypes.
+                  Note: MNNVL kernels are only initialized when strategy is explicitly set to MNNVL.
+                  AUTO will not instantiate or attempt MNNVL during initialization.

1-1: Missing NVIDIA copyright header.

Per repository guidelines, prepend the current year NVIDIA copyright header.
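
For reference, the guideline asks for a header of roughly this shape at the top of the file; the exact wording and license identifier should be taken from the repository's coding guidelines rather than from this sketch:

# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0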

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc5f766 and 6ed8cfe.

📒 Files selected for processing (2)
  • tensorrt_llm/_torch/distributed/ops.py (1 hunks)
  • tests/integration/test_lists/waives.txt (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/waives.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else
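
As a quick illustration of the naming rules above (the names below are made up for the example, not taken from the codebase):

G_ACTIVE_WORKERS = 0  # global: G_ prefix + UPPER_SNAKE_CASE
DEFAULT_TIMEOUT_S = 30  # constant: UPPER_SNAKE_CASE


def compute_latency_stats(samples):
    """Returns the 99th-percentile latency of samples (Google-style docstring)."""
    k_99th_percentile = sorted(samples)[int(0.99 * (len(samples) - 1))]  # leading digit -> k_ prefix
    return k_99th_percentile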

Files:

  • tensorrt_llm/_torch/distributed/ops.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/distributed/ops.py
🔇 Additional comments (2)
tensorrt_llm/_torch/distributed/ops.py (2)

458-476: Explicitly gating MNNVL init behind strategy==MNNVL is correct and aligns with the PR goal.

This prevents accidental MNNVL buffer initialization in AUTO mode and directly addresses the DeepSeek R1 hang.


526-528: Fallback guard for MNNVL is correctly implemented

The check in tensorrt_llm/_torch/distributed/ops.py (lines 526–528) remaps MNNVL to AUTO before invoking torch.ops.trtllm.allreduce, so we never pass an unsupported strategy to the custom op. A code-wide search confirms this is the only path where MNNVL is converted to AUTO for the torch operator. No further action needed.

@tensorrt-cicd
Collaborator

PR_Github #15106 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15106 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11404 completed with status: 'FAILURE'

@pengbowang-nv
Collaborator Author

/bot run --add-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #15120 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15120 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11416 completed with status: 'FAILURE'

@litaotju
Collaborator

/bot run --add-multi-gpu-test --disable-fail-fast

1 similar comment

@tensorrt-cicd
Collaborator

PR_Github #15197 [ run ] triggered by Bot

@pengbowang-nv
Collaborator Author

/bot kill

@tensorrt-cicd
Collaborator

PR_Github #15276 [ kill ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15276 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit ad9be7e

@pengbowang-nv
Collaborator Author

/bot run --add-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #15278 [ run ] triggered by Bot

@litaotju
Collaborator

The A10-PyTorch-1/disaggregated/test_disaggregated.py::test_disaggregated_diff_max_tokens[TinyLlama-1.1B-Chat-v1.0] test is unstable; it already failed in post-merge: https://prod.blsm.nvidia.com/sw-tensorrt-top-1/job/LLM/job/main/job/L0_PostMerge/2232/

Not related.

@litaotju
Collaborator

I am force-merging it, since all the failed cases already failed in post-merge and are not related to this PR.

@litaotju litaotju merged commit ffc976c into NVIDIA:main Aug 14, 2025
4 of 5 checks passed
@tensorrt-cicd
Collaborator

PR_Github #15278 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11535 completed with status: 'FAILURE'

@pengbowang-nv
Collaborator Author

Note that this PR passed the single- and multi-GPU runs of https://nv/trt-llm-cicd/job/helpers/job/PR_Github/15197/ even though the results were not reported, due to a CI issue.

dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 17, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 17, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 17, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 17, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 18, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 18, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 18, 2025
…nvl by default (NVIDIA#6860)

Signed-off-by: Pengbo Wang <[email protected]>
Co-authored-by: Tao Li @ NVIDIA <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
@pengbowang-nv pengbowang-nv deleted the fix-deepseek-r1-hang branch September 2, 2025 09:29