
Conversation


@mikeiovine (Collaborator) commented Aug 25, 2025

Description

Fix Llama 4 FP8 with pipeline parallelism. The AR (all-reduce) fusion is not handled correctly when pipeline parallelism is enabled, so the fusion is simply disabled when PP > 1 to fix it.
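Concretely, the change adds a not mapping.has_pp() term to the fusion-flag conditions in Llama4DecoderLayer.__init__ (see the review excerpt further down). Below is a minimal sketch of the gating, written as a hypothetical helper purely for illustration; the real code assigns fusion_config.PRE_MOE_FUSION and POST_MOE_FUSION directly:

```python
def allreduce_fusion_enabled(mapping, enable_fusion: bool,
                             enable_attention_dp: bool) -> bool:
    """Hypothetical helper mirroring the gating added in Llama4DecoderLayer.

    Fusion requires tensor parallelism, no attention data parallelism, and,
    with this fix, no pipeline parallelism (mapping.has_pp() is True when
    PP > 1). The same value is used for the pre- and post-feed-forward
    fusion flags.
    """
    return (mapping.has_tp() and enable_fusion
            and not enable_attention_dp and not mapping.has_pp())
```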

Test Coverage

Added a new PP=2, TP=4 Maverick accuracy test.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
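For example, a typical comment to run only a single test stage while disabling fail-fast might look like the following (the stage name here is purely illustrative):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast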

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

  • Bug Fixes

    • Disabled MLP/MoE fusion when pipeline parallelism is enabled and aligned the post-fusion flags with their pre-fusion counterparts, improving stability and consistency for PP>1 in Llama decoder runs.
  • Tests

    • Expanded FP8 test coverage with a new configuration (tensor parallel=4, pipeline parallel=2), increasing confidence in multi-parallel setups.

@mikeiovine requested a review from yuxianq August 25, 2025 15:54
@mikeiovine requested review from a team as code owners August 25, 2025 15:54

coderabbitai bot commented Aug 25, 2025

📝 Walkthrough


Adds pipeline-parallel gating to the fusion toggles in Llama4DecoderLayer so that fusion is disabled when pipeline parallelism is enabled (PP > 1); the post-MLP/MoE fusion flags now mirror the pre-fusion flags. Adds a TODO about supporting fusion with PP > 1. Expands the FP8 integration tests by adding a tp=4, pp=2, ep=1 configuration.

Changes

  • Decoder fusion gating (tensorrt_llm/_torch/models/modeling_llama.py): Added not model_config.mapping.has_pp() to the PRE_MLP_FUSION and PRE_MOE_FUSION conditions; POST_MLP_FUSION and POST_MOE_FUSION now mirror their PRE counterparts; added a TODO (TRTLLM-7809) regarding fusion with PP>1.
  • FP8 test matrix expansion (tests/integration/defs/accuracy/test_llm_api_pytorch.py): Extended the parameterization of test_fp8 with the new tuple (4, 2, 1) and id tp4pp2; no logic changes to the test body.
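
Pieced together from the diff quoted in the review below, the resulting parametrization looks roughly like this (shown on a standalone stub; in the repository, test_fp8 belongs to the Maverick accuracy test class and its body is unchanged):

```python
import pytest


@pytest.mark.parametrize(
    "tp_size,pp_size,ep_size",
    [(8, 1, 1), (8, 1, 4), (8, 1, 8), (4, 1, 1), (4, 1, 2), (4, 1, 4),
     (4, 2, 1)],  # new TP=4, PP=2, EP=1 case added by this PR
    ids=["tp8", "tp8ep4", "tp8ep8", "tp4", "tp4ep2", "tp4ep4", "tp4pp2"])
def test_fp8(tp_size, pp_size, ep_size):
    ...  # test body unchanged (omitted here)
```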

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant C as ModelConfig
  participant L as Llama4DecoderLayer.__init__
  participant F as FusionFlags

  C->>L: provide mapping (TP/PP/DP)
  L->>F: compute PRE_MLP_FUSION = TP-cond && DP/fusion-cond && !PP
  L->>F: compute PRE_MOE_FUSION = TP-cond && DP/fusion-cond && !PP
  L->>F: set POST_MLP_FUSION = PRE_MLP_FUSION
  L->>F: set POST_MOE_FUSION = PRE_MOE_FUSION
  Note over L,F: Fusion disabled when pipeline parallelism is enabled (PP > 1)

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks (1 passed, 2 warnings)

❌ Failed checks (2 warnings)

  • Description Check (⚠️ Warning): The current description provides a brief explanation of the issue and the added test coverage, but it omits the required @coderabbitai summary header from the repository template and lacks the PR Checklist section that the template mandates. Resolution: prepend the @coderabbitai summary annotation at the top of the description and add a "## PR Checklist" section with the standard checklist items (e.g., coding guidelines, test updates, documentation verification) as specified by the template.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

  • Title Check (✅ Passed): The title clearly identifies the NVBugs ID, the fix type, and summarizes the main change (addressing FP8 behavior for Llama 4 when pipeline parallelism exceeds one), making it concise, specific, and directly related to the pull request's core purpose.



@mikeiovine (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16459 [ run ] triggered by Bot


@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

596-599: Optional: pre-skip by world size to reduce runtime skips.

You can avoid runtime skipping and reduce CI noise by tagging only this PP=2 case with a world-size mark. Example:

 @pytest.mark.parametrize(
-    "tp_size,pp_size,ep_size", [(8, 1, 1), (8, 1, 4), (8, 1, 8), (4, 1, 1),
-                                (4, 1, 2), (4, 1, 4), (4, 2, 1)],
-    ids=["tp8", "tp8ep4", "tp8ep8", "tp4", "tp4ep2", "tp4ep4", "tp4pp2"])
+    "tp_size,pp_size,ep_size",
+    [(8, 1, 1), (8, 1, 4), (8, 1, 8), (4, 1, 1), (4, 1, 2), (4, 1, 4),
+     pytest.param(4, 2, 1, marks=pytest.mark.skip_less_mpi_world_size(8))],
+    ids=["tp8", "tp8ep4", "tp8ep8", "tp4", "tp4ep2", "tp4ep4", "tp4pp2"])

Would you like me to apply this across other PP>1 parametrizations for consistency?

tensorrt_llm/_torch/models/modeling_llama.py (2)

437-442: Clarify that MoE fusion flags now govern both MLP and MoE fusion sites.

Since the pre/post fusion call sites check (PRE_MLP_FUSION or PRE_MOE_FUSION) and (POST_MLP_FUSION or POST_MOE_FUSION), and the explicit MLP flag assignments were removed, the *_MOE_FUSION flags effectively drive fusion for both paths. Add a brief comment to prevent future confusion.

 self.fusion_config.PRE_MOE_FUSION = model_config.mapping.has_tp(
 ) and not self.enable_attention_dp and self.enable_fusion and not model_config.mapping.has_pp(
 )
 self.fusion_config.POST_MOE_FUSION = model_config.mapping.has_tp(
 ) and not self.enable_attention_dp and self.enable_fusion and not model_config.mapping.has_pp(
 )
+# Note: PRE/POST_MLP_FUSION are intentionally not set here.
+#       PRE/POST_MOE_FUSION act as the unified gating for the pre/post feed-forward
+#       fused all-reduce (applies to both MLP and MoE layers), while the specific
+#       fusion op (QUANT vs. RMS_NORM) is selected above based on is_mlp_layer.

503-517: Optional: simplify to a single pre-feed-forward fusion gate.

Given PRE_MOE_FUSION is now the effective master switch, consider consolidating to a single PRE_FF_FUSION flag to reduce branching and name ambiguity at call sites:

-        if self.fusion_config.PRE_MLP_FUSION or self.fusion_config.PRE_MOE_FUSION:
+        if self.fusion_config.PRE_MOE_FUSION:  # unified pre-FF fusion gate

Same applies for the post path below. This keeps behavior identical while reducing surface area for future regressions.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6a44e5b and f600ab1.

📒 Files selected for processing (2)
  • tensorrt_llm/_torch/models/modeling_llama.py (1 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent with 4 spaces; do not use tabs
Preserve module namespace when importing: from package.subpackage import foo; then use foo.SomeClass()
Python filenames use snake_case (e.g., some_file.py)
Class names use PascalCase
Function and method names use snake_case
Local variables use snake_case; prefix k for names starting with a number (e.g., k_99th_percentile)
Global variables are UPPER_SNAKE_CASE prefixed with G (e.g., G_MY_GLOBAL)
Constants are UPPER_SNAKE_CASE
Avoid shadowing variables from an outer scope
Initialize all externally visible members of a class in __init__
For interfaces used outside a file, prefer docstrings over comments; comments for internal code or local interfaces
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Attributes and variables can be documented inline with trailing docstrings under the class or module
Avoid using reflection when easily avoidable; prefer explicit parameters/constructs over dict(**locals())
In try/except, catch the narrowest exception types possible
For duck-typing try/except, keep try body minimal and place logic in else after attribute existence checks
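
A small, purely hypothetical snippet illustrating several of these conventions (snake_case names, a PascalCase class name, members initialized in __init__, and a Google-style docstring):

```python
from typing import List

DEFAULT_TIMEOUT_S = 90  # Constant: UPPER_SNAKE_CASE.


class PipelineMonitor:
    """Tracks stage latencies reported for a CI pipeline.

    Attributes:
        pipeline_id: Identifier of the pipeline being monitored.
        latencies_ms: Recorded stage latencies in milliseconds.
    """

    def __init__(self, pipeline_id: str):
        # Externally visible members are initialized in __init__.
        self.pipeline_id = pipeline_id
        self.latencies_ms: List[float] = []

    def tail_latency_ms(self) -> float:
        """Returns the 99th-percentile latency, or 0.0 if nothing was recorded."""
        if not self.latencies_ms:
            return 0.0
        ordered = sorted(self.latencies_ms)
        # Local variable whose name starts with a number gets a "k" prefix.
        k_99th_percentile = ordered[int(0.99 * (len(ordered) - 1))]
        return k_99th_percentile
```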

Files:

  • tensorrt_llm/_torch/models/modeling_llama.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.{h,hpp,hxx,hh,c,cc,cpp,cxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA Apache-2.0 copyright header with current year to all source files

Files:

  • tensorrt_llm/_torch/models/modeling_llama.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧠 Learnings (2)
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_llama.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/models/modeling_llama.py (1)
tensorrt_llm/mapping.py (2)
  • has_tp (423-424)
  • has_pp (435-436)
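For context, these Mapping predicates are assumed to be simple size checks; the following sketch reflects that assumption only (the authoritative implementation is in tensorrt_llm/mapping.py at the lines listed above):

```python
class Mapping:  # simplified sketch, not the real class
    def __init__(self, tp_size: int = 1, pp_size: int = 1):
        self.tp_size = tp_size
        self.pp_size = pp_size

    def has_tp(self) -> bool:
        # Assumed: tensor parallelism is enabled when tp_size > 1.
        return self.tp_size > 1

    def has_pp(self) -> bool:
        # Assumed: pipeline parallelism is enabled when pp_size > 1,
        # which is why the fix targets PP > 1.
        return self.pp_size > 1
```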
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

596-599: PP>1 FP8 Maverick case covered — good addition.

Adding (tp_size, pp_size, ep_size) = (4, 2, 1) with id tp4pp2 directly targets the reported PP>1 FP8 issue on Maverick and will exercise the fix path. The memory/device-count guards below also look sufficient to avoid spurious CI failures.

tensorrt_llm/_torch/models/modeling_llama.py (2)

437-442: Correct: disable pre/post fused AR when PP>1.

Gating PRE_MOE_FUSION/POST_MOE_FUSION with not model_config.mapping.has_pp() is the right fix to avoid fused all-reduce across pipeline boundaries. This aligns with the PR objective for FP8 with PP>1, while preserving fusion when PP==1.


460-467: Defaults Verified: No Updates Required

I’ve confirmed that in tensorrt_llm/_torch/models/modeling_utils.py, EagerFusionConfig explicitly initializes PRE_MLP_FUSION and POST_MLP_FUSION (as well as the MOE flags) to False, so there’s no risk of unintended fusion when PP>1. All fusion-related flags are safely defaulted off.
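
A sketch of what those defaults amount to, based on the description above (the actual EagerFusionConfig lives in tensorrt_llm/_torch/models/modeling_utils.py and may carry additional fields):

```python
from dataclasses import dataclass


@dataclass
class EagerFusionConfig:  # simplified sketch of the assumed defaults
    PRE_MLP_FUSION: bool = False
    POST_MLP_FUSION: bool = False
    PRE_MOE_FUSION: bool = False
    POST_MOE_FUSION: bool = False
```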

@tensorrt-cicd (Collaborator)

PR_Github #16459 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12366 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16566 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16566 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12441 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run

1 similar comment
@mikeiovine (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16729 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16729 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #12557 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16877 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16877 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12674 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17012 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17012 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12775 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #18384 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #18384 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13791 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #18659 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #18659 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14016 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19029 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19029 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14272 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19204 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19204 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14416 completed with status: 'FAILURE'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19379 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19379 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14555 completed with status: 'ABORTED'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19611 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19611 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14750 completed with status: 'ABORTED'

@mikeiovine (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19711 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19711 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14835 completed with status: 'SUCCESS'

@mikeiovine merged commit 42c2ec3 into NVIDIA:main Sep 24, 2025
5 checks passed
@mikeiovine deleted the fix-llama4-pp branch September 24, 2025 16:16
mikeiovine added a commit to mikeiovine/TensorRT-LLM that referenced this pull request Sep 24, 2025
mikeiovine added a commit to mikeiovine/TensorRT-LLM that referenced this pull request Sep 24, 2025