[https://nvbugs/5473781][fix] Fix llama 4 FP8 for PP>1 #7220
Conversation
📝 Walkthrough
Adds pipeline-parallel gating to the fusion toggles in Llama4DecoderLayer so that fusion is disabled when PP>1; the post-MLP/MoE fusion flags now mirror the pre-fusion flags. Adds a TODO about supporting fusion with PP>1. Expands the FP8 integration tests by adding a tp=4, pp=2, ep=1 configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as ModelConfig
    participant L as Llama4DecoderLayer.__init__
    participant F as FusionFlags
    C->>L: provide mapping (TP/PP/DP)
    L->>F: compute PRE_MLP_FUSION = TP-cond && DP/fusion-cond && !PP
    L->>F: compute PRE_MOE_FUSION = TP-cond && DP/fusion-cond && !PP
    L->>F: set POST_MLP_FUSION = PRE_MLP_FUSION
    L->>F: set POST_MOE_FUSION = PRE_MOE_FUSION
    Note over L,F: Fusion disabled when pipeline parallelism is present (PP>1)
```
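To make that gate concrete, here is a minimal, self-contained sketch of the boolean logic; `fusion_enabled` is an illustrative helper, not a TensorRT-LLM API, and the flag semantics are taken from the walkthrough above.

```python
def fusion_enabled(has_tp: bool, has_pp: bool, enable_attention_dp: bool,
                   enable_fusion: bool) -> bool:
    """Pre-feed-forward fused all-reduce gate; the post flag mirrors it."""
    return (has_tp and not enable_attention_dp and enable_fusion
            and not has_pp)  # `not has_pp` is the new PP>1 guard


# tp4 (PP=1) keeps fusion; tp4pp2 (PP=2) now disables it.
assert fusion_enabled(has_tp=True, has_pp=False,
                      enable_attention_dp=False, enable_fusion=True)
assert not fusion_enabled(has_tp=True, has_pp=True,
                          enable_attention_dp=False, enable_fusion=True)
```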
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks: 1 passed, 2 warnings.
/bot run
PR_Github #16459 [ run ] triggered by Bot
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)
Lines 596-599: Optional: pre-skip by world size to reduce runtime skips.

You can avoid runtime skipping and reduce CI noise by tagging only this PP=2 case with a world-size mark. Example:

```diff
 @pytest.mark.parametrize(
-    "tp_size,pp_size,ep_size", [(8, 1, 1), (8, 1, 4), (8, 1, 8), (4, 1, 1),
-                                (4, 1, 2), (4, 1, 4), (4, 2, 1)],
-    ids=["tp8", "tp8ep4", "tp8ep8", "tp4", "tp4ep2", "tp4ep4", "tp4pp2"])
+    "tp_size,pp_size,ep_size",
+    [(8, 1, 1), (8, 1, 4), (8, 1, 8), (4, 1, 1), (4, 1, 2), (4, 1, 4),
+     pytest.param(4, 2, 1, marks=pytest.mark.skip_less_mpi_world_size(8))],
+    ids=["tp8", "tp8ep4", "tp8ep8", "tp4", "tp4ep2", "tp4ep4", "tp4pp2"])
```

Would you like me to apply this across other PP>1 parametrizations for consistency?
tensorrt_llm/_torch/models/modeling_llama.py (2)
Lines 437-442: Clarify that MoE fusion flags now govern both MLP and MoE fusion sites.

Since the pre/post fusion call sites check `(PRE_MLP_FUSION or PRE_MOE_FUSION)` and `(POST_MLP_FUSION or POST_MOE_FUSION)`, and the explicit MLP flag assignments were removed, `*_MOE_FUSION` effectively drives fusion for both paths. Add a brief comment to prevent future confusion:

```diff
 self.fusion_config.PRE_MOE_FUSION = model_config.mapping.has_tp(
 ) and not self.enable_attention_dp and self.enable_fusion and not model_config.mapping.has_pp(
 )
 self.fusion_config.POST_MOE_FUSION = model_config.mapping.has_tp(
 ) and not self.enable_attention_dp and self.enable_fusion and not model_config.mapping.has_pp(
 )
+# Note: PRE/POST_MLP_FUSION are intentionally not set here.
+# PRE/POST_MOE_FUSION act as the unified gating for the pre/post feed-forward
+# fused all-reduce (applies to both MLP and MoE layers), while the specific
+# fusion op (QUANT vs. RMS_NORM) is selected above based on is_mlp_layer.
```
Lines 503-517: Optional: simplify to a single pre-feed-forward fusion gate.

Given that `PRE_MOE_FUSION` is now the effective master switch, consider consolidating to a single `PRE_FF_FUSION` flag to reduce branching and name ambiguity at call sites:

```diff
-        if self.fusion_config.PRE_MLP_FUSION or self.fusion_config.PRE_MOE_FUSION:
+        if self.fusion_config.PRE_MOE_FUSION:  # unified pre-FF fusion gate
```

The same applies to the post path below. This keeps behavior identical while reducing the surface area for future regressions.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- tensorrt_llm/_torch/models/modeling_llama.py (1 hunks)
- tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

- Code must target Python 3.8+
- Indent with 4 spaces; do not use tabs
- Preserve module namespace when importing: `from package.subpackage import foo`; then use `foo.SomeClass()`
- Python filenames use snake_case (e.g., some_file.py)
- Class names use PascalCase
- Function and method names use snake_case
- Local variables use snake_case; prefix `k` for names starting with a number (e.g., `k_99th_percentile`)
- Global variables are UPPER_SNAKE_CASE prefixed with `G` (e.g., `G_MY_GLOBAL`)
- Constants are UPPER_SNAKE_CASE
- Avoid shadowing variables from an outer scope
- Initialize all externally visible members of a class in `__init__`
- For interfaces used outside a file, prefer docstrings over comments; use comments for internal code or local interfaces
- Use Google-style docstrings for classes and functions (Sphinx-parsable)
- Attributes and variables can be documented inline with trailing docstrings under the class or module
- Avoid reflection when easily avoidable; prefer explicit parameters/constructs over `dict(**locals())`
- In try/except, catch the narrowest exception types possible
- For duck-typing try/except, keep the try body minimal and place logic in an else block after attribute existence checks
Files:
tensorrt_llm/_torch/models/modeling_llama.py
tests/integration/defs/accuracy/test_llm_api_pytorch.py
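As a quick illustration of several of the Python guidelines above working together (a hypothetical module, not code from this PR):

```python
"""Illustrative module following the guidelines above (hypothetical example)."""
from typing import List  # typing module keeps the example Python 3.8-compatible

G_REQUEST_COUNT = 0  # global: UPPER_SNAKE_CASE with a G prefix


class LatencyTracker:
    """Tracks request latency samples.

    Attributes:
        samples_ms: Recorded latencies in milliseconds.
    """

    def __init__(self) -> None:
        # Externally visible members are initialized in __init__.
        self.samples_ms: List[float] = []

    def record(self, latency_ms: float) -> None:
        """Records one latency sample."""
        global G_REQUEST_COUNT
        G_REQUEST_COUNT += 1
        self.samples_ms.append(latency_ms)

    def k_99th_percentile(self) -> float:
        """Returns the 99th-percentile latency (k prefix: name starts with a number)."""
        if not self.samples_ms:
            return 0.0
        ordered = sorted(self.samples_ms)
        return ordered[int(0.99 * (len(ordered) - 1))]
```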
**/*.{h,hpp,hxx,hh,c,cc,cpp,cxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA Apache-2.0 copyright header with current year to all source files
Files:
tensorrt_llm/_torch/models/modeling_llama.py
tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧠 Learnings (2)
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Weight-Decomposed Low-Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/models/modeling_llama.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/models/modeling_llama.py (1)
tensorrt_llm/mapping.py (2)
- has_tp (lines 423-424)
- has_pp (lines 435-436)
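Judging from the two-line ranges above, both helpers are presumably simple predicates of this shape (an assumption based on the call sites, not the verified source):

```python
class Mapping:  # fragment of tensorrt_llm/mapping.py (presumed shape)
    tp_size: int = 1
    pp_size: int = 1

    def has_tp(self) -> bool:
        return self.tp_size > 1  # TP active only when tensor-parallel size > 1

    def has_pp(self) -> bool:
        return self.pp_size > 1  # PP active only when pipeline-parallel size > 1
```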
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)
Lines 596-599: PP>1 FP8 Maverick case covered — good addition.

Adding `(tp_size, pp_size, ep_size) = (4, 2, 1)` with id `tp4pp2` directly targets the reported PP>1 FP8 issue on Maverick and will exercise the fix path. The memory/device-count guards below also look sufficient to avoid spurious CI failures.

tensorrt_llm/_torch/models/modeling_llama.py (2)
Lines 437-442: Correct: disable pre/post fused AR when PP>1.

Gating `PRE_MOE_FUSION`/`POST_MOE_FUSION` with `not model_config.mapping.has_pp()` is the right fix to avoid fused all-reduce across pipeline boundaries. This aligns with the PR objective for FP8 with PP>1, while preserving fusion when PP==1.
Lines 460-467: Defaults Verified: No Updates Required

I've confirmed that in tensorrt_llm/_torch/models/modeling_utils.py, `EagerFusionConfig` explicitly initializes `PRE_MLP_FUSION` and `POST_MLP_FUSION` (as well as the MOE flags) to `False`, so there's no risk of unintended fusion when PP>1. All fusion-related flags are safely defaulted off.
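For reference, a flag container with the defaults described above would look like this sketch; the field list is inferred from this review, and the real definition in modeling_utils.py may carry more fields.

```python
from dataclasses import dataclass


@dataclass
class EagerFusionConfig:
    """Sketch: all fusion flags default to off, so PP>1 layers stay unfused."""
    PRE_MLP_FUSION: bool = False
    POST_MLP_FUSION: bool = False
    PRE_MOE_FUSION: bool = False
    POST_MOE_FUSION: bool = False
```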
PR_Github #16459 [ run ] completed with state
/bot run
PR_Github #16566 [ run ] triggered by Bot
PR_Github #16566 [ run ] completed with state
/bot run
/bot run
PR_Github #16729 [ run ] triggered by Bot
PR_Github #16729 [ run ] completed with state
Force-pushed from 3f986ac to da5d44b
/bot run
PR_Github #16877 [ run ] triggered by Bot
PR_Github #16877 [ run ] completed with state
Signed-off-by: Mike Iovine <[email protected]>
Force-pushed from da5d44b to dbab58d
/bot run
PR_Github #17012 [ run ] triggered by Bot
PR_Github #17012 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #18384 [ run ] triggered by Bot
PR_Github #18384 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #18659 [ run ] triggered by Bot
PR_Github #18659 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #19029 [ run ] triggered by Bot
PR_Github #19029 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #19204 [ run ] triggered by Bot
PR_Github #19204 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #19379 [ run ] triggered by Bot
PR_Github #19379 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #19611 [ run ] triggered by Bot
PR_Github #19611 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #19711 [ run ] triggered by Bot
PR_Github #19711 [ run ] completed with state
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Description
Fix Llama 4 FP8 with pipeline parallelism. The all-reduce (AR) fusion is not handled correctly when pipeline parallelism is enabled, so disable the fusion when PP>1 to fix it.
Test Coverage
Added a new PP=2, TP=4 Maverick accuracy test.
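For illustration, the tp4pp2 case corresponds to an LLM API configuration along these lines; this is a sketch, with a placeholder model path and the accuracy-harness plumbing omitted.

```python
from tensorrt_llm import LLM

# Placeholder path; the real test targets the Maverick FP8 checkpoint.
llm = LLM(
    model="/path/to/Llama-4-Maverick-FP8",
    tensor_parallel_size=4,      # tp_size=4
    pipeline_parallel_size=2,    # pp_size=2 exercises the fixed PP path
    moe_expert_parallel_size=1,  # ep_size=1
)
```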
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity-check stages. Note: does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests in addition to the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
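For example, the form used repeatedly on this PR combines the run subcommand with one of these flags:

```
/bot run --disable-fail-fast
```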
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
Summary by CodeRabbit

Bug Fixes
- Fixed Llama 4 FP8 accuracy with pipeline parallelism by disabling all-reduce fusion when PP>1.

Tests
- Added a tp=4, pp=2, ep=1 FP8 Maverick accuracy test configuration.