
Conversation

@pamelap-nvidia pamelap-nvidia (Collaborator) commented Aug 18, 2025

Summary by CodeRabbit

  • Tests
    • Added runtime guards to skip MOE TRTLLM NVFP4 tests on unsupported GPU SM versions (120/121) across multiple suites and variants.
    • Removed two NVFP4 test entries for a specific Qwen3 30B latency variant; other latency variants remain.
    • These changes only affect test selection and CI test runs; no product behavior or public APIs were changed.

Description

We don't support SM120 with the TRTLLM MOE backend, so the affected test cases are skipped.

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
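
For example, a hypothetical invocation that disables fail-fast and runs only PyTorch-backend test stages on H100 GPUs (all flags are documented above; this particular combination is chosen purely for illustration):

/bot run --disable-fail-fast --gpu-type "H100_PCIe" --test-backend "pytorch"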

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

Kill all running builds associated with the pull request.

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

@pamelap-nvidia pamelap-nvidia requested a review from a team as a code owner August 18, 2025 19:37
coderabbitai bot (Contributor) commented Aug 18, 2025

Caution

Review failed

The head commit changed during the review from 0565af1 to b857eae.

📝 Walkthrough

Adds runtime guards to skip NVFP4 tests when the MOE TRTLLM backend runs on GPUs with SM version 120 or 121, removes two NVFP4 test-list entries, and adds an assert in the fused MoE TRTLLM generator constructor forbidding SM120/121. No public signatures changed.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Test runtime skips: `tests/integration/defs/accuracy/test_llm_api_pytorch.py` | Inserted early checks in multiple NVFP4-related tests to call `get_sm_version()` and skip when the backend is MOE TRTLLM and the SM version is 120 or 121; no other logic or signature changes. |
| QA test-list removals: `tests/integration/test_lists/qa/llm_function_rtx6kd.txt` | Removed two NVFP4 test entries for `TestQwen3_30B_A3B` that referenced `latency_moe_trtllm-torch_compile` (False and True); other variants unchanged. |
| Runtime assert in fused MoE generator: `tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py` | Added an SM-version runtime check in `TRTLLMGenFusedMoE.__init__` asserting SM < 120 (error message: "TRTLLMGenFusedMoE does not support SM120 and SM121"); no API/signature changes otherwise. |
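
As a rough illustration of the last row, the constructor guard could look like the minimal sketch below. This is not the PR's actual diff: the import path for `get_sm_version` and the constructor signature are assumptions made for illustration.

```python
# Minimal sketch of the SM guard described above -- not the exact PR diff.
# Assumption: get_sm_version() is exposed by tensorrt_llm._utils; the real
# module may differ.
from tensorrt_llm._utils import get_sm_version


class TRTLLMGenFusedMoE:
    def __init__(self, *args, **kwargs):
        sm = get_sm_version()
        # Fail fast with a clear message instead of a late, low-level kernel error.
        assert sm < 120, "TRTLLMGenFusedMoE does not support SM120 and SM121"
        # ... remainder of initialization ...
```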

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Runner as Test Runner
  participant Test as NVFP4 Test
  participant Env as Backend/SM Detector

  Runner->>Test: start test
  Test->>Env: get_sm_version(), detect backend
  Env-->>Test: SM value, backend type
  alt backend == MOE TRTLLM and SM == 120 or 121
    Test-->>Runner: skip("MOE TRTLLM backend does not support SM version 120 or 121")
  else
    Test-->>Runner: proceed with test execution
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • hlu1
  • kaiyux
  • chzblych

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (5)

1331-1335: Reduce repetition: cache SM version and use membership test

Minor readability/maintainability improvement: call get_sm_version() once and use an `in` membership check instead of two comparisons.

Apply this diff:

```diff
-        if moe_backend == "TRTLLM" and (get_sm_version() == 120
-                                        or get_sm_version() == 121):
-            pytest.skip(
-                "MOE TRTLLM backend does not support SM version 120 or 121")
+        sm = get_sm_version()
+        if moe_backend == "TRTLLM" and sm in (120, 121):
+            pytest.skip("MOE TRTLLM backend does not support SM version 120 or 121")
```

1383-1387: Same nit: avoid duplicate get_sm_version() calls

Mirror the same simplification here for consistency.

```diff
-        if moe_backend == "TRTLLM" and (get_sm_version() == 120
-                                        or get_sm_version() == 121):
-            pytest.skip(
-                "MOE TRTLLM backend does not support SM version 120 or 121")
+        sm = get_sm_version()
+        if moe_backend == "TRTLLM" and sm in (120, 121):
+            pytest.skip("MOE TRTLLM backend does not support SM version 120 or 121")
```

1601-1605: Same nit: collapse condition and cache SM version

Keeps the condition concise and avoids redundant calls.

```diff
-        if moe_backend == "TRTLLM" and (get_sm_version() == 120
-                                        or get_sm_version() == 121):
-            pytest.skip(
-                "MOE TRTLLM backend does not support SM version 120 or 121")
+        sm = get_sm_version()
+        if moe_backend == "TRTLLM" and sm in (120, 121):
+            pytest.skip("MOE TRTLLM backend does not support SM version 120 or 121")
```

2163-2167: Same nit: use in (120, 121) and a single SM lookup

Consistent small cleanup for readability.

```diff
-        if moe_backend == "TRTLLM" and (get_sm_version() == 120
-                                        or get_sm_version() == 121):
-            pytest.skip(
-                "MOE TRTLLM backend does not support SM version 120 or 121")
+        sm = get_sm_version()
+        if moe_backend == "TRTLLM" and sm in (120, 121):
+            pytest.skip("MOE TRTLLM backend does not support SM version 120 or 121")
```

2288-2292: Same nit: simplify the condition and avoid duplicate calls

Tiny refactor for clarity.

```diff
-        if moe_backend == "TRTLLM" and (get_sm_version() == 120
-                                        or get_sm_version() == 121):
-            pytest.skip(
-                "MOE TRTLLM backend does not support SM version 120 or 121")
+        sm = get_sm_version()
+        if moe_backend == "TRTLLM" and sm in (120, 121):
+            pytest.skip("MOE TRTLLM backend does not support SM version 120 or 121")
```
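
Going one step beyond these per-test nitpicks, the repeated guard could also be hoisted into a shared helper. The sketch below is hypothetical (not proposed in the review); the helper name, the constant, and the import location are illustrative assumptions only:

```python
# Hypothetical shared helper for the guard repeated in the five tests above.
import pytest

# Assumption: get_sm_version is importable from the tests' existing utilities.
from defs.conftest import get_sm_version

UNSUPPORTED_TRTLLM_MOE_SMS = (120, 121)  # SM versions the TRTLLM MOE backend rejects


def skip_if_trtllm_moe_unsupported(moe_backend: str) -> None:
    """Skip the calling test when the TRTLLM MOE backend targets SM120/121."""
    if moe_backend == "TRTLLM" and get_sm_version() in UNSUPPORTED_TRTLLM_MOE_SMS:
        pytest.skip("MOE TRTLLM backend does not support SM version 120 or 121")
```

Each affected test would then open with a single call, skip_if_trtllm_moe_unsupported(moe_backend), keeping the skip message in one place.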
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6fda8dd and 7e4022a.

📒 Files selected for processing (1)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@pamelap-nvidia (Collaborator Author)

/bot skip --comment "test only"

@tensorrt-cicd (Collaborator)

PR_Github #15660 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15660 [ skip ] completed with state SUCCESS
Skipping testing for commit 8b4adfa

@schetlur-nv schetlur-nv (Collaborator) left a comment

@pamelap-nvidia Do we get a clear error message from TRTLLM MoE backend if it is run on these architectures?
As long as that is the case, just skipping the test is fine.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b4adfa and 66b9f22.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py

71-71: Undefined name get_sm_version

(F821)
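
The F821 above indicates `get_sm_version` was referenced in fused_moe_trtllm_gen.py without a corresponding import. A plausible fix is a one-line import near the top of the module; the exact source module is an assumption here, since the diff is not shown:

```python
# Hypothetical fix for the Ruff F821 finding: import the helper before first use.
# Assumption: the helper is exposed by tensorrt_llm._utils.
from tensorrt_llm._utils import get_sm_version
```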

@pamelap-nvidia (Collaborator Author)

/bot run

@pamelap-nvidia (Collaborator Author)

> @pamelap-nvidia Do we get a clear error message from TRTLLM MoE backend if it is run on these architectures? As long as that is the case, just skipping the test is fine.

I added an explicit check in the TRTLLM MoE module. The previous error message came from lower-level checks.

@tensorrt-cicd (Collaborator)

PR_Github #15931 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15931 [ run ] completed with state FAILURE
/LLM/release-1.0/L0_MergeRequest_PR pipeline #238 completed with status: 'FAILURE'

@pamelap-nvidia (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #15934 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15934 [ run ] completed with state FAILURE
/LLM/release-1.0/L0_MergeRequest_PR pipeline #239 completed with status: 'FAILURE'

@pamelap-nvidia (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #15947 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15947 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #240 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@pamelap-nvidia pamelap-nvidia merged commit 1e5a6be into NVIDIA:release/1.0 Aug 21, 2025
4 checks passed
dominicshanshan pushed commits to dominicshanshan/TensorRT-LLM that referenced this pull request (Sep 5-7, 2025)