
Conversation

@tongyuantongyu (Member) commented Aug 27, 2025

Add parallel config for more PyTorch unittest suites.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast behavior on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run the build, package, and sanity-check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Supported backends: pytorch, cpp, tensorrt, triton. Example: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force-run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline plus the specified test stages. Example: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
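
For example, a typical invocation combining several of the flags above (this mirrors the bot commands used later in this thread; the particular flag values are illustrative, not required):

/bot run --disable-fail-fast --gpu-type "H100_PCIe,B200_PCIe" --test-backend "pytorch"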

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action also kills all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

  • Documentation
    • Updated an internal comment to reference the correct test location; no functional changes.
  • Tests
    • Split a single THOP test entry into separate parallel and serial variants across multiple test lists and updated test duration mappings accordingly.
  • Impact
    • No user-facing changes or API modifications; no action required.

@coderabbitai bot (Contributor) commented Aug 27, 2025

📝 Walkthrough

Updated a single comment in a CUDA header to reference the parallel THOP test path; split the combined THOP test entry into separate parallel and serial entries across two integration test lists; updated test duration mappings to include separate durations for parallel and serial THOP. No functional code or public interfaces changed.

Changes

Cohort / File(s) — Summary

  • Comment update — cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh: single-line comment changed to reference tests/unittest/_torch/thop/parallel/deep_gemm_tests.py. No code or logic changes.
  • Test duration mapping — tests/integration/defs/.test_durations: replaced test_unittests_v2[unittest/_torch/thop] with two entries, test_unittests_v2[unittest/_torch/thop/parallel] (311.58 s) and test_unittests_v2[unittest/_torch/thop/serial] (18.96 s); the previous combined entry was removed.
  • Integration test lists — tests/integration/test_lists/test-db/l0_b200.yml, tests/integration/test_lists/test-db/l0_h100.yml: replaced unittest/_torch/thop with the two paths unittest/_torch/thop/parallel and unittest/_torch/thop/serial; no other edits. A sketch of the resulting entries follows this list.
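
A minimal sketch of the split entries, assuming .test_durations is a flat JSON map from test ids to seconds and the test-db YAML files list unittest paths as plain list items (the surrounding file structure is assumed; only the two-way split itself comes from this PR):

In tests/integration/defs/.test_durations:

  "test_unittests_v2[unittest/_torch/thop/parallel]": 311.58,
  "test_unittests_v2[unittest/_torch/thop/serial]": 18.96

In tests/integration/test_lists/test-db/l0_h100.yml (likewise l0_b200.yml):

  - unittest/_torch/thop/parallel
  - unittest/_torch/thop/serial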

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • QiJune



📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 82d710f and 89821d2.

⛔ Files ignored due to path filters (1)
  • tests/integration/defs/agg_unit_mem_df.csv is excluded by !**/*.csv
📒 Files selected for processing (4)
  • cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh (1 hunks)
  • tests/integration/defs/.test_durations (1 hunks)
  • tests/integration/test_lists/test-db/l0_b200.yml (1 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/integration/test_lists/test-db/l0_h100.yml
  • tests/integration/test_lists/test-db/l0_b200.yml
  • tests/integration/defs/.test_durations


@tongyuantongyu (Member Author):

/bot run --gpu-type "H100_PCIe,B200_PCIe" --test-backend "pytorch"

@tensorrt-cicd (Collaborator):

PR_Github #16680 [ run ] triggered by Bot

@QiJune (Collaborator) left a comment:


LGTM

@tensorrt-cicd (Collaborator):

PR_Github #16680 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12520 (Partly Tested) completed with status: 'FAILURE'

@tongyuantongyu force-pushed the ytong/unittest_parallel_update2 branch from 787da67 to de54920 on August 28, 2025 02:52
@tongyuantongyu (Member Author):

/bot run --gpu-type "H100_PCIe,B200_PCIe" --test-backend "pytorch"

@tensorrt-cicd (Collaborator):

PR_Github #16787 [ run ] triggered by Bot

@tongyuantongyu force-pushed the ytong/unittest_parallel_update2 branch from de54920 to 564443f on August 28, 2025 07:25
@tongyuantongyu requested a review from a team as a code owner on August 28, 2025 07:25
@tongyuantongyu (Member Author):

/bot run --gpu-type "H100_PCIe,B200_PCIe" --test-backend "pytorch"

@coderabbitai bot (Contributor) left a comment:


Actionable comments posted: 0

🧹 Nitpick comments (2)
cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh (2)

384-389: Minor robustness: cast index to offset width before arithmetic to avoid signed/width surprises.

Make the promotion explicit; keeps device/host behavior consistent across T_index/T_offset combos.

Apply this diff:

 template <typename T_offset, typename T_index>
 __host__ __device__ __forceinline__ T_offset compute_padded_offset(T_offset offset, T_index problem_idx)
 {
   // This formulation ensures that padded_offset[i + 1] - padded_offset[i] >= offset[i + 1] - offset[i].
   constexpr T_offset alignment = 32;
-  return (offset + problem_idx * (alignment - 1)) / alignment * alignment;
+  auto addend = static_cast<T_offset>(problem_idx) * (alignment - 1);
+  return (offset + addend) / alignment * alignment;
 }
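
As a quick sanity check of the invariant cited in that comment, here is a standalone host-side C++ sketch (not part of the PR; the offsets are illustrative) that mirrors the padding formula with alignment = 32 and asserts that padded gaps never shrink below raw gaps:

#include <cassert>
#include <cstdint>
#include <vector>

// Mirrors compute_padded_offset from scheduler.cuh, specialized to int64_t.
int64_t compute_padded_offset(int64_t offset, int64_t problem_idx)
{
    constexpr int64_t alignment = 32;
    return (offset + problem_idx * (alignment - 1)) / alignment * alignment;
}

int main()
{
    // Illustrative non-decreasing per-problem offsets (e.g. cumulative token counts).
    std::vector<int64_t> offsets{0, 5, 37, 40};
    for (size_t i = 0; i + 1 < offsets.size(); ++i)
    {
        int64_t lo = compute_padded_offset(offsets[i], static_cast<int64_t>(i));
        int64_t hi = compute_padded_offset(offsets[i + 1], static_cast<int64_t>(i + 1));
        // padded_offset[i + 1] - padded_offset[i] >= offset[i + 1] - offset[i]:
        // each problem's rows still fit after aligning every start to a 32 boundary.
        assert(hi - lo >= offsets[i + 1] - offsets[i]);
    }
    return 0;
}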

382-382: Update comment to reference correct test file without hard-coding full path

The existing comment points to a non-existent paradeep_gemm_tests.py; reference the actual THOP test and note synchronization.

--- a/cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh
+++ b/cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh
@@ -382,1 +382,2 @@
-// Need to keep the same as the one in tests/unittest/_torch/thop/paradeep_gemm_tests.py
+// Keep in sync with the THOP reference test (deep_gemm_tests.py) under tests/unittest/_torch/thop/parallel.
+// If the padding logic here changes, update compute_padded_offset in that test accordingly.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 4541655 and 564443f.

⛔ Files ignored due to path filters (1)
  • tests/integration/defs/agg_unit_mem_df.csv is excluded by !**/*.csv
📒 Files selected for processing (1)
  • cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh (1 hunks)
🔇 Additional comments (1)
cpp/include/tensorrt_llm/deep_gemm/scheduler.cuh (1)

1-25: Clarify dual SPDX headers (MIT + Apache-2.0) to avoid licensing ambiguity.

This file carries both DeepSeek MIT and NVIDIA Apache-2.0 SPDX blocks. Confirm intended licensing (dual-licensed vs. single) and align with repo policy to prevent compliance issues.

Would you like a follow-up PR to standardize headers across deep_gemm?

@tensorrt-cicd (Collaborator):

PR_Github #16815 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #16787 [ run ] completed with state ABORTED
/LLM/main/L0_MergeRequest_PR pipeline #12602 (Partly Tested) completed with status: 'FAILURE'

@tongyuantongyu (Member Author):

/bot kill

@tensorrt-cicd (Collaborator):

PR_Github #16819 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #16815 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator):

PR_Github #16819 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit 564443f

@tongyuantongyu force-pushed the ytong/unittest_parallel_update2 branch from 564443f to 82d710f on August 28, 2025 08:14
@tongyuantongyu (Member Author):

/bot run --gpu-type "H100_PCIe,B200_PCIe" --test-backend "pytorch"

@tensorrt-cicd (Collaborator):

PR_Github #16827 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #16827 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12632 (Partly Tested) completed with status: 'FAILURE'

@tongyuantongyu force-pushed the ytong/unittest_parallel_update2 branch from 82d710f to 89821d2 on August 28, 2025 14:15
@tongyuantongyu (Member Author):

/bot run --gpu-type "H100_PCIe,B200_PCIe" --test-backend "pytorch"

@tensorrt-cicd (Collaborator):

PR_Github #16870 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #16870 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12668 (Partly Tested) completed with status: 'SUCCESS'

@QiJune (Collaborator) commented Aug 29, 2025:

/bot skip --comment "B200 and H100 pipelines passed"

@QiJune enabled auto-merge (squash) on August 29, 2025 00:51
@litaotju disabled auto-merge on August 29, 2025 01:27
@litaotju merged commit ccb800f into NVIDIA:main on Aug 29, 2025
6 of 7 checks passed
@tongyuantongyu deleted the ytong/unittest_parallel_update2 branch on August 29, 2025 05:28
chang-l pushed a commit to chang-l/TensorRT-LLM that referenced this pull request Sep 2, 2025