[None][ci] Correct docker args for GPU devices and remove some stale CI codes #7417
Conversation
Force-pushed from 83568c5 to 96d75d8
/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1, GB200-PyTorch-1, GB200-PyTorch-2, GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast |
PR_Github #17158 [ run ] triggered by Bot |
Force-pushed from 3b45415 to 342fb7b
/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1, GB200-PyTorch-1, GB200-PyTorch-2, GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast |
PR_Github #17196 [ run ] triggered by Bot |
PR_Github #17158 [ run ] completed with state |
Force-pushed from 342fb7b to 0c43f83
/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1, DGX_B200-4_GPUs-PyTorch-1, GB200-PyTorch-1, GB200-PyTorch-2, GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast |
PR_Github #17203 [ run ] triggered by Bot |
PR_Github #17196 [ run ] completed with state |
PR_Github #17203 [ run ] completed with state |
Force-pushed from 0c43f83 to d7fcc95
/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1, DGX_B200-4_GPUs-PyTorch-1, GB200-PyTorch-1, GB200-PyTorch-2, GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast |
PR_Github #17233 [ run ] triggered by Bot |
PR_Github #17233 [ run ] completed with state |
Force-pushed from d7fcc95 to 69e0679
/bot run --stage-list "DGX_B200-8_GPUs-PyTorch-1, DGX_B200-4_GPUs-PyTorch-1, GB200-PyTorch-1, GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast |
PR_Github #17262 [ run ] triggered by Bot |
📝 Walkthrough

The Jenkins test orchestration removes Docker-on-node execution in favor of Kubernetes-only pods, simplifies the launchTestJobs API, revises GPU argument derivation and logging, augments hardware introspection, hardens Slurm output handling, and adds targeted Docker error filtering. The Slurm runner expands GPU diagnostics by chaining multiple nvidia-smi commands.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Dev as Developer
    participant J as Jenkins Pipeline
    participant G as L0_Test.groovy
    participant K as Kubernetes Pod
    participant S as Slurm (optional)
    participant D as Docker (inside pod)
    Dev->>J: Trigger L0 pipeline
    J->>G: launchTestJobs(pipeline, testFilter)
    Note right of G: GPU args derived from NV_GPU/CUDA_VISIBLE_DEVICES<br/>Compute dockerArgs and log final args
    G->>K: runInKubernetes(..., test runners)
    activate K
    K->>D: Start container with dockerArgs
    Note over D: On start: nproc, free -g, hostname,<br/>nvidia-smi, nvidia-smi -q, topo -m
    alt Slurm-based tests
        D->>S: srun ... (per test)
        Note over S: slurm output filename %j -> actual JobID
    end
    D-->>K: Test results / logs
    deactivate K
    K-->>G: Completion status
    G-->>J: Aggregate results
    J-->>Dev: Pipeline status
    rect rgba(230,245,255,0.6)
        Note over G,K: New: Kubernetes-only execution path
    end
    rect rgba(245,235,255,0.45)
        Note over D: New: Expanded GPU diagnostics and selective Docker error handling
    end
```
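For readers who want the diagram's "on start" note spelled out: the container-start introspection is just a handful of standard probes. A minimal bash sketch of that sequence, assembled from the commands named in the note above (an illustration, not the exact pipeline code):

```bash
#!/usr/bin/env bash
# Host/container introspection printed at container start (illustrative).
nproc                 # CPU count visible to the container
free -g               # memory in GiB
hostname              # node/pod identity, useful for log correlation
# GPU diagnostics; the pipeline chains these with && in its scripts.
nvidia-smi            # summary of visible GPUs
nvidia-smi -q         # detailed per-GPU query
nvidia-smi topo -m    # GPU/NIC topology matrix
```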
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 0
🧹 Nitpick comments (4)
jenkins/scripts/slurm_run.sh (1)

25-25: Don't fail the whole run on optional GPU topology query.
`nvidia-smi topo -m` can fail on some stacks; chaining with `&&` aborts the job prematurely. Make each probe non-fatal.

```diff
- nvidia-smi && nvidia-smi -q && nvidia-smi topo -m
+ { nvidia-smi || true; } && { nvidia-smi -q || true; } && { nvidia-smi topo -m || true; }
```
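An equivalent way to express the same best-effort behavior, shown here only as an alternative sketch (the diff above is what the review actually proposes), is to wrap each probe in a small helper:

```bash
#!/usr/bin/env bash
# Best-effort GPU probes: log failures but never abort the job.
run_probe() {
    echo ">>> $*"
    "$@" || echo "probe '$*' failed (ignored)"
}

run_probe nvidia-smi
run_probe nvidia-smi -q
run_probe nvidia-smi topo -m
```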
jenkins/L0_Test.groovy (3)

363-363: Make GPU diagnostics best-effort to avoid aborting stages.
As in the Slurm script, chaining with `&&` makes the whole step fail if any one probe fails. Prefer non-fatal probes.

```diff
-sh "nvidia-smi && nvidia-smi -q && nvidia-smi topo -m"
+sh "{ nvidia-smi || true; } && { nvidia-smi -q || true; } && { nvidia-smi topo -m || true; }"
```

Also applies to: 1047-1047, 1456-1456, 1832-1832
188-191: Ensure destination path exists when renaming %j log.
If `${slurmOutputFile}` includes subdirs that differ once `%j` is replaced, `mv` can fail. Create the destination dir first.

```diff
- def newSlurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
- Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"mv ${slurmOutputFile} ${newSlurmOutputFile} || true\""))
+ def newSlurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
+ def newDirCmd = "mkdir -p \\\"$(dirname ${newSlurmOutputFile})\\\""
+ Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"${newDirCmd} && mv ${slurmOutputFile} ${newSlurmOutputFile} || true\""))
```
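For context on the %j placeholder: Slurm substitutes the job ID into the --output pattern when it creates the file, so any pipeline-side variable that still holds the literal %j has to be resolved against the real job ID before the log can be referenced. A small bash sketch of that resolution step (paths and file names are made up for illustration):

```bash
#!/usr/bin/env bash
# Hypothetical illustration of resolving a %j output pattern to a concrete path.
OUTPUT_PATTERN="/tmp/slurm-logs/%j/output.log"    # made-up pattern

# sbatch prints "Submitted batch job <id>"; keep the numeric ID.
JOB_ID=$(sbatch --output="$OUTPUT_PATTERN" job.sh | awk '{print $4}')

# Replace %j with the actual job ID and make sure the directory exists
# before doing anything (copy, move, tail) with the resolved path.
RESOLVED="${OUTPUT_PATTERN//%j/$JOB_ID}"
mkdir -p "$(dirname "$RESOLVED")"
echo "Slurm job $JOB_ID writes its log to: $RESOLVED"
```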
966-973: Hostname fallback robustness.
Add a final fallback to `hostname` in case `-f` isn't configured.

```diff
- String hostNodeName = sh(script: '''
+ String hostNodeName = sh(script: '''
      if [ -n "$HOST_NODE_NAME" ]; then
          echo "$HOST_NODE_NAME"
      else
-         hostname -f
+         hostname -f || hostname
      fi
  ''', returnStdout: true).trim()
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- jenkins/L0_Test.groovy (14 hunks)
- jenkins/scripts/slurm_run.sh (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Filenames compiled into a target must be case-insensitively unique
Files:
jenkins/scripts/slurm_run.sh
jenkins/L0_Test.groovy
🧠 Learnings (2)
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
PR: NVIDIA/TensorRT-LLM#7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.
Applied to files:
jenkins/scripts/slurm_run.sh
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
jenkins/scripts/slurm_run.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (5)
jenkins/L0_Test.groovy (5)
157-162: Broadened Slurm job ID parsing looks good.
1886-1902: Benign Docker kill errors are correctly suppressed.
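For readers unfamiliar with the pattern: cleanup steps that force-kill containers routinely hit "No such container" or "is not running" errors when the container already exited, and those should not fail the stage. The actual filtering lives in jenkins/L0_Test.groovy (lines 1886-1902); below is only a generic bash sketch of the idea, with a hypothetical container name, not the PR's implementation:

```bash
#!/usr/bin/env bash
# Generic sketch: kill a possibly-absent container without failing the step.
CONTAINER_NAME="trtllm-ci-container"   # hypothetical name

if ! output=$(docker kill "$CONTAINER_NAME" 2>&1); then
    # Treat "already gone" as benign; surface anything else.
    if echo "$output" | grep -qiE "no such container|is not running"; then
        echo "Container ${CONTAINER_NAME} already stopped; ignoring."
    else
        echo "docker kill failed: $output" >&2
        exit 1
    fi
fi
```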
2068-2071: Add mapping for GB200-PyTorch-2 or update references
Only “GB200-PyTorch-1” is defined in jenkins/L0_Test.groovy (lines 2068–2071 and 2075–2085), but recent runs reference “GB200-PyTorch-2”. Add the sibling stage for “-2” with the appropriate config or remove/update any “-2” references.
1919-1919: Signature change ripple-check passed. No remaining calls to `launchTestJobs` with three arguments found.
351-379: Remove unused `dockerGpuOption` or rename to match the used `dockerGPUOption` (no runtime break)
- Assignment and use of `dockerGPUOption` are consistent; the initial `def dockerGpuOption` is never used (learnscripting.org, docs.groovy-lang.org)
- If desired, declare `def dockerGPUOption = ""` upfront to avoid dynamic property creation (learnscripting.org, docs.groovy-lang.org)
- (Optional) Simplify `--gpus` quoting: `echo "--gpus device=$NV_GPU"` (docs.nvidia.com)
- (Optional) Consistently use `--cap-add=SYSLOG` (docs.docker.com)

Likely an incorrect or invalid review comment.
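Since this comment touches on how the `--gpus` value is quoted: a minimal bash sketch of deriving a Docker GPU flag from NV_GPU or CUDA_VISIBLE_DEVICES, falling back to all GPUs. This is only an assumption-laden illustration of the general approach, not the exact logic in jenkins/L0_Test.groovy:

```bash
#!/usr/bin/env bash
# Illustrative derivation of the Docker GPU flag (not the PR's exact code).
# Note: with a comma-separated device list, Docker's CSV parser needs the
# value itself quoted on the command line, e.g.  --gpus '"device=0,1"'.
if [ -n "${NV_GPU:-}" ]; then
    GPU_ARGS="--gpus \"device=${NV_GPU}\""
elif [ -n "${CUDA_VISIBLE_DEVICES:-}" ]; then
    GPU_ARGS="--gpus \"device=${CUDA_VISIBLE_DEVICES}\""
else
    # Nothing requested: expose every GPU on the host.
    GPU_ARGS="--gpus all"
fi
echo "Final docker GPU args: ${GPU_ARGS}"
```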
PR_Github #17262 [ run ] completed with state |
Signed-off-by: Yanchao Lu <[email protected]>
Force-pushed from 69e0679 to ec38957
/bot run --stage-list "A10-PyTorch-1" |
/bot run --stage-list "A10-PyTorch-1, A10-PackageSanityCheck-PY310-UB2204" |
PR_Github #17298 [ run ] triggered by Bot |
PR_Github #17298 [ run ] completed with state |
/bot skip --comment "Partial testing is sufficient" |
PR_Github #17323 [ skip ] triggered by Bot |
PR_Github #17323 [ skip ] completed with state |
…CI codes (NVIDIA#7417) Signed-off-by: Yanchao Lu <[email protected]>
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Tests
Documentation
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
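By way of illustration, two invocations composed only from the flags documented above; the first mirrors a command used earlier in this PR, the second is a hypothetical combination:

```text
/bot run --stage-list "A10-PyTorch-1, A10-PackageSanityCheck-PY310-UB2204"
/bot run --gpu-type "A30, H100_PCIe" --test-backend "pytorch" --disable-fail-fast
```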
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"
is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.