[None][ci] Some improvements for Slurm CI setup #7407
Conversation
/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1"
PR_Github #17087 [ run ] triggered by Bot
PR_Github #17087 [ run ] completed with state
Force-pushed from 8cc704a to 095226c
/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1"
PR_Github #17098 [ run ] triggered by Bot
PR_Github #17098 [ run ] completed with state
/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1"
PR_Github #17102 [ run ] triggered by Bot
PR_Github #17102 [ run ] completed with state
Force-pushed from 095226c to 68030c6
/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast
PR_Github #17112 [ run ] triggered by Bot
PR_Github #17112 [ run ] completed with state
Force-pushed from 68030c6 to ed81656
/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast
Force-pushed from ed81656 to b11f258
/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast
PR_Github #17115 [ run ] triggered by Bot
PR_Github #17115 [ run ] completed with state
📝 Walkthrough

Updates jenkins/L0_Test.groovy to adjust the shared library tag, harden SLURM job ID/log handling, make cleanup more tolerant (rm -rf || true), add node destroy steps for the single-node flow, and wrap SLURM cleanup in retries with targeted exception handling. Paths for SLURM logs now come from SlurmConfig.
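As a rough illustration of the job-ID/log handling described above (not the actual L0_Test.groovy code), a scripted-pipeline sketch might look like the following; the submission output and log path are placeholders, and the real path comes from SlurmConfig:

```groovy
// Sketch only: tolerant SLURM job-ID extraction plus %j substitution in the log path.
def slurmSubmitOutput = "Submitted batch job 123456"   // placeholder sbatch output
def slurmOutputFile   = "/logs/slurm-%j.out"           // placeholder path (real one comes from SlurmConfig)

// Take the last numeric token from the submission output; an empty result is tolerated.
def ids = slurmSubmitOutput.findAll(/\d+/)
def slurmJobID = ids ? ids.last() : ""

if (!slurmJobID || !slurmJobID.isNumber()) {
    echo "Slurm job did not submit successfully. No job ID found.\n${slurmSubmitOutput}"
} else if (slurmOutputFile.contains("%j")) {
    slurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
    echo "Resolved Slurm log path: ${slurmOutputFile}"
}
```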
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant J as Jenkins Pipeline
    participant S as SLURM Controller
    participant N as Worker Node
    participant C as CloudManager
    Note over J: runLLMTestlistOnSlurm (submission)
    J->>S: sbatch (with output %j)
    S-->>J: job submission output
    J->>J: Extract slurmJobID (tail -n1 || true)
    alt job ID present
        J->>J: Derive log path via SlurmConfig.getOutputFilePath(...)
        J->>J: Substitute %j with job ID
    else no job ID
        J->>J: Log absence of job ID and continue
    end
    Note over J: Execution and post-run cleanup
    loop retry up to 3
        J->>N: remote cleanup (rm -rf ... || true)
        J->>S: scancel / scontrol cleanup
        opt single-node flow
            J->>C: destroyNode(node)
            J->>C: wait (delays before/after)
        end
        J-->>J: if "Failed to kill container" -> ignore
    end
```
```mermaid
sequenceDiagram
    autonumber
    participant J as Jenkins (Multi-node flow)
    participant S as SLURM
    participant Ns as Nodes (multi)
    Note over J,S: Multi-node submission & log path
    J->>S: sbatch (multi-node)
    S-->>J: submission output
    J->>J: Extract job ID (tolerant)
    J->>J: slurmOutputFile = SlurmConfig.getOutputFilePath(..., jobUID)
    J->>J: Update log file path if ID available
    Note over J,Ns: Multi-node cleanup with retries
    loop retry up to 3
        J->>Ns: rm -rf ... || true
        J->>S: scancel/scontrol operations
    end
```
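The retry-with-tolerance pattern shown in the cleanup loops above could be sketched as follows; the host, workspace, and job-ID values are placeholders rather than the pipeline's real variables:

```groovy
// Sketch only: cleanup wrapped in retries, ignoring a known benign container-kill error.
def remoteHost   = "slurm-login"   // placeholder
def jobWorkspace = "/tmp/job-ws"   // placeholder
def slurmJobID   = "123456"        // placeholder

retry(3) {
    try {
        // Tolerant remote cleanup: missing paths and already-finished jobs must not fail the stage.
        sh "ssh ${remoteHost} 'rm -rf ${jobWorkspace} || true'"
        sh "ssh ${remoteHost} 'scancel ${slurmJobID} || true; scontrol show job ${slurmJobID} || true'"
    } catch (Exception e) {
        if (e.getMessage()?.contains("Failed to kill container")) {
            echo "Known benign error ignored: ${e.getMessage()}"
        } else {
            throw e   // let retry() handle anything unexpected
        }
    }
}
```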
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
jenkins/L0_Test.groovy (2)
325-331: Avoid 15h wait when submission failed; add a short "agent-online" probe, then abort.

Silently proceeding without a job ID risks a very long wait loop. Probe briefly for the agent, then fail fast if still offline.

```diff
-    if (!slurmJobID || !slurmJobID.isNumber()) {
-        echo "Slurm job did not submit successfully. No job ID found.\nSubmission output:\n${slurmSubmitOutput}"
-    }
+    if (!slurmJobID || !slurmJobID.isNumber()) {
+        echo "No valid Slurm job ID parsed. Will probe for agent for 3 minutes before aborting.\nSubmission output:\n${slurmSubmitOutput}"
+        int probe = 0
+        while (!CloudManager.isNodeOnline(nodeName) && probe < 3) {
+            sleep(time: 1, unit: 'MINUTES'); probe++
+        }
+        if (!CloudManager.isNodeOnline(nodeName)) {
+            error "SLURM submission likely failed (no job ID and agent offline). Aborting early."
+        }
+    }
```
496-515: Pipeline may report success when srun fails; enforce pipefail and create log dir.

Without `set -o pipefail`, a failing `srun` can still yield status 0 if `tee` succeeds. Also ensure the logs directory exists.

```diff
-    // TODO: check if the tee always returns 0
+    // Ensure we capture srun failures even when tee succeeds, and ensure log dir exists.
     def scriptContent = """#!/bin/bash
+        set -o pipefail
         export jobWorkspace=$jobWorkspace
         export tarName=$tarName
         export llmTarfile=$llmTarfile
         export llmSrcNode=$llmSrcNode
         export stageName=$stageName
         export testList=$testList
         export testListPathNode=$testListPathNode
         export waivesListPathNode=$waivesListPathNode
         export pytestTestTimeout=$pytestTestTimeout
         export splits=$splits
         export splitId=$splitId
         export perfMode=$perfMode
         export resourcePathNode=$resourcePathNode
         export MODEL_CACHE_DIR=$MODEL_CACHE_DIR
         export NVIDIA_IMEX_CHANNELS=0
         chmod +x ${scriptRunNode}
-        ${srunCmd} 2>&1 | tee ${slurmOutputFile}
+        mkdir -p "$(dirname "${slurmOutputFile}")"
+        ${srunCmd} 2>&1 | tee "${slurmOutputFile}"
     """.stripIndent()
```
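To see why pipefail matters here, a small sketch (with `false` standing in for a failing `srun`) shows the difference in the status the pipeline observes; `/tmp/demo.log` is just a placeholder path:

```groovy
// Sketch only: with /bin/bash, `set -o pipefail` makes the pipe report the failing
// command's status instead of tee's success.
def withoutPipefail = sh(returnStatus: true, script: '''#!/bin/bash
false | tee /tmp/demo.log''')        // tee succeeds, so status is 0
def withPipefail = sh(returnStatus: true, script: '''#!/bin/bash
set -o pipefail
false | tee /tmp/demo.log''')        // status is 1, taken from the failing command
echo "without pipefail: ${withoutPipefail}, with pipefail: ${withPipefail}"
```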
🧹 Nitpick comments (5)
jenkins/L0_Test.groovy (5)
1-1: Confirm using a dev-tagged shared lib in CI.

Pinning to bloom-jenkins-shared-lib@dev-yanchaol-slurm-output can change underneath and break CI unexpectedly. Prefer an immutable commit SHA or a release tag, or gate via a parameter for quick rollback.
182-189: Only rename slurm log when %j is present; avoid no-op mv noise.

Guard the rename to cases where the path contains `%j`, and skip otherwise.

```diff
-    if (!slurmJobID || !slurmJobID.isNumber()) {
+    if (!slurmJobID || !slurmJobID.isNumber()) {
         Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"cat ${slurmOutputFile} || true\""))
         echo "Slurm job did not submit successfully. No job ID found."
-    } else {
-        def newSlurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
-        Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"mv ${slurmOutputFile} ${newSlurmOutputFile} || true\""))
-    }
+    } else if (slurmOutputFile.contains("%j")) {
+        def newSlurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
+        Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"mv ${slurmOutputFile} ${newSlurmOutputFile} || true\""))
+    } else {
+        echo "No %j placeholder in ${slurmOutputFile}; skip rename."
+    }
```
204-209: Destroying node before scancel can sever control; scancel first, then destroy.

Current order may kill the agent before SLURM is cancelled. Prefer: scancel/sacct/scontrol → destroy node → sleep.

```diff
-    Utils.exec(pipeline, script: "echo Sleeping to allow docker stop; sleep 30")
-
-    CloudManager.destroyNode(nodeName)
-
-    Utils.exec(pipeline, script: "echo Sleeping to allow node destruction; sleep 30")
+    Utils.exec(pipeline, script: "echo Sleeping to allow docker stop; sleep 30")
+    // scancel happens below (Lines 214-220) first, then:
+    // CloudManager.destroyNode(nodeName) after scancel finishes.
```

Outside this hunk, move:

```groovy
CloudManager.destroyNode(nodeName)
Utils.exec(pipeline, script: "echo Sleeping to allow node destruction; sleep 30")
```

to immediately after the scancel/sacct/scontrol block.
373-379: Narrow the catch and match more defensively.

Catching all Exceptions risks hiding unrelated issues. Consider catching a narrower type (e.g., AbortException/IOException) and matching the message case-insensitively.

```diff
-    } catch (Exception e) {
-        if (e.getMessage()?.contains("Failed to kill container")) {
+    } catch (hudson.AbortException | java.io.IOException e) {
+        if (e.getMessage()?.toLowerCase()?.contains("failed to kill container")) {
             echo "Known benign error ignored: ${e.getMessage()}"
         } else {
             throw e // Re-throw if it's a different IOException
         }
```
166-172: Call scancel only when the job ID is valid.

Currently `scancel ${slurmJobID}` runs even when the ID is empty/invalid (tolerated by `|| true`, but noisy). Check validity first.

```diff
-    Utils.exec(
-        pipeline,
-        script: Utils.sshUserCmd(
-            remote,
-            "\"scancel ${slurmJobID} || true; sacct -j ${slurmJobID} --format=JobID,JobName%100,Partition%15,Account%15,State,ExitCode,NodeList%30 || true; scontrol show job ${slurmJobID} || true\""
-        )
-    )
+    if (slurmJobID && slurmJobID.isNumber()) {
+        Utils.exec(
+            pipeline,
+            script: Utils.sshUserCmd(
+                remote,
+                "\"scancel ${slurmJobID} || true; sacct -j ${slurmJobID} --format=JobID,JobName%100,Partition%15,Account%15,State,ExitCode,NodeList%30 || true; scontrol show job ${slurmJobID} || true\""
+            )
+        )
+    } else {
+        echo "Skip scancel/sacct/scontrol due to missing/invalid job ID."
+    }
```

Also applies to: 182-189
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
jenkins/L0_Test.groovy (10 hunks)
🔇 Additional comments (5)
jenkins/L0_Test.groovy (5)
149-161: Safer parsing is fine; tolerate empty output.

Adding `tail -n1 || true` avoids cleanup hard-failing when no ID is present. LGTM.

178-180: Cleanup robustness

`rm -rf ... || true` is appropriate here to avoid failing on missing paths.

226-226: Cleanup robustness

Good addition to ignore missing agent artifacts on the remote.

380-389: Cleanup retries look good.

Wrapping cleanup in retry with explicit error signaling is appropriate.

538-547: Multi-node cleanup retries look good.

Mirroring the single-node retry pattern is appropriate.
Signed-off-by: Yanchao Lu <[email protected]>
Force-pushed from b11f258 to b998c75
/bot skip --comment "Partial testing is sufficient"
PR_Github #17138 [ skip ] triggered by Bot
PR_Github #17138 [ skip ] completed with state
Signed-off-by: Yanchao Lu <[email protected]>
Summary by CodeRabbit
Bug Fixes
Refactor
Chores
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id
(OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test
(OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast
(OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test
(OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx"
(OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe"
(OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp"
(OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test
(OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test
(OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test
(OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge
(OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"
(OPTIONAL) : Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log
(OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug
(OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill
kill
Kill all running builds associated with pull request.
skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
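For example, combining a few of the flags above (stage names taken from the conversation earlier in this PR), a comment such as /bot run --stage-list "GB200-4_GPUs-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast launches only those stages without failing fast.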