
Conversation

@chzblych (Collaborator) commented Aug 31, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Prevented false failures by hardening scheduler output parsing and tolerating missing files during cleanup.
    • Handled missing job IDs gracefully without aborting the pipeline.
  • Refactor

    • Improved cleanup flow with retries and better error handling, including ignoring a known benign error.
    • Enhanced single- and multi-node teardown timing for more reliable shutdowns.
    • Standardized logging and command formatting.
  • Chores

    • Updated shared library reference.
    • Adopted dynamic log path handling for multi-node jobs to improve log management.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

@chzblych (Collaborator, Author):

/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1"

@tensorrt-cicd (Collaborator):

PR_Github #17087 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #17087 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #12845 (Partly Tested) completed with status: 'FAILURE'

@chzblych chzblych force-pushed the yanchaol-slurm-output branch from 8cc704a to 095226c on August 31, 2025 13:46
@chzblych (Collaborator, Author):

/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1"

@tensorrt-cicd (Collaborator):

PR_Github #17098 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #17098 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #12855 (Partly Tested) completed with status: 'FAILURE'

@chzblych (Collaborator, Author):

/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1"

@tensorrt-cicd (Collaborator):

PR_Github #17102 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #17102 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12858 (Partly Tested) completed with status: 'FAILURE'

@chzblych chzblych force-pushed the yanchaol-slurm-output branch from 095226c to 68030c6 on August 31, 2025 16:00
@chzblych (Collaborator, Author):

/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast

@tensorrt-cicd (Collaborator):

PR_Github #17112 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #17112 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12866 (Partly Tested) completed with status: 'FAILURE'

@chzblych chzblych force-pushed the yanchaol-slurm-output branch from 68030c6 to ed81656 on August 31, 2025 17:00
@chzblych (Collaborator, Author):

/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast

@chzblych chzblych force-pushed the yanchaol-slurm-output branch from ed81656 to b11f258 on August 31, 2025 17:02
@chzblych (Collaborator, Author):

/bot run --stage-list "GB200-4_GPUs-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-1, DGX_B200-8_GPUs-PyTorch-1" --disable-fail-fast

@tensorrt-cicd (Collaborator):

PR_Github #17115 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #17115 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12869 (Partly Tested) completed with status: 'SUCCESS'

@chzblych chzblych marked this pull request as ready for review September 1, 2025 00:38
@chzblych chzblych requested review from a team as code owners September 1, 2025 00:38
@coderabbitai (Contributor, bot) commented Sep 1, 2025

📝 Walkthrough

Updates jenkins/L0_Test.groovy to adjust shared library tag, harden SLURM job ID/log handling, make cleanup more tolerant (rm -rf || true), add node destroy steps for single-node flow, and wrap SLURM cleanup in retries with targeted exception handling. Paths for SLURM logs now come from SlurmConfig.

Changes

All changes are in jenkins/L0_Test.groovy:

  • Shared library reference: Updated the bloom-jenkins-shared-lib import tag from main to dev-yanchaol-slurm-output; trtllm-jenkins-shared-lib remains on main.
  • SLURM job ID and log path handling: Hardened slurmJobID extraction (tail -n1 || true), removed the strict post-submit abort on a missing ID, added conditional handling when the job ID is absent, derived slurmOutputFile dynamically via SlurmConfig.getOutputFilePath("/home/svc_tensorrt/slurm-logs", jobUID), and updated log path substitutions (%j -> ID).
  • Cleanup robustness (remote ops): Appended || true to remote rm -rf commands to ignore missing files; notes added about tee behavior.
  • Single-node node lifecycle: Added CloudManager.destroyNode with delays before and after to ensure shutdown in the single-node cleanup path.
  • Retry and error handling for cleanup: Wrapped the "Clean up SLURM Resources" stage in retry(3) with a try/catch that ignores the known "Failed to kill container" error and rethrows others; similar retry logic applied to multi-node cleanup.
  • Stylistic/logging adjustments: Minor stage name tweaks, consistent quoting of remote commands, added comments/TODOs; logs updated to reflect the new SLURM log pathing.
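
Taken together, these changes amount to roughly the following submission-and-cleanup flow. This is a condensed, illustrative sketch of the behavior described above, not the actual jenkins/L0_Test.groovy code; the ssh form, the awk-based parsing, and variable names such as remote, slurmScript, jobUID, and jobWorkspace are assumptions based on the snippets quoted later in the review.

// Illustrative sketch of the hardened flow; variable names and the ssh form are assumptions.

// Submit the SLURM job and parse the ID tolerantly, so empty scheduler output cannot fail the step.
def slurmJobID = sh(
    returnStdout: true,
    script: "ssh ${remote} 'sbatch ${slurmScript}' | awk '{print \$NF}' | tail -n1 || true"
).trim()

// The log path now comes from SlurmConfig instead of a hard-coded location.
def slurmOutputFile = SlurmConfig.getOutputFilePath("/home/svc_tensorrt/slurm-logs", jobUID)

if (!slurmJobID || !slurmJobID.isNumber()) {
    // A missing job ID is logged but no longer aborts the pipeline.
    echo "Slurm job did not submit successfully. No job ID found."
} else {
    // Substitute the scheduler's %j placeholder with the concrete job ID.
    slurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
}

// Cleanup tolerates paths that were never created or were already removed.
sh "ssh ${remote} 'rm -rf ${jobWorkspace} || true'"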

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant J as Jenkins Pipeline
  participant S as SLURM Controller
  participant N as Worker Node
  participant C as CloudManager

  Note over J: runLLMTestlistOnSlurm (submission)
  J->>S: sbatch (with output %j)
  S-->>J: job submission output
  J->>J: Extract slurmJobID (tail -n1 \|\| true)
  alt job ID present
    J->>J: Derive log path via SlurmConfig.getOutputFilePath(...)
    J->>J: Substitute %j with job ID
  else no job ID
    J->>J: Log absence of job ID and continue
  end

  Note over J: Execution and post-run cleanup
  loop retry up to 3
    J->>N: remote cleanup (rm -rf ... \|\| true)
    J->>S: scancel / scontrol cleanup
    opt single-node flow
      J->>C: destroyNode(node)
      J->>C: wait (delays before/after)
    end
    J-->>J: if “Failed to kill container” -> ignore
  end
sequenceDiagram
  autonumber
  participant J as Jenkins (Multi-node flow)
  participant S as SLURM
  participant Ns as Nodes (multi)
  Note over J,S: Multi-node submission & log path
  J->>S: sbatch (multi-node)
  S-->>J: submission output
  J->>J: Extract job ID (tolerant)
  J->>J: slurmOutputFile = SlurmConfig.getOutputFilePath(..., jobUID)
  J->>J: Update log file path if ID available

  Note over J,Ns: Multi-node cleanup with retries
  loop retry up to 3
    J->>Ns: rm -rf ... \|\| true
    J->>S: scancel/scontrol operations
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • tburt-nv
  • kxdc
  • ruodil

@coderabbitai (Contributor, bot) left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
jenkins/L0_Test.groovy (2)

325-331: Avoid 15h wait when submission failed; add a short “agent-online” probe, then abort.

Silently proceeding without a job ID risks a very long wait loop. Probe briefly for the agent, then fail fast if still offline.

-                if (!slurmJobID || !slurmJobID.isNumber()) {
-                    echo "Slurm job did not submit successfully. No job ID found.\nSubmission output:\n${slurmSubmitOutput}"
-                }
+                if (!slurmJobID || !slurmJobID.isNumber()) {
+                    echo "No valid Slurm job ID parsed. Will probe for agent for 3 minutes before aborting.\nSubmission output:\n${slurmSubmitOutput}"
+                    int probe = 0
+                    while (!CloudManager.isNodeOnline(nodeName) && probe < 3) {
+                        sleep(time: 1, unit: 'MINUTES'); probe++
+                    }
+                    if (!CloudManager.isNodeOnline(nodeName)) {
+                        error "SLURM submission likely failed (no job ID and agent offline). Aborting early."
+                    }
+                }

496-515: Pipeline may report success when srun fails; enforce pipefail and create log dir.

Without set -o pipefail, a failing srun can still yield status 0 if tee succeeds. Also ensure the logs directory exists.

-                // TODO: check if the tee always returns 0
+                // Ensure we capture srun failures even when tee succeeds, and ensure log dir exists.
                 def scriptContent = """#!/bin/bash
+                    set -o pipefail
                     export jobWorkspace=$jobWorkspace
                     export tarName=$tarName
                     export llmTarfile=$llmTarfile
                     export llmSrcNode=$llmSrcNode
                     export stageName=$stageName
                     export testList=$testList
                     export testListPathNode=$testListPathNode
                     export waivesListPathNode=$waivesListPathNode
                     export pytestTestTimeout=$pytestTestTimeout
                     export splits=$splits
                     export splitId=$splitId
                     export perfMode=$perfMode
                     export resourcePathNode=$resourcePathNode
                     export MODEL_CACHE_DIR=$MODEL_CACHE_DIR
                     export NVIDIA_IMEX_CHANNELS=0
                     chmod +x ${scriptRunNode}
-                    ${srunCmd} 2>&1 | tee ${slurmOutputFile}
+                    mkdir -p "$(dirname "${slurmOutputFile}")"
+                    ${srunCmd} 2>&1 | tee "${slurmOutputFile}"
                 """.stripIndent()
🧹 Nitpick comments (5)
jenkins/L0_Test.groovy (5)

1-1: Confirm using a dev-tagged shared lib in CI.

Pinning to bloom-jenkins-shared-lib@dev-yanchaol-slurm-output can change underneath and break CI unexpectedly. Prefer an immutable commit SHA or a release tag, or gate via a parameter for quick rollback.
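
For instance, a pinned reference could look like the following; the exact library-loading form used in L0_Test.groovy and the SHA value are assumptions here.

// Pin the shared library to an immutable commit instead of a moving branch tag.
// The SHA below is a placeholder, not a real commit.
@Library('bloom-jenkins-shared-lib@0123456789abcdef0123456789abcdef01234567') _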


182-189: Only rename slurm log when %j is present; avoid no-op mv noise.

Guard the rename to cases where the path contains %j, and skip otherwise.

-        if (!slurmJobID || !slurmJobID.isNumber()) {
+        if (!slurmJobID || !slurmJobID.isNumber()) {
             Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"cat ${slurmOutputFile} || true\""))
             echo "Slurm job did not submit successfully. No job ID found."
-        } else {
-            def newSlurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
-            Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"mv ${slurmOutputFile} ${newSlurmOutputFile} || true\""))
-        }
+        } else if (slurmOutputFile.contains("%j")) {
+            def newSlurmOutputFile = slurmOutputFile.replace("%j", slurmJobID)
+            Utils.exec(pipeline, script: Utils.sshUserCmd(remote, "\"mv ${slurmOutputFile} ${newSlurmOutputFile} || true\""))
+        } else {
+            echo "No %j placeholder in ${slurmOutputFile}; skip rename."
+        }

204-209: Destroying node before scancel can sever control; scancel first, then destroy.

Current order may kill the agent before SLURM is cancelled. Prefer: scancel/sacct/scontrol → destroy node → sleep.

-        Utils.exec(pipeline, script: "echo Sleeping to allow docker stop; sleep 30")
-
-        CloudManager.destroyNode(nodeName)
-
-        Utils.exec(pipeline, script: "echo Sleeping to allow node destruction; sleep 30")
+        Utils.exec(pipeline, script: "echo Sleeping to allow docker stop; sleep 30")
+        // scancel happens below (Lines 214-220) first, then:
+        // CloudManager.destroyNode(nodeName) after scancel finishes.

Outside this hunk, move:

CloudManager.destroyNode(nodeName)
Utils.exec(pipeline, script: "echo Sleeping to allow node destruction; sleep 30")

to immediately after the scancel/sacct/scontrol block.


373-379: Narrow the catch and match more defensively.

Catching all Exceptions risks hiding unrelated issues. Consider catching a narrower type (e.g., AbortException/IOException) and matching the message case-insensitively.

-    } catch (Exception e) {
-        if (e.getMessage()?.contains("Failed to kill container")) {
+    } catch (hudson.AbortException | java.io.IOException e) {
+        if (e.getMessage()?.toLowerCase()?.contains("failed to kill container")) {
             echo "Known benign error ignored: ${e.getMessage()}"
         } else {
             throw e // Re-throw if it's a different IOException
         }

166-172: Call scancel only when the job ID is valid.

Currently scancel ${slurmJobID} runs even when the ID is empty/invalid (tolerated by || true but noisy). Check validity first.

-        Utils.exec(
-            pipeline,
-            script: Utils.sshUserCmd(
-                remote,
-                "\"scancel ${slurmJobID} || true; sacct -j ${slurmJobID} --format=JobID,JobName%100,Partition%15,Account%15,State,ExitCode,NodeList%30 || true; scontrol show job ${slurmJobID} || true\""
-            )
-        )
+        if (slurmJobID && slurmJobID.isNumber()) {
+            Utils.exec(
+                pipeline,
+                script: Utils.sshUserCmd(
+                    remote,
+                    "\"scancel ${slurmJobID} || true; sacct -j ${slurmJobID} --format=JobID,JobName%100,Partition%15,Account%15,State,ExitCode,NodeList%30 || true; scontrol show job ${slurmJobID} || true\""
+                )
+            )
+        } else {
+            echo "Skip scancel/sacct/scontrol due to missing/invalid job ID."
+        }

Also applies to: 182-189

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ec595a8 and b11f258.

📒 Files selected for processing (1)
  • jenkins/L0_Test.groovy (10 hunks)
🔇 Additional comments (5)
jenkins/L0_Test.groovy (5)

149-161: Safer parsing is fine; tolerate empty output.

Adding tail -n1 || true avoids cleanup hard-failing when no ID is present. LGTM.


178-180: Cleanup robustness

rm -rf ... || true is appropriate here to avoid failing on missing paths.


226-226: Cleanup robustness

Good addition to ignore missing agent artifacts on the remote.


380-389: Cleanup retries look good.

Wrapping cleanup in retry with explicit error signaling is appropriate.


538-547: Multi-node cleanup retries look good.

Mirroring the single-node retry pattern is appropriate.
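
For reference, the retry-and-filter pattern being approved in these two comments looks roughly like the sketch below; the stage body and the helper name cleanUpSlurmResources are placeholders, not the actual pipeline code.

// Sketch of the cleanup retry pattern; the cleanup helper is a placeholder.
stage("Clean up SLURM Resources") {
    retry(3) {
        try {
            cleanUpSlurmResources(remote, slurmJobID)   // placeholder for the scancel/sacct/scontrol and rm -rf steps
            CloudManager.destroyNode(nodeName)
        } catch (Exception e) {
            if (e.getMessage()?.contains("Failed to kill container")) {
                // Known benign error: swallow it so it neither fails the stage nor triggers a retry.
                echo "Known benign error ignored: ${e.getMessage()}"
            } else {
                throw e   // Any other failure still aborts this attempt and is retried up to 3 times.
            }
        }
    }
}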

@chzblych chzblych force-pushed the yanchaol-slurm-output branch from b11f258 to b998c75 on September 1, 2025 02:35
@chzblych (Collaborator, Author) commented Sep 1, 2025

/bot skip --comment "Partial testing is sufficient"

@tensorrt-cicd (Collaborator):

PR_Github #17138 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #17138 [ skip ] completed with state SUCCESS
Skipping testing for commit b998c75

@chzblych chzblych merged commit c5148f5 into NVIDIA:main Sep 1, 2025
5 checks passed
@chzblych chzblych deleted the yanchaol-slurm-output branch September 1, 2025 02:57
chzblych added a commit to chzblych/TensorRT-LLM that referenced this pull request Sep 2, 2025