Conversation

Collaborator

@LinPoly LinPoly commented Jul 17, 2025

Revert "[nvbug 5304752][fix] enhance _check_arguments to filter illegal requests for pytorch backend (#5541)"

This reverts commit 388b491.

Summary by CodeRabbit

  • Bug Fixes

    • Simplified error handling for prompt length exceeding the maximum allowed tokens, ensuring consistent error messages across backends.
  • Tests

    • Unified and streamlined tests for prompt length validation, removing backend-specific test cases and redundant code.
    • Removed obsolete test functions related to backend-specific behavior.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
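
For example, to relaunch the pipeline with fail-fast disabled and restrict it to a single test stage (the stage name below is one used later in this thread; substitute your own):

/bot run --disable-fail-fast --stage-list "H100_PCIe-PyTorch-3"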

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since careless use without proper validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since careless use without proper validation can break the top of tree.

@LinPoly LinPoly requested a review from kaiyux July 17, 2025 12:02
@LinPoly LinPoly self-assigned this Jul 17, 2025
@LinPoly LinPoly requested a review from a team as a code owner July 17, 2025 12:02
Contributor

coderabbitai bot commented Jul 17, 2025

Walkthrough

The changes remove PyTorch-backend-specific validation logic from the main LLM implementation and simplify related test code. Test helpers and test cases that handled PyTorch-specific branches are refactored or deleted, unifying error handling and reducing backend-specific code paths. Error messages are also streamlined for clarity.
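
To make the unification concrete, the post-revert check has roughly this shape (a minimal sketch, not the actual tensorrt_llm/llmapi/llm.py code; the class and function names, signature, and message wording are illustrative):

    # Hedged sketch of a single, backend-agnostic prompt-length check.
    class RequestError(RuntimeError):
        """Raised when a request is rejected before execution."""

    def check_prompt_length(prompt_token_ids: list, max_seq_len: int,
                            max_tokens: int = 0) -> None:
        # One code path for every backend: no PyTorch-specific branch.
        prompt_len = len(prompt_token_ids)
        if prompt_len + max_tokens > max_seq_len:
            raise RequestError(
                f"The sum of prompt length ({prompt_len}) and max_tokens "
                f"({max_tokens}) should not exceed max_seq_len ({max_seq_len}).")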

Changes

  • tensorrt_llm/llmapi/llm.py: Removed PyTorch-specific argument validation in _check_arguments; simplified error message.
  • tests/unittest/llmapi/test_llm.py: Simplified _test_llm_capture_request_error by removing PyTorch logic; updated test function.
  • tests/unittest/llmapi/test_llm_multi_gpu.py: Removed pytorch_backend argument from helper function call in test.
  • tests/unittest/llmapi/test_llm_multi_gpu_pytorch.py, tests/unittest/llmapi/test_llm_pytorch.py: Removed import and PyTorch-specific test function for error handling.

Poem

In the warren of code, a spring cleaning day,
PyTorch paths swept quietly away.
Tests now hop with lighter feet,
No backend forks, just code that's neat.
Error checks are simple, true—
A streamlined meadow, fresh and new!
🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9323de6 and 586fec2.

📒 Files selected for processing (5)
  • tensorrt_llm/llmapi/llm.py (1 hunks)
  • tests/unittest/llmapi/test_llm.py (1 hunks)
  • tests/unittest/llmapi/test_llm_multi_gpu.py (1 hunks)
  • tests/unittest/llmapi/test_llm_multi_gpu_pytorch.py (0 hunks)
  • tests/unittest/llmapi/test_llm_pytorch.py (1 hunks)
💤 Files with no reviewable changes (1)
  • tests/unittest/llmapi/test_llm_multi_gpu_pytorch.py
🔇 Additional comments (5)
tests/unittest/llmapi/test_llm_pytorch.py (1)

7-12: LGTM: Import cleanup aligns with PyTorch backend simplification.

The removal of _test_llm_capture_request_error import is consistent with the broader effort to unify error handling across backends and remove PyTorch-specific validation logic.

tests/unittest/llmapi/test_llm_multi_gpu.py (1)

454-454: LGTM: Function call updated to match simplified helper signature.

The removal of the pytorch_backend=False argument correctly reflects the simplification of _test_llm_capture_request_error to use unified error handling across backends.

tensorrt_llm/llmapi/llm.py (1)

548-548: LGTM: Error message simplified to use direct attribute.

The change from using a JSON-loaded max_seq_len value to build_config.max_seq_len directly simplifies the code while maintaining the same functionality. This aligns with the broader effort to remove backend-specific validation logic.
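
The direction of that simplification can be pictured as follows (an illustrative sketch only, not the literal diff; the JSON key path in the "before" variant is an assumption):

    import json

    def max_seq_len_before(engine_config_json: str) -> int:
        # Before (sketch): the limit was recovered by re-parsing serialized
        # engine config; this key path is assumed for illustration.
        return json.loads(engine_config_json)["build_config"]["max_seq_len"]

    def max_seq_len_after(build_config) -> int:
        # After (sketch): the limit is read directly off the in-memory
        # build config object.
        return build_config.max_seq_len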

tests/unittest/llmapi/test_llm.py (2)

2063-2077: LGTM! Simplified test helper aligns with revert objective.

The removal of PyTorch backend-specific conditional logic from _test_llm_capture_request_error simplifies the test while maintaining the core functionality. The test correctly uses a 65-token prompt against a 64-token limit to trigger the expected RequestError.
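
For reference, the unified helper follows roughly this pattern (a hedged sketch, not the literal tests/unittest/llmapi/test_llm.py code; the RequestError import path, the test name, and the llm fixture are assumptions):

    import pytest
    from tensorrt_llm.executor import RequestError  # assumed import path

    def test_prompt_exceeds_max_seq_len_sketch(llm):
        # `llm` is assumed to be constructed with a 64-token max_seq_len.
        prompt_token_ids = [0] * 65  # 65 tokens, one over the 64-token limit
        # Every backend should now surface the same RequestError.
        with pytest.raises(RequestError):
            llm.generate(prompt_token_ids)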


2080-2080: LGTM! Function call updated consistently with helper simplification.

The removal of the pytorch_backend argument from the function call is consistent with the simplified helper function implementation.


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Collaborator Author

LinPoly commented Jul 17, 2025

/bot run

@LinPoly LinPoly changed the title from Revert "[nvbug 5304752][fix] enhance _check_arguments to filter illegal requests for pytorch backend (#5541)" to [fix]: Revert commit 388b491 Jul 17, 2025
@kaiyux kaiyux requested a review from litaotju July 17, 2025 12:04
@tensorrt-cicd
Collaborator

PR_Github #12202 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12202 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #254 completed with status: 'FAILURE'

@LinPoly LinPoly force-pushed the user/pengyunl/revert_check branch from 45b8d3c to f5719de Compare July 17, 2025 13:55
Collaborator Author

LinPoly commented Jul 17, 2025

/bot run

@kaiyux kaiyux enabled auto-merge (squash) July 17, 2025 13:59
@tensorrt-cicd
Collaborator

PR_Github #12208 [ run ] triggered by Bot

@chzblych
Collaborator

@coderabbitai summary

Contributor

coderabbitai bot commented Jul 17, 2025

✅ Actions performed

Summary regeneration triggered.

@tensorrt-cicd
Collaborator

PR_Github #12208 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #255 completed with status: 'FAILURE'

Member

kaiyux commented Jul 18, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #12257 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12257 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #257 completed with status: 'FAILURE'

Revert "[nvbug 5304752][fix] enhance _check_arguments to filter illegal requests for pytorch backend (NVIDIA#5541)"

This reverts commit 388b491.

Signed-off-by: Pengyun Lin <[email protected]>
@LinPoly LinPoly force-pushed the user/pengyunl/revert_check branch from f5719de to 586fec2 Compare July 18, 2025 06:24
Collaborator Author

LinPoly commented Jul 18, 2025

/bot run --stage-list "H100_PCIe-PyTorch-3"

@tensorrt-cicd
Collaborator

PR_Github #12286 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12286 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #258 (Partly Tested) completed with status: 'SUCCESS'

Member

kaiyux commented Jul 18, 2025

/bot skip --comment "pipeline passed after rerun"

@tensorrt-cicd
Collaborator

PR_Github #12306 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12306 [ skip ] completed with state SUCCESS
Skipping testing for commit 586fec2

@kaiyux kaiyux merged commit ab4e178 into NVIDIA:release/0.21 Jul 18, 2025
3 checks passed
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Jul 21, 2025
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Jul 21, 2025
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Jul 21, 2025
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Jul 22, 2025
dc3671 pushed a commit that referenced this pull request Jul 22, 2025
LinPoly added a commit to LinPoly/TensorRT-LLM that referenced this pull request Jul 23, 2025
This reverts commit 48ddc3d.

Signed-off-by: Pengyun Lin <[email protected]>
LinPoly added a commit to LinPoly/TensorRT-LLM that referenced this pull request Jul 24, 2025
This reverts commit 48ddc3d.

Signed-off-by: Pengyun Lin <[email protected]>
NVShreyas pushed a commit to NVShreyas/TensorRT-LLM that referenced this pull request Jul 28, 2025
Signed-off-by: Pengyun Lin <[email protected]>
Signed-off-by: Shreyas Misra <[email protected]>
Ransiki pushed a commit to Ransiki/TensorRT-LLM that referenced this pull request Jul 29, 2025
Signed-off-by: Pengyun Lin <[email protected]>
Signed-off-by: Ransiki Zhang <[email protected]>