
Conversation


@Barry-Delaney Barry-Delaney commented Aug 4, 2025

Summary by CodeRabbit

  • New Features

    • Improved handling of small batch sizes on specific GPUs for enhanced performance.
    • Added a masked transpose operation for more flexible tensor processing.
  • Bug Fixes

    • Refined device assignment and tensor initialization for better compatibility and efficiency.
  • Refactor

    • Updated weight scale handling and transformation during model weight loading.
    • Enhanced quantization and transformation functions with new parameters for improved control.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
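As a concrete example, the following comment would run only the PyTorch test stages on H100 GPUs with fail-fast disabled (all flags are taken from the documentation above):

/bot run --disable-fail-fast --gpu-type "H100_PCIe" --test-backend "pytorch"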

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

@Barry-Delaney Barry-Delaney self-assigned this Aug 4, 2025
@Barry-Delaney Barry-Delaney requested a review from a team as a code owner August 4, 2025 03:46
@Barry-Delaney Barry-Delaney requested review from hlu1 and dongxuy04 August 4, 2025 03:46

coderabbitai bot commented Aug 4, 2025

📝 Walkthrough

Walkthrough

The changes update the logic for FP8 GEMM operations in the linear module to handle small batch sizes differently on specific hardware (SM version 100). The quantization utility functions are enhanced to support operand swapping, masked transposition, and improved device handling. A new masked transpose Triton kernel and related function are introduced for efficient tensor manipulation. Weight loading logic in a DeepseekV3 model is modified to adjust how FP8 scale parameters are assigned and transformed, removing a conditional transformation block.

Changes

Cohort / File(s) — Change Summary

  • FP8 GEMM Control Flow (Linear Module) — tensorrt_llm/_torch/modules/linear.py:
    Modifies FP8BlockScalesLinearMethod to add special handling for SM 100 GPUs: changes the weight scale shape and dtype, conditionally swaps GEMM operands and output shape for batch sizes under 32, and applies a scale layout transformation during weight loading.
  • FP8 Quantization Utilities — tensorrt_llm/quantization/utils/fp8_utils.py:
    Updates per_token_quant_and_transform to add a swap_ab parameter controlling padding and output shape, replaces zero initialization with uninitialized tensors, and changes device assignment to the input device. Adds a new Triton kernel _transpose_kernel and a masked_transpose helper for masked 2D tensor transposition.
  • DeepseekV3 Model Weight Loading — tensorrt_llm/_torch/models/modeling_deepseekv3.py:
    Adjusts weight loading in DeepseekV3ForCausalLM.load_weights to replace in-place copying of scale tensors with parameter replacement, applies a scale layout transformation on fused scales, and removes a conditional transformation block for SM 100 GPUs.
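To make the control flow concrete, here is a minimal sketch of the small-batch dispatch summarized above and in the sequence diagram that follows. Apart from per_token_quant_and_transform, fp8_gemm_nt, masked_transpose, and swap_ab, all names and signatures are simplified assumptions, not the actual implementation:

import torch
from typing import Callable

SMALL_BATCH_THRESHOLD = 32  # threshold discussed in the review comments

def fp8_block_gemm(input: torch.Tensor, weight: torch.Tensor,
                   sm_version: int,
                   quant: Callable,      # stands in for per_token_quant_and_transform
                   gemm: Callable,       # stands in for deep_gemm.fp8_gemm_nt
                   transpose: Callable   # stands in for masked_transpose
                   ) -> torch.Tensor:
    m = input.shape[0]
    if sm_version == 100 and m < SMALL_BATCH_THRESHOLD:
        # Swap A and B: quantize with swap_ab=True (which pads m), compute
        # weight @ input, then undo the swap with a masked transpose that
        # keeps only the first m valid entries of the padded result.
        q_input, q_scale = quant(input, swap_ab=True)
        padded_output = gemm(weight, q_input, q_scale)
        return transpose(padded_output, m)
    q_input, q_scale = quant(input)
    return gemm(q_input, weight, q_scale)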

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant LinearMethod
    participant fp8_utils
    participant deep_gemm

    Caller->>LinearMethod: apply(input, weight, ...)
    alt get_sm_version() == 100
        alt batch size < 32
            LinearMethod->>fp8_utils: per_token_quant_and_transform(input, swap_ab=True)
            LinearMethod->>deep_gemm: fp8_gemm_nt(weight, input, ...)
            deep_gemm-->>LinearMethod: padded_output
            LinearMethod->>fp8_utils: masked_transpose(padded_output, batch_size)
            fp8_utils-->>LinearMethod: output
        else batch size >= 32
            LinearMethod->>fp8_utils: per_token_quant_and_transform(input)
            LinearMethod->>deep_gemm: fp8_gemm_nt(input, weight, ...)
            deep_gemm-->>LinearMethod: output
        end
    else other SM version
        LinearMethod->>fp8_utils: per_token_quant_and_transform(input)
        LinearMethod->>deep_gemm: fp8_gemm_nt(input, weight, ...)
        deep_gemm-->>LinearMethod: output
    end
    LinearMethod-->>Caller: output

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • feat: add support for Modelopt fp8_pb_wo quantization scheme #6106: Adds support for the Modelopt fp8_pb_wo quantization scheme by mapping it to FP8_BLOCK_SCALES and squeezes weight tensors before copying; related to FP8 block scales handling in the same linear module but focuses on quantization scheme aliasing and tensor shape adjustment.

Suggested reviewers

  • chzblych



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 67be531 and c5bfd7e.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (2 hunks)
  • tensorrt_llm/_torch/modules/linear.py (4 hunks)
  • tensorrt_llm/quantization/utils/fp8_utils.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/quantization/utils/fp8_utils.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@Barry-Delaney Barry-Delaney changed the title [TRTLLM-6334] Add runtime swap AB for SM100 FP8 blockwise GEMM [TRTLLM-6334] [feat] Add runtime swap AB for SM100 FP8 blockwise GEMM Aug 4, 2025
@Barry-Delaney Barry-Delaney changed the title [TRTLLM-6334] [feat] Add runtime swap AB for SM100 FP8 blockwise GEMM [TRTLLM-6334][feat] Add new featureAdd runtime swap AB for SM100 FP8 blockwise GEMM Aug 4, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
tensorrt_llm/quantization/utils/fp8_utils.py (1)

457-461: Document the new swap_ab parameter.

The swap_ab parameter lacks documentation in the function docstring. Please add a description explaining its purpose and effect on the output dimensions.

Update the docstring to include:

"""
input shape [g, m, k]
output shape [g, m, k // 2], dtype fp8
output_scale [g, k // 4, m // 2 // 128], dtype int32
quant_group_size int
masked_m shape [g]
swap_ab: bool, if True, pads m dimension to multiple of 8 for efficient transpose operations
"""
🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/linear.py (1)

577-591: Verify the batch size threshold and document the optimization.

The implementation correctly handles small batch optimization by swapping operands and using masked transpose. However:

  1. The batch size threshold of 32 should be documented or made configurable
  2. Consider adding a comment explaining why this optimization is beneficial for small batches

Consider making the threshold configurable:

SMALL_BATCH_THRESHOLD = 32  # Empirically determined for SM100
if input.shape[0] < SMALL_BATCH_THRESHOLD:
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7bf0a48 and 5eac9b4.

📒 Files selected for processing (2)
  • tensorrt_llm/_torch/modules/linear.py (1 hunks)
  • tensorrt_llm/quantization/utils/fp8_utils.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile = ...).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL = ...).
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/quantization/utils/fp8_utils.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/quantization/utils/fp8_utils.py
🔇 Additional comments (4)
tensorrt_llm/quantization/utils/fp8_utils.py (3)

339-341: Good improvements for performance and device handling.

The changes improve efficiency by avoiding unnecessary zero-initialization and ensure correct device placement by using the input tensor's device.
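For reference, the pattern being praised looks roughly like the following; this is a generic illustration, not the exact lines from fp8_utils.py:

import torch

def alloc_output(x: torch.Tensor, shape: tuple) -> torch.Tensor:
    # Allocate on the input's device instead of a hard-coded device, and
    # use torch.empty to skip the zero-fill kernel, which is safe because
    # every element is overwritten before it is read.
    return torch.empty(shape, dtype=x.dtype, device=x.device)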


481-495: Correct implementation of conditional padding logic.

The implementation properly handles the swap_ab flag to pad the m dimension when needed, and consistently uses m_padded throughout the function. The device handling improvements are also good.


537-555: Well-implemented transpose kernel.

The Triton kernel correctly implements a masked transpose operation with proper boundary handling and flexible stride support.
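For readers unfamiliar with the pattern, a masked, stride-aware 2D transpose in Triton typically looks like the sketch below. This is a generic illustration of the technique, not the PR's kernel; the names, tile size, and which dimension carries the padding are assumptions:

import torch
import triton
import triton.language as tl

@triton.jit
def _masked_transpose_kernel(src_ptr, dst_ptr, n_rows, n_cols,
                             stride_sr, stride_sc, stride_dr, stride_dc,
                             BLOCK: tl.constexpr):
    # Copy src[r, c] -> dst[c, r] for r < n_rows and c < n_cols, where
    # n_cols is the unpadded (valid) extent of the source's last dim.
    pid_r = tl.program_id(0)
    pid_c = tl.program_id(1)
    r = pid_r * BLOCK + tl.arange(0, BLOCK)
    c = pid_c * BLOCK + tl.arange(0, BLOCK)
    mask = (r[:, None] < n_rows) & (c[None, :] < n_cols)
    vals = tl.load(src_ptr + r[:, None] * stride_sr + c[None, :] * stride_sc,
                   mask=mask, other=0)
    tl.store(dst_ptr + c[None, :] * stride_dr + r[:, None] * stride_dc,
             vals, mask=mask)

def masked_transpose_sketch(x: torch.Tensor, valid_cols: int,
                            block: int = 32) -> torch.Tensor:
    """Transpose x[:, :valid_cols] of a padded 2D tensor (hypothetical helper)."""
    n_rows = x.shape[0]
    out = torch.empty((valid_cols, n_rows), dtype=x.dtype, device=x.device)
    grid = (triton.cdiv(n_rows, block), triton.cdiv(valid_cols, block))
    _masked_transpose_kernel[grid](x, out, n_rows, valid_cols,
                                   x.stride(0), x.stride(1),
                                   out.stride(0), out.stride(1),
                                   BLOCK=block)
    return out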

tensorrt_llm/_torch/modules/linear.py (1)

582-590: Verify error handling for deep_gemm.fp8_gemm_nt failures

The calls to deep_gemm.fp8_gemm_nt in both

  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py

assume the underlying C-extension will either succeed or raise a Python exception. If you need to detect timeout, out-of-memory, or other low-level failures and either recover or emit a clearer error message, wrap each deep_gemm.fp8_gemm_nt(...) invocation in a try/except block.

• Confirm what errors (if any) deep_gemm.fp8_gemm_nt may raise on failure
• Decide on fallback behavior or more descriptive logging/user feedback
• Update the two call sites to catch and handle those exceptions appropriately
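A minimal sketch of the suggested guard, under the assumption that RuntimeError is the relevant failure mode (as the comment says, the actual exception types raised by deep_gemm.fp8_gemm_nt would need to be confirmed first):

def checked_fp8_gemm_nt(gemm_fn, *args, **kwargs):
    # gemm_fn stands in for deep_gemm.fp8_gemm_nt. Per the coding
    # guidelines quoted above, keep the except clause limited to the
    # smallest set of errors once the real failure modes are known.
    try:
        return gemm_fn(*args, **kwargs)
    except RuntimeError as err:  # placeholder exception type
        shapes = [getattr(a, "shape", None) for a in args]
        raise RuntimeError(f"fp8_gemm_nt failed for operand shapes {shapes}") from err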

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #13921 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #13921 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10483 completed with status: 'FAILURE'

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #13937 [ run ] triggered by Bot

@Barry-Delaney Barry-Delaney changed the title [TRTLLM-6334][feat] Add new featureAdd runtime swap AB for SM100 FP8 blockwise GEMM [TRTLLM-6334][feat] Add runtime swap AB for SM100 FP8 blockwise GEMM Aug 4, 2025
@Barry-Delaney

/bot kill

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #13943 [ kill ] triggered by Bot

@tensorrt-cicd

PR_Github #13937 [ run ] completed with state ABORTED

@tensorrt-cicd

PR_Github #13945 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #13943 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit e16a15f

@Barry-Delaney

/bot kill

@tensorrt-cicd

PR_Github #13963 [ kill ] triggered by Bot

@tensorrt-cicd

PR_Github #13945 [ run ] completed with state ABORTED

@tensorrt-cicd

PR_Github #13963 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit e16a15f

@tensorrt-cicd

PR_Github #14939 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11277 completed with status: 'FAILURE'

@Barry-Delaney

/bot run

@Barry-Delaney

/bot kill

@Barry-Delaney Barry-Delaney force-pushed the user/barry/swap_ab branch 2 times, most recently from c368be1 to e82f206 Compare August 13, 2025 03:18
@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #15065 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15065 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11375 completed with status: 'FAILURE'

@Barry-Delaney Barry-Delaney force-pushed the user/barry/swap_ab branch 2 times, most recently from f83e882 to 0160ff0 Compare August 13, 2025 08:05
@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #15100 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15100 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11400 completed with status: 'FAILURE'

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #15149 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15149 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11440 completed with status: 'FAILURE'

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #15215 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15215 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11492 completed with status: 'FAILURE'

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #15228 [ run ] triggered by Bot

@Barry-Delaney

/bot run

@tensorrt-cicd

PR_Github #15361 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15361 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11584 completed with status: 'FAILURE'
