Conversation

@Funatiq (Collaborator) commented Jul 2, 2025

Description

Please see commit messages for details.

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
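
For example, a hypothetical invocation that reruns only the A30 test stages with fail-fast disabled would be:

/bot run --gpu-type "A30" --disable-fail-fast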

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without careful validation can break the top of tree.

@Funatiq (Collaborator, Author) commented Jul 2, 2025

/bot run

@tensorrt-cicd: PR_Github #10670 [ run ] triggered by Bot

@tensorrt-cicd: PR_Github #10670 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #7894 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Jul 3, 2025

/bot run

@tensorrt-cicd: PR_Github #10814 [ run ] triggered by Bot

@tensorrt-cicd: PR_Github #10814 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #7988 completed with status: 'FAILURE'

@Funatiq Funatiq force-pushed the dev/refactor_decoder_inputs branch from f93aa23 to e0ca359 on July 3, 2025 13:07
@Funatiq (Collaborator, Author) commented Jul 3, 2025

/bot run

@tensorrt-cicd: PR_Github #10834 [ run ] triggered by Bot

@tensorrt-cicd: PR_Github #10834 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #8004 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Jul 3, 2025

/bot run

@tensorrt-cicd: PR_Github #10853 [ run ] triggered by Bot

@tensorrt-cicd: PR_Github #10853 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #8020 completed with status: 'FAILURE'

@Funatiq Funatiq force-pushed the dev/refactor_decoder_inputs branch from e0ca359 to 0d3a4e1 on July 4, 2025 08:02
Funatiq added 6 commits July 4, 2025 10:05
…chInputOutput

- Move speculative decoding input handling from gptDecoderBatched.cpp to makeDecodingBatchInputOutput.cpp.
- Setup medusa, explicit draft tokens and eagle inputs in makeDecodingBatchInputOutput.cpp.

Signed-off-by: Robin Kobus <[email protected]>
- Refactor the handling of Medusa, explicit draft tokens and Eagle inputs in `makeDecodingBatchInputOutput.cpp` to utilize `RuntimeBuffers` directly.
- Remove now unused member variables from `decoder_batch::Input` to streamline the codebase.
- Remove constructor from `EagleInputs` to simplify the codebase.

Signed-off-by: Robin Kobus <[email protected]>
- Move generationSteps handling from gptDecoderBatched.cpp to makeDecodingBatchInputOutput.cpp.

Signed-off-by: Robin Kobus <[email protected]>
- Set generationSteps directly in `makeDecodingBatchInputOutput.cpp`.
- Remove now unused member variables from `decoder_batch::Input` to streamline the codebase.

Signed-off-by: Robin Kobus <[email protected]>
- Introduced `getGenerationSteps` and `setGenerationSteps` methods in `DecoderState` to manage generation steps for batch requests.
- Updated Python bindings to expose the new generation steps functionality.
- Adjusted `make_decoding_batch_input_output.py` to set generation steps directly on the decoder state.

Signed-off-by: Robin Kobus <[email protected]>
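
Taken together, the commits above move the generation-step bookkeeping onto the decoder state. Below is a minimal sketch of what the new accessors and the generation_steps binding could look like; it uses simplified stand-in types (SizeType32, a bare DecodingInput, an illustrative module name) rather than the real TensorRT-LLM classes, which are richer and may differ in detail.

    // Minimal sketch only: stand-in types, not the actual TensorRT-LLM classes.
    #include <cstdint>
    #include <memory>
    #include <vector>

    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>

    using SizeType32 = std::int32_t;

    // Simplified stand-in for the runtime DecodingInput.
    struct DecodingInput
    {
        std::vector<SizeType32> generationSteps;
    };

    class DecoderState
    {
    public:
        DecoderState()
            : mJointDecodingInput{std::make_unique<DecodingInput>()}
        {
        }

        // Per-request generation steps, used for variable beam width search.
        [[nodiscard]] std::vector<SizeType32> const& getGenerationSteps() const
        {
            return mJointDecodingInput->generationSteps;
        }

        // Storing the steps on the state lets the decoding loop read them here
        // instead of receiving them through decoder_batch::Input.
        void setGenerationSteps(std::vector<SizeType32> const& generationSteps)
        {
            mJointDecodingInput->generationSteps = generationSteps;
        }

    private:
        std::unique_ptr<DecodingInput> mJointDecodingInput;
    };

    // Hypothetical pybind11 exposure mirroring the generation_steps property
    // mentioned in the commits; the module name is illustrative.
    PYBIND11_MODULE(decoder_state_sketch, m)
    {
        pybind11::class_<DecoderState>(m, "DecoderState")
            .def(pybind11::init<>())
            .def_property("generation_steps", &DecoderState::getGenerationSteps,
                &DecoderState::setGenerationSteps);
    }

With a binding along these lines, make_decoding_batch_input_output.py can assign decoder_state.generation_steps directly, as the last commit describes.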
@Funatiq Funatiq force-pushed the dev/refactor_decoder_inputs branch from 0d3a4e1 to 07c1d7e on July 4, 2025 08:05
@Funatiq (Collaborator, Author) commented Jul 4, 2025

/bot run

@tensorrt-cicd: PR_Github #10974 [ run ] triggered by Bot

@tensorrt-cicd: PR_Github #10974 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8107 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Jul 4, 2025

/bot run --disable-fail-fast

@tensorrt-cicd: PR_Github #11006 [ run ] triggered by Bot

@tensorrt-cicd: PR_Github #11006 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8131 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@Funatiq Funatiq requested review from Copilot and DomBrown July 4, 2025 15:26
@Funatiq Funatiq marked this pull request as ready for review July 4, 2025 15:27
@Funatiq Funatiq requested a review from a team as a code owner July 4, 2025 15:27
@Funatiq Funatiq requested a review from achartier July 4, 2025 15:27
@Copilot (Copilot AI, Contributor) left a comment

Pull Request Overview

This PR refactors how decoding inputs (especially generationSteps and speculative decoding data) are managed, moving state handling onto DecoderState and cleaning up deprecated fields.

  • Moved generation_steps assignment from local decoding input objects to DecoderState via setter/getter.
  • Removed inline speculative decoding setup functions in gptDecoderBatched.cpp and centralized them in the batch manager.
  • Updated Python bindings to expose generation_steps on DecoderState instead of the batch input struct.

Reviewed Changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated no comments.

Summary per file:
- tensorrt_llm/_torch/pyexecutor/make_decoding_batch_input_output.py: Redirect generation_steps assignment to decoder_state setter
- cpp/tensorrt_llm/runtime/gptDecoderBatched.cpp: Removed inline speculative-decoding branches and generationSteps setup
- cpp/tensorrt_llm/runtime/decoderState.cpp: Added getter/setter for generationSteps
- cpp/tensorrt_llm/pybind/runtime/bindings.cpp: Exposed generation_steps property on DecoderState
- cpp/tensorrt_llm/batch_manager/makeDecodingBatchInputOutput.cpp: Introduced shared set*Inputs helpers for speculative decoding modes
- cpp/include/tensorrt_llm/runtime/iGptDecoderBatched.h: Removed outdated buffers includes; added config includes
- cpp/include/tensorrt_llm/runtime/decodingInput.h: Cleaned up constructor definitions for EagleInputs
- cpp/include/tensorrt_llm/runtime/decoderState.h: Declared new getGenerationSteps/setGenerationSteps methods
Comments suppressed due to low confidence (2)

cpp/tensorrt_llm/runtime/gptDecoderBatched.cpp:115

  • The generationSteps field on dInput is no longer initialized here for beam search. You should restore setting dInput.generationSteps (e.g., when decoderState.getMaxBeamWidth() > 1) using the state’s stored steps so variable-beam search still works.
    dInput.batchSlots = input.batchSlots.at(step);

cpp/include/tensorrt_llm/runtime/iGptDecoderBatched.h:23

  • [nitpick] This header isn’t referenced in the interface declared here. Consider removing the worldConfig.h include to reduce unnecessary dependencies.
#include "tensorrt_llm/runtime/worldConfig.h"

@Funatiq Funatiq merged commit ae27261 into NVIDIA:main Jul 6, 2025
3 checks passed
@Funatiq Funatiq deleted the dev/refactor_decoder_inputs branch July 8, 2025 12:39
dominicshanshan pushed commits to dominicshanshan/TensorRT-LLM that referenced this pull request 8 times between Jul 9 and Jul 11, 2025.
zhou-yuxin pushed a commit to zhou-yuxin/TensorRT-LLM that referenced this pull request Jul 15, 2025
Signed-off-by: Robin Kobus <[email protected]>
Signed-off-by: Yuxin <[email protected]>