Conversation
@symphonylyh symphonylyh commented May 26, 2025

Description

NOTE: this PR provides a problem statement and proof-of-concept solutions. We plan to address the problem more formally in the follow-up PRs #4799 and #5522; please take a look at them. This PR will be closed once the refactoring is complete.

Existing workflow:

  • vision forward runs on the main process, LLM forward on the worker process; the multimodal embedding is transferred over IPC.
  • heavy IPC D2H & H2D overhead due to the queue structure used; tensor pickle serialization & deserialization is involved.
  • vision forward runs per request, so vision is limited to BS=1.
    [nsys trace of the old workflow processing 3 images]
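To illustrate why shipping the embedding over IPC is costly, here is a minimal self-contained sketch. All shapes are illustrative assumptions (not measured from this PR), and plain-Python arrays stand in for tensors; it compares the pickled size of a raw image against the vision embedding it expands into:

```python
import pickle
from array import array

# Hypothetical sizes for illustration only (not taken from the PR):
# a raw RGB image vs. the vision embedding it expands into.
IMG_H, IMG_W, CHANNELS = 336, 336, 3   # raw image pixels (uint8-like)
NUM_PATCHES, HIDDEN = 576, 4096        # embedding: patches x hidden dim (fp32-like)

raw_image = array("B", bytes(IMG_H * IMG_W * CHANNELS))   # stand-in for an image tensor
embedding = array("f", [0.0] * (NUM_PATCHES * HIDDEN))    # stand-in for the vision output

# The old workflow pickles the *embedding* across the IPC queue, once per request.
image_payload = pickle.dumps(raw_image)
embedding_payload = pickle.dumps(embedding)

# The embedding payload dwarfs the raw image, so moving the vision forward
# to the worker and sending only the raw image shrinks serialized IPC traffic.
assert len(embedding_payload) > len(image_payload)
print(len(image_payload), len(embedding_payload))
```

Under these assumed shapes the embedding payload is tens of times larger than the raw image, and the serialize/deserialize cost is paid once per request in the BS=1 regime.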

Improved workflow:

  • vision & LLM run in the same forward, on the worker process; the raw image tensor is transferred over IPC.
  • allows BS > 1 for vision, although it forces a vision batch to finish the context/prefill phase immediately; it cannot be prefilled later.
  • requires modifying the attn_metadata struct inside forward. The overhead of the prepare_multimodal_ifb() call is negligible in the nsys profile, but the caveat is that this is not yet compatible with CUDA graph mode.
  • the IFB token-capacity estimate may be incorrect during request scheduling, because the vision context length is unknown at scheduling time.
    [nsys trace of the improved workflow]
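The improved flow can be sketched as follows. This is a toy model, not the PR's actual API: `Request`, `AttnMetadata`, `vision_forward_batched`, and the shown body of `prepare_multimodal_ifb` are all hypothetical stand-ins (only the function name `prepare_multimodal_ifb` comes from the PR text). It shows the key idea: one batched vision forward, followed by patching attention metadata inside the forward pass once image token counts are known:

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt_len: int          # text tokens, known at scheduling time
    num_image_tokens: int    # only known after the vision encoder runs

@dataclass
class AttnMetadata:
    seq_lens: list = field(default_factory=list)

def vision_forward_batched(reqs):
    # One batched vision forward (BS > 1), instead of one run per request.
    return [r.num_image_tokens for r in reqs]

def prepare_multimodal_ifb(meta, reqs, image_token_counts):
    # Patch sequence lengths inside forward(), once vision lengths are known.
    # This late mutation is also why CUDA graph capture is not yet compatible:
    # the metadata cannot be frozen before the forward pass starts.
    meta.seq_lens = [r.prompt_len + n for r, n in zip(reqs, image_token_counts)]
    return meta

reqs = [Request(16, 576), Request(20, 576), Request(8, 576)]
meta = prepare_multimodal_ifb(AttnMetadata(), reqs, vision_forward_batched(reqs))
print(meta.seq_lens)
```

This also makes the scheduling caveat concrete: the scheduler only sees `prompt_len` (16, 20, 8) when estimating token capacity, while the true context lengths (592, 596, 584) emerge inside the forward.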

Notes:

  • use TLLM_MULTIMODAL_DISAGGREGATED to toggle between the old (=1) and new (=0, default) workflows for now
  • use llava-next as the experimental example
  • --use_cuda_graph doesn't work with this IFB mode because attn_metadata.prepare() needs to be called inside forward()
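The toggle described above might look like the following sketch. The env var name `TLLM_MULTIMODAL_DISAGGREGATED` and its semantics come from the PR; the `select_workflow` helper and the workflow labels are purely illustrative:

```python
import os

def select_workflow(env=None):
    """Pick the multimodal workflow from the environment.

    "1" selects the old disaggregated path (vision on main process);
    unset or "0" selects the new in-worker, in-flight-batched path (default).
    """
    env = os.environ if env is None else env
    if env.get("TLLM_MULTIMODAL_DISAGGREGATED", "0") == "1":
        return "disaggregated"
    return "inflight_batched"

print(select_workflow({"TLLM_MULTIMODAL_DISAGGREGATED": "1"}))
print(select_workflow({}))
```

Defaulting to the new path while keeping the old one behind an opt-in flag makes A/B profiling with nsys straightforward during the transition.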

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@symphonylyh symphonylyh requested review from a team as code owners May 26, 2025 21:43
@symphonylyh symphonylyh changed the title from "[TRTLLM-4958[feat] improve multimodal workflow with Vision + LLM inflight batching" to "[TRTLLM-4958][feat] improve multimodal workflow with Vision + LLM inflight batching" May 26, 2025
@symphonylyh

/bot run

@tensorrt-cicd

PR_Github #6509 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #6509 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #4764 completed with status: 'FAILURE'

Signed-off-by: symphonylyh <[email protected]>
Signed-off-by: Haohang Huang <[email protected]>
Signed-off-by: Haohang Huang <[email protected]>
@MartinMarciniszyn

@symphonylyh , is this PR still relevant, or should we close it?
