Conversation

@amitz-nv commented Jun 30, 2025

Description

Changes:

  1. Fixed BindCapacityScheduler to pass peft_cache_manager to the CPP binding.
    1.1. Updated BindCapacityScheduler construction sites accordingly.
  2. Fixed PeftCacheManager.free_resources to call mark_request_done.
  3. Changed the LoRA "prevent serialization of entire LoRA adapters in each request" optimization to check directly whether an adapter is loaded in the LoRA CPU cache, instead of consulting the additional Python dict adapter cache in LoraManager.
    3.1. Removed support for this optimization for non-torch flow.
    3.2. Added LoraManager.is_adapter_in_cpu_cache method.
    3.3. Added an optional cpp_peft_cache_manager argument to the LoraManager constructor, used by the newly added is_adapter_in_cpu_cache method (see the sketch after this list).
    3.4. Changed GenerationExecutorWorker, in the torch flow only, to get the CPP PEFT cache manager and pass it to the LoraManager constructor.
  4. Added LoRA eviction tests to test_llm.py and test_llm_pytorch.py.
  5. Refactored the existing LoRA tests in test_llm.py and test_llm_pytorch.py into a separate file, removing duplication.
  6. Added a detailed "not supported" note to the exception thrown in the unsupported flow of handling a LoRA request that has no LoRA weights/config and whose adapter is not found in the LoRA CPU cache, a flow introduced in [TRTLLM-5921][feat] Prevent serialization of entire LoRA adapters in each request #5080.
    6.1. For the PyTorch flow - Changed PeftCacheManager::determineNumPages to throw a PeftTaskNotCachedException with a detailed "not supported" note when the request has no LoRA weights, no LoRA config, and its LoRA adapter is not found in the cache.
    6.2. For the TRT flow - Appended a detailed "not supported" note to the error message thrown in PeftCacheManager::addRequestPeft when the request has no LoRA weights or no LoRA config and its LoRA adapter is not found in the cache. REVERTED, as the optimization was disabled for the non-torch flow.
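
To make items 3.2-3.4 concrete, here is a minimal Python sketch of how the new cache lookup might be wired. The CppPeftCacheManager protocol and its is_task_cached method are illustrative assumptions, not the actual CPP binding API:

    from typing import Optional, Protocol

    class CppPeftCacheManager(Protocol):
        # Illustrative stand-in for the CPP PEFT cache manager binding;
        # the real binding's method name may differ.
        def is_task_cached(self, adapter_uid: int) -> bool: ...

    class LoraManager:
        def __init__(self, cpp_peft_cache_manager: Optional[CppPeftCacheManager] = None):
            # Item 3.3: optional handle to the CPP PEFT cache manager. Only the
            # torch-flow GenerationExecutorWorker passes it in (item 3.4).
            self._cpp_peft_cache_manager = cpp_peft_cache_manager

        def is_adapter_in_cpu_cache(self, adapter_uid: int) -> bool:
            # Item 3.2: ask the LoRA CPU cache directly instead of consulting
            # a Python-side dict that can drift from the real cache state.
            if self._cpp_peft_cache_manager is None:
                # Non-torch flow: no binding available, so the
                # serialization-skipping optimization is disabled (item 3.1).
                return False
            return self._cpp_peft_cache_manager.is_task_cached(adapter_uid)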

Test Coverage

  • test_llm.py::test_llama_7b_multi_lora_evict_load_new_adapters
  • test_llm_pytorch.py::test_llama_7b_multi_lora_evict_load_new_adapters
  • test_llm_pytorch.py::test_llama_7b_multi_lora_load_previously_cpu_cache_evicted_adapter_fails
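
The eviction scenarios these tests exercise can be modeled with a small, self-contained sketch. Everything below is a toy stand-in (the _FakeLlm class and its cache are assumptions for illustration, not the real tensorrt_llm API or test code):

    import pytest

    class _FakeLlm:
        """Toy LLM whose LoRA CPU cache holds at most `capacity` adapters."""

        def __init__(self, capacity: int):
            self._capacity = capacity
            self._cpu_cache: list[int] = []  # adapter uids, oldest first

        def generate(self, prompt: str, adapter_uid: int, has_weights: bool) -> str:
            if adapter_uid not in self._cpu_cache:
                if not has_weights:
                    # Mirrors the PeftTaskNotCachedException path from item 6.1:
                    # the adapter was evicted and the request carries no weights
                    # with which to reload it.
                    raise RuntimeError("LoRA task not cached and request has no weights")
                self._cpu_cache.append(adapter_uid)
                if len(self._cpu_cache) > self._capacity:
                    self._cpu_cache.pop(0)  # evict the oldest adapter
            return f"{prompt} [adapter {adapter_uid}]"

    def test_evict_then_reload_with_weights():
        llm = _FakeLlm(capacity=2)
        for uid in range(3):  # the third adapter evicts adapter 0
            llm.generate("hi", uid, has_weights=True)
        # Re-requesting the evicted adapter with weights succeeds (reload).
        assert "adapter 0" in llm.generate("hi", 0, has_weights=True)

    def test_evicted_adapter_without_weights_fails():
        llm = _FakeLlm(capacity=1)
        llm.generate("hi", 0, has_weights=True)
        llm.generate("hi", 1, has_weights=True)  # evicts adapter 0
        with pytest.raises(RuntimeError):
            llm.generate("hi", 0, has_weights=False)

The second test mirrors the failure case: once an adapter has been evicted from the CPU cache, a request that carries no weights cannot reload it.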

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

Summary by CodeRabbit

  • New Features

    • Added ability to check if a LoRA adapter is present in the CPU cache.
    • Enhanced LoRA adapter management for both PyTorch and TensorRT backends, improving cache integration and resource handling.
    • Introduced new utility functions and context managers for test subprocess management and list operations.
  • Bug Fixes

    • Improved error messages and exception handling when LoRA adapters are missing or evicted from cache.
  • Tests

    • Replaced and expanded LoRA adapter tests to cover adapter eviction, cache reloading, and failure scenarios.
    • Added new test utilities for multi-LoRA adapter scenarios and subprocess execution with output capture.
  • Documentation

    • Added explanatory comments to clarify backend-specific LoRA behavior and cache limitations.

@amitz-nv requested a review from shaharmor98 June 30, 2025 14:59
@amitz-nv force-pushed the dev-support-pytorch-lora-adapter-eviction branch from c86896a to 709ff70 on July 3, 2025 15:41
@amitz-nv commented Jul 3, 2025

/bot run

@tensorrt-cicd:

PR_Github #10850 [ run ] triggered by Bot

@tensorrt-cicd:

PR_Github #10850 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8018 completed with status: 'FAILURE'

@amitz-nv commented Jul 6, 2025

/bot run --gpu-type A100X --disable-multi-gpu-test --post-merge

@tensorrt-cicd:

PR_Github #11046 [ run ] triggered by Bot

@tensorrt-cicd:

PR_Github #11046 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8166 (Partly Tested) completed with status: 'FAILURE'

@amitz-nv force-pushed the dev-support-pytorch-lora-adapter-eviction branch from 495f319 to 523efb7 on July 6, 2025 15:17
@amitz-nv commented Jul 6, 2025

/bot run

@tensorrt-cicd:

PR_Github #11059 [ run ] triggered by Bot

@tensorrt-cicd:

PR_Github #11059 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8177 completed with status: 'FAILURE'

@amitz-nv force-pushed the dev-support-pytorch-lora-adapter-eviction branch from beeb0a3 to 45fb302 on July 7, 2025 07:23
@amitz-nv commented Jul 7, 2025

/bot run

@tensorrt-cicd:

PR_Github #11117 [ run ] triggered by Bot

@amitz-nv marked this pull request as ready for review July 7, 2025 11:31
@amitz-nv requested review from a team as code owners July 7, 2025 11:31
@amitz-nv requested reviews from Fridah-nv and achartier July 7, 2025 11:31
@amitz-nv commented Jul 7, 2025

/bot run --post-merge

@tensorrt-cicd:

PR_Github #11148 [ run ] triggered by Bot

@tensorrt-cicd:

PR_Github #11117 [ run ] completed with state ABORTED

@amitz-nv requested a review from omera-nv July 7, 2025 11:47
@tensorrt-cicd:

PR_Github #11148 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8243 completed with status: 'FAILURE'

@Fridah-nv left a comment:

AutoDeploy change LGTM

@amitz-nv commented Jul 8, 2025

/bot run
