
Conversation

Contributor

@dolpm dolpm commented Aug 27, 2025

Summary

FIX #25013 — the vllm::lora_shrink custom op now requires a no_lora_flag_cpu argument that benchmarks/kernels/benchmark_lora.py was not supplying, so model_bench runs crashed at op invocation (see the traceback below and the sketch after the logs).

before

python3 benchmarks/kernels/benchmark_lora.py model_bench --models meta-llama/Llama-3-8b  --arg-pool-size 32 --batch-sizes 1 16 32 --dtype torch.float16  --lora-ranks 16 --num-loras 1 4 --op-types lora_shrink lora_expand --seq-lengths 1 16 --sort-by-lora-id 1 --cuda-graph-nops 32 
INFO 08-27 14:05:28 [__init__.py:241] Automatically detected platform cuda.
Namespace(cmd='model_bench', models=['meta-llama/Llama-3-8b'], tp_sizes=[1], lora_ranks=[16], dtype=torch.float16, arg_pool_size=32, cuda_graph_nops=32, num_loras=[1, 4], num_active_loras=None, sort_by_lora_id=[True], op_types=[<OpType.LORA_SHRINK: 1>, <OpType.LORA_EXPAND: 2>], seq_lengths=[1, 16], batch_sizes=[1, 16, 32], expand_fn_add_inputs=[True, False], output_directory=None, test_correctness=False, func=<function run_model_bench at 0x7fe10b912200>)
Model bench :
 Hidden Sizes {6144, 4096, 28672} LoRA Ranks [16]
Benchmarking 32 invocations inside a CUDA Graph
Traceback (most recent call last):
  File "/data/users/$USER/vllm/benchmarks/kernels/benchmark_lora.py", line 1065, in <module>
    args.func(args)
  File "/data/users/$USER/vllm/benchmarks/kernels/benchmark_lora.py", line 918, in run_model_bench
    run(args, bench_contexts)
  File "/data/users/$USER/vllm/benchmarks/kernels/benchmark_lora.py", line 793, in run
    bench_optype(
  File "/data/users/$USER/vllm/benchmarks/kernels/benchmark_lora.py", line 642, in bench_optype
    op_type.bench_fn()(**kwargs)
  File "/data/users/$USER/pytorch/torch/_ops.py", line 1254, in __call__
    return self._op(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: vllm::lora_shrink() is missing value for argument 'no_lora_flag_cpu'. Declaration: vllm::lora_shrink(Tensor inputs, Tensor[] lora_a_weights, Tensor(a2!) output_tensor, Tensor token_lora_mapping, Tensor token_indices_sorted_by_lora_ids, Tensor num_tokens_per_lora, Tensor lora_token_start_loc, Tensor lora_ids, Tensor no_lora_flag_cpu, float scaling) -> ()

after

python3 benchmarks/kernels/benchmark_lora.py model_bench --models meta-llama/Llama-3-8b  --arg-pool-size 32 --batch-sizes 1 16 32 --dtype torch.float16  --lora-ranks 16 --num-loras 1 4 --op-types lora_shrink lora_expand --seq-lengths 1 16 --sort-by-lora-id 1 --cuda-graph-nops 32 
INFO 08-27 13:24:22 [__init__.py:241] Automatically detected platform cuda.
Namespace(cmd='model_bench', models=['meta-llama/Llama-3-8b'], tp_sizes=[1], lora_ranks=[16], dtype=torch.float16, arg_pool_size=32, cuda_graph_nops=32, num_loras=[1, 4], num_active_loras=None, sort_by_lora_id=[True], op_types=[<OpType.LORA_SHRINK: 1>, <OpType.LORA_EXPAND: 2>], seq_lengths=[1, 16], batch_sizes=[1, 16, 32], expand_fn_add_inputs=[True, False], output_directory=None, test_correctness=False, func=<function run_model_bench at 0x7f62e6ae6160>)
Model bench :
 Hidden Sizes {6144, 4096, 28672} LoRA Ranks [16]
Benchmarking 32 invocations inside a CUDA Graph
[------------------------------------------------------------------------------------------------------------------------------ lora-torch.float16 | cugraph 32 ops -------------------------------------------------------------------------------------------------------------------------------]
                                                                                                             |  single-lora roofline using torch.mm (f16xf16=>f16)  |  LORA_SHRINK() (f16xf16=>f32)  |  LORA_EXPAND(add_inputs=True) (f32xf16=>f16)  |  LORA_EXPAND(add_inputs=False) (f32xf16=>f16)
1 threads: -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      {"bs": 1, "sl": 1, "m": 1, "k": 6144, "n": 16, "num_loras": 1, "sort_by_lora": true, "num_slices": 1}  |                        176.5                         |             129.6              |                                               |                                              
      {"bs": 1, "sl": 1, "m": 1, "k": 6144, "n": 16, "num_loras": 1, "sort_by_lora": true, "num_slices": 2}  |                        176.7                         |             145.1              |                                               |                                              
      {"bs": 1, "sl": 1, "m": 1, "k": 6144, "n": 16, "num_loras": 1, "sort_by_lora": true, "num_slices": 3}  |                        188.3                         |             169.1              |                                               |                                              
      {"bs": 1, "sl": 1, "m": 1, "k": 16, "n": 6144, "num_loras": 1, "sort_by_lora": true, "num_slices": 1}  |                         81.9                         |                                |                     135.7                     |                     126.0                    
      {"bs": 1, "sl": 1, "m": 1, "k": 16, "n": 6144, "num_loras": 1, "sort_by_lora": true, "num_slices": 2}  |                         92.6                         |                                |                     373.6                     |                     331.4                    
      {"bs": 1, "sl": 1, "m": 1, "k": 16, "n": 6144, "num_loras": 1, "sort_by_lora": true, "num_slices": 3}  |                         97.1                         |                                |                     407.2                     |                     356.2                    

Times are in microseconds (us).
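
For context, below is a minimal sketch of the kind of change involved. It is illustrative only and not the PR diff: the helper names (make_no_lora_flag, lora_shrink_kwargs) are hypothetical, the argument list simply mirrors the vllm::lora_shrink declaration printed in the traceback above, and the semantics assumed for no_lora_flag_cpu (a 1-element CPU bool tensor marking the all-no-LoRA case) are an assumption.

```python
# Illustrative sketch, not the actual PR diff: build kwargs for
# torch.ops.vllm.lora_shrink that match the declaration shown in the
# traceback above, including the previously missing no_lora_flag_cpu.
import torch


def make_no_lora_flag(lora_ids: torch.Tensor) -> torch.Tensor:
    # Assumed semantics: a 1-element CPU bool tensor that is True only when
    # every slot maps to "no LoRA" (id == -1).
    return torch.tensor([bool((lora_ids == -1).all())], dtype=torch.bool)


def lora_shrink_kwargs(inputs, lora_a_weights, output_tensor,
                       token_lora_mapping, token_indices_sorted_by_lora_ids,
                       num_tokens_per_lora, lora_token_start_loc,
                       lora_ids, scaling):
    # Keyword names follow the op declaration from the error message.
    return dict(
        inputs=inputs,
        lora_a_weights=lora_a_weights,
        output_tensor=output_tensor,
        token_lora_mapping=token_lora_mapping,
        token_indices_sorted_by_lora_ids=token_indices_sorted_by_lora_ids,
        num_tokens_per_lora=num_tokens_per_lora,
        lora_token_start_loc=lora_token_start_loc,
        lora_ids=lora_ids,
        no_lora_flag_cpu=make_no_lora_flag(lora_ids),  # previously missing argument
        scaling=scaling,
    )

# Usage inside the benchmark would then look roughly like:
# torch.ops.vllm.lora_shrink(**lora_shrink_kwargs(...))
```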



👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of the fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify mergify bot added the performance Performance-related issues label Aug 27, 2025
@dolpm dolpm marked this pull request as ready for review August 27, 2025 21:17
Signed-off-by: Dylan Maloy <[email protected]>
@jeejeelee jeejeelee added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 17, 2025
Collaborator

@jeejeelee jeejeelee left a comment

Sorry for missing this PR.
LGTM, thank you for the contribution.

@DarkLight1337 DarkLight1337 merged commit 1b962e2 into vllm-project:main Sep 17, 2025
28 checks passed
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
Signed-off-by: Dylan Maloy <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Signed-off-by: charlifu <[email protected]>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: Dylan Maloy <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Signed-off-by: xuebwang-amd <[email protected]>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025

Labels

performance Performance-related issues ready ONLY add when PR is ready to merge/full CI is needed


Development

Successfully merging this pull request may close these issues.

[Bug]: benchmark_lora.py run fail with RuntimeError: vllm::lora_shrink() is missing value for argument 'no_lora_flag_cpu' error

3 participants