
Conversation

@Chenyaaang Chenyaaang commented Apr 25, 2025

Create a script to auto-tune server parameters.

This script searches for the server parameter combination that maximizes throughput for a given workload. The parameters currently tuned are max_num_seqs and max_num_batched_tokens. It also supports additional requirements: e2e latency and prefix cache hit rate. A sketch of the search loop follows the use cases below.

Use cases:

  1. Given input_len=1800 and output_len=20, what are the best max_num_seqs and max_num_batched_tokens values to get the highest throughput?
  2. If we require e2e latency to be lower than 500 ms, what are the best server parameters?
  3. If we want to reach a 60% prefix cache hit rate, what are the best server parameters?
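For reference, here is a minimal sketch of the tuning loop described above, written as illustrative Python rather than the merged script itself. `launch_server`, `run_benchmark`, the candidate grids, and the metric keys are hypothetical stand-ins, and the benchmark returns dummy numbers so the sketch runs end to end; consult the actual script for the real flags and measurement logic.

```python
# Hypothetical sketch of the auto-tune loop: grid-search max_num_seqs and
# max_num_batched_tokens, keep the highest-throughput combination that
# satisfies the optional latency and prefix-cache requirements.
import itertools
import random

MAX_NUM_SEQS_GRID = [128, 256, 512]              # hypothetical candidates
MAX_NUM_BATCHED_TOKENS_GRID = [512, 1024, 2048]  # hypothetical candidates
LATENCY_LIMIT_MS = 500.0      # optional e2e latency requirement (use case 2)
MIN_PREFIX_CACHE_RATE = 0.60  # optional prefix-cache requirement (use case 3)

class DummyServer:
    """Stand-in for a launched vLLM server process."""
    def terminate(self) -> None:
        pass

def launch_server(max_num_seqs: int, max_num_batched_tokens: int) -> DummyServer:
    # Hypothetical: would start a vLLM server with the candidate parameters.
    return DummyServer()

def run_benchmark(input_len: int, output_len: int) -> dict:
    # Hypothetical: would send a fixed workload and measure the server.
    # Dummy numbers here so the sketch is executable.
    return {
        "throughput_tok_s": random.uniform(1000, 5000),
        "e2e_latency_ms": random.uniform(200, 800),
        "prefix_cache_rate": random.uniform(0.3, 0.9),
    }

best = None
for seqs, tokens in itertools.product(MAX_NUM_SEQS_GRID,
                                      MAX_NUM_BATCHED_TOKENS_GRID):
    server = launch_server(seqs, tokens)
    try:
        metrics = run_benchmark(input_len=1800, output_len=20)
    finally:
        server.terminate()
    # Drop candidates that violate the optional requirements.
    if metrics["e2e_latency_ms"] > LATENCY_LIMIT_MS:
        continue
    if metrics["prefix_cache_rate"] < MIN_PREFIX_CACHE_RATE:
        continue
    if best is None or metrics["throughput_tok_s"] > best[0]:
        best = (metrics["throughput_tok_s"], seqs, tokens)

if best is not None:
    print(f"best: {best[0]:.0f} tok/s at max_num_seqs={best[1]}, "
          f"max_num_batched_tokens={best[2]}")
```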


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@Chenyaaang Chenyaaang changed the title from "publish script to auto tune server parameter" to "[Misc][Tools][Benchmark] Publish script to auto tune server parameters" Apr 25, 2025

@yaochengji yaochengji left a comment


Thanks for your contribution, @Chenyaaang!

@mgoin @robertgshaw2-redhat what do you think about this PR? Is there any plan to support service-level parameter auto-tuning in the vLLM community?

Signed-off-by: Chenyaaang <[email protected]>
@Chenyaaang Chenyaaang closed this Apr 30, 2025
@Chenyaaang Chenyaaang reopened this Apr 30, 2025
@robertgshaw2-redhat robertgshaw2-redhat enabled auto-merge (squash) May 1, 2025 18:44
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) May 1, 2025
@robertgshaw2-redhat robertgshaw2-redhat merged commit 9b70e2b into vllm-project:main May 1, 2025
46 checks passed
radeksm pushed a commit to radeksm/vllm that referenced this pull request May 2, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
mawong-amd pushed a commit to ROCm/vllm that referenced this pull request May 14, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025