Add a batched auto tune script #25076
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of tests runs automatically. You can ask your reviewers to trigger select CI tests on top of that. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Code Review
This pull request introduces a batch auto-tuning script, batch_auto_tune.sh, which is a great addition for running multiple experiments. The accompanying documentation update in the README is clear and comprehensive.
My review focuses on improving the robustness and correctness of the new bash script. I've identified a critical issue with how the script saves progress, which could lead to data loss, and a couple of high-severity issues related to path resolution and input validation that could cause the script to fail unexpectedly. The suggested changes will make the script more reliable and easier to use.
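The robustness issues the review raises (incremental progress saving, script-relative path resolution, and input validation) can be illustrated with a small sketch. This is not the PR's actual code; the function and file names here are hypothetical stand-ins for a batch runner that invokes auto_tune.sh once per experiment.

```shell
#!/usr/bin/env bash
# Illustrative sketch (hypothetical names, not the PR's implementation)
# of the three robustness patterns the review calls out.
set -u

run_batch() {
  local auto_tune="$1" input_file="$2" results_file="$3"

  # Input validation: fail early with a clear message instead of
  # failing partway through the batch.
  if [[ ! -f "$input_file" ]]; then
    echo "error: experiments file not found: $input_file" >&2
    return 1
  fi

  # Incremental progress saving: append a line after every experiment,
  # so a crash mid-batch loses at most one result, not the whole run.
  while IFS= read -r experiment; do
    [[ -z "$experiment" ]] && continue
    # shellcheck disable=SC2086  # word-split the experiment into args
    if "$auto_tune" $experiment; then
      echo "DONE: $experiment" >> "$results_file"
    else
      echo "FAILED: $experiment" >> "$results_file"
    fi
  done < "$input_file"
}

# Path resolution: locate the helper relative to this script,
# not the caller's working directory.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
```

A design note: writing one result line per experiment, rather than collecting everything and writing once at the end, is what makes the batch restartable after a failure.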
Signed-off-by: Karan Goel <[email protected]>
LGTM, thanks for the improvement!
Purpose
Make it easy to run multiple auto tune experiments.
Test Plan
Tested locally against the real auto_tune.sh script.
Test Result
Successfully ran six back-to-back auto_tune.sh experiments.
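For illustration, "back-to-back experiments" amounts to invoking the tuning command once per configuration, sequentially. The sketch below is hypothetical (batch_auto_tune.sh's real interface may differ); tune_cmd stands in for auto_tune.sh.

```shell
# Hypothetical sketch: run each configuration in sequence and report
# how many completed successfully. Not the PR's actual interface.
run_all() {
  local tune_cmd="$1"; shift
  local n=0
  for cfg in "$@"; do
    # Count only runs that exit successfully; keep going on failure.
    "$tune_cmd" "$cfg" && n=$((n + 1))
  done
  echo "$n experiments completed"
}
```

Driving six configurations through this loop mirrors the test described above.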
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.