[Misc] Allow LoRA to adaptively increase rank and remove possible_max_ranks #10623
Conversation
Signed-off-by: JinhyunBang <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:

- Add ready label to the PR
- Enable auto-merge

🚀
@mgoin do you know why the LoRA sizes are fixed like that? Is it because we have compiled ops for this? cc @jeejeelee
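For context on the fixed sizes: vLLM's LoRA config validates the requested rank against a fixed whitelist (the `possible_max_ranks` this PR removes), largely so kernel shapes stay within padded, precompiled dimensions. Below is a minimal sketch of the fixed check versus an adaptive round-up; the whitelist values are an assumption based on `LoRAConfig`, and the adaptive helper is hypothetical, not this PR's exact code:

```python
# Sketch of the existing fixed-rank check ("possible_max_ranks" comes from
# the PR title; the exact values here are an assumption based on LoRAConfig).
possible_max_ranks = (8, 16, 32, 64, 128, 256)

def validate_fixed(max_lora_rank: int) -> int:
    # Current behavior: reject any rank outside the whitelist.
    if max_lora_rank not in possible_max_ranks:
        raise ValueError(
            f"max_lora_rank ({max_lora_rank}) must be one of "
            f"{possible_max_ranks}.")
    return max_lora_rank

def adaptive_rank(requested: int, cap: int = 256) -> int:
    # Hypothetical adaptive variant: round the requested rank up to the
    # next power of two so kernels still see aligned shapes, instead of
    # rejecting ranks that are not in the whitelist.
    if not 1 <= requested <= cap:
        raise ValueError(f"requested rank must be in [1, {cap}].")
    rank = 1
    while rank < requested:
        rank *= 2
    return rank

assert adaptive_rank(40) == 64  # padded up rather than rejected
```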
I'm curious whether the changes are compatible with CUDA graphs. Have you tested this?
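Background on that concern: CUDA graphs replay the exact kernel launches and tensor shapes that were captured, which is why LoRA buffers are normally padded to a fixed maximum rank before capture. A minimal PyTorch sketch of the constraint (illustrative only, not vLLM code):

```python
import torch

if torch.cuda.is_available():
    max_rank = 64  # fixed at capture time; changing it later requires re-capture
    x = torch.randn(8, 4096, device="cuda")
    lora_a = torch.randn(4096, max_rank, device="cuda")
    out = torch.empty(8, max_rank, device="cuda")

    # Warm-up on a side stream, as recommended before graph capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        out.copy_(x @ lora_a)
    torch.cuda.current_stream().wait_stream(s)

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        out.copy_(x @ lora_a)  # the (8, max_rank) shape is baked into the graph

    # Replay reuses the captured buffers in place; an adapter with a higher
    # rank cannot be swapped in without re-capturing the graph.
    x.copy_(torch.randn(8, 4096, device="cuda"))
    g.replay()
```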
I agree it would be nice to support larger ranks if the user needs it, but this should maybe be an explicitly set argument rather than inferred adaptively. I'm not sure this will work as-is, so I'd need to see some tests added.
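To illustrate the "set argument" suggestion: vLLM already exposes `max_lora_rank` on the engine, so the cap can be set explicitly at startup instead of grown adaptively. A hedged usage sketch (the model name and adapter path are placeholders):

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Set the maximum rank explicitly at engine construction time; an adapter
# with a higher rank is then rejected up front rather than silently resized.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # placeholder model
    enable_lora=True,
    max_lora_rank=64,                  # explicit, not inferred
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=16),
    lora_request=LoRARequest("my_adapter", 1, "/path/to/adapter"),  # placeholder path
)
```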
This pull request has merge conflicts that must be resolved before it can be merged.
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!
Related Issues: #2847, #3310, #3934