[lora] adapt new LoRA config injection method #11999
base: main
Conversation
setup.py
@@ -117,6 +117,7 @@
 "numpy",
 "parameterized",
 "peft>=0.15.0",
+# "peft>=0.16.1",
To be changed when peft has the release.
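Since the hard pin to the new peft release is commented out for now, code paths that depend on it presumably need a runtime guard. A minimal sketch of such a version gate (the function name and threshold are illustrative, not from this PR):

```python
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version


def peft_supports_new_injection(min_version: str = "0.17.0") -> bool:
    """Return True if the installed peft is at least `min_version`.

    Hypothetical helper: gates the new LoRA config injection path on the
    peft release that ships huggingface/peft#2637.
    """
    try:
        return Version(version("peft")) >= Version(min_version)
    except PackageNotFoundError:
        # peft is not installed; fall back to the old behavior.
        return False
```

Callers could then branch on `peft_supports_new_injection()` and fall back to the legacy injection logic on older peft installs.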
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Now that PEFT 0.17.0 is out, we should be good to review this PR. Gentle ping @BenjaminBossan
Looks good from my side, thanks.
@DN6 could you give this a check please? This solves some existing issues around complicated logic of deriving some
What does this PR do?
Fixes #11874
Relies on huggingface/peft#2637
Supersedes #11911
TODO: Run some integration tests before merging.
Edit: The integration tests for SDXL and Flux have been run and pass.