MPS: Cannot add LoRA to Unet (LoftQ) #1575
Comments
actually, i'm not sure it would even work with MPS once that's set up, due to bitsandbytes-foundation/bitsandbytes#252. maybe just blocking the usage on mps systems is needed until the plumbing is there for it
Hmm, bitsandbytes-foundation/bitsandbytes#252 is probably the reason why we default to
I don't have a Mac to test this, but it does indeed look like it wouldn't work. Note that you can download some LoftQ-initialized models from the Hub if you don't want to run this process yourself, but of course using them still requires bnb to work. In general, I think it's a good reminder for us to be more careful about hard-coding the device in our code, as the number of viable hardware accelerators continues to climb.
for simpletuner my workaround is to use gaussian init instead of loftq on mps boxes. obviously it would be nice to have feature parity across the board, but we just ain't there yet
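The workaround above boils down to picking the LoRA init method per device. A minimal sketch (the helper name `choose_init` is made up; in peft this choice maps to the `init_lora_weights` argument of `LoraConfig`, which accepts values like `"gaussian"` and `"loftq"`):

```python
def choose_init(device: str) -> str:
    """Fall back from LoftQ to Gaussian LoRA init on devices where
    bitsandbytes is unavailable (e.g. Apple's MPS backend).

    LoftQ relies on bnb quantization, which is effectively CUDA-only
    today, so any non-CUDA device gets the Gaussian init instead.
    """
    return "loftq" if device == "cuda" else "gaussian"
```

So `choose_init("mps")` would select `"gaussian"`, keeping training usable on Apple Silicon at the cost of losing the LoftQ initialization.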
Hi @bghira
kinda sad to put a dependency on such a non-portable library; it makes the feature NVIDIA-only.
System Info
peft v0.9.0
accelerate v0.26.1
apple m3 max
Who can help?
@sayakpaul
Information
Tasks
examples folder
Reproduction
Expected behavior
The LoftQ initialization code defaults to "cuda", but it should first check whether MPS is available instead.
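The device-selection behavior the reporter asks for could be sketched like this (the function name `pick_device` is hypothetical; in real code the availability flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, and are passed in here only so the logic is testable without GPU hardware):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, then MPS (Apple Silicon), then CPU.

    In PyTorch the flags would come from torch.cuda.is_available()
    and torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```

On an M-series Mac without CUDA, `pick_device(False, True)` returns `"mps"` instead of unconditionally assuming `"cuda"`.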