LoRA support #206
apple team, please add it!!
This repo has some models converted to Core ML after a LoRA was merged into a base model. Not the real thing, but a good bit of it: https://huggingface.co/jrrjrr/LoRA-Merged-CoreML-Models. Merging was done with the Super Merger extension for Automatic1111. The Core ML conversion included a VAEEncoder for image2image, but not the ControlledUnet for ControlNet use, and the models are "bundled for Swift". You could just as easily convert with the ControlledUnet added and/or skip the bundle step for use with a different pipeline.
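For readers unfamiliar with what "merging" a LoRA actually does (an editorial aside, not part of the comment above): a LoRA stores a low-rank weight delta for each adapted layer, and merging folds that delta into the base weights so downstream converters see one ordinary checkpoint. A toy numpy sketch of the arithmetic, with all sizes and names made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 8, 8, 2                    # toy sizes; real UNet layers are far larger
W_base = rng.standard_normal((d_out, d_in))    # base model weight for one layer
lora_up = rng.standard_normal((d_out, rank))   # LoRA "up" matrix (often called B)
lora_down = rng.standard_normal((rank, d_in))  # LoRA "down" matrix (often called A)
alpha = 0.75                                   # merge strength

# Merging: W' = W + alpha * (B @ A). After this, the LoRA file is no longer needed.
W_merged = W_base + alpha * (lora_up @ lora_down)

# The delta is rank-limited: storing (B, A) costs rank * (d_out + d_in) numbers
# instead of d_out * d_in for a full copy of the layer's weights.
assert np.linalg.matrix_rank(W_merged - W_base) <= rank
```

This is why merged Core ML models work but are full-size: the small delta has been baked into every affected layer.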
We need LoRA 🥹
I have a feeling that SD-XL is capturing everyone's attention right now. LoRA support probably won't happen until SD-XL is all figured out, but that seems to be happening quickly. Hopefully that is out of the way before Sonoma and the full ml-stable-diffusion 1.0.0 grab the spotlight and LoRA gets bumped again.
Hey @jrittvo, thanks so much for the link!
Hi again @jrittvo |
The conversion from
If you give it a shot and get stuck, someone at the Mochi Diffusion Discord can help you:
You can also drop a specific request (or requests) at my LoRA-Merged-CoreML-Models repo and I'll run it (or them) for you, usually within a day or two.
Hello everyone! I just added the option to merge LoRAs before conversion to Guernika Model Converter; basically, it takes the LoRAs and merges them using this script by Kohya.
@GuiyeC that's awesome, thanks! At the moment I've stopped experimenting with LoRAs, as it's crucial for us to "hot-swap" them: e.g., have one SD model (~1 GB) and multiple LoRA models (~30 MB each), and pick which one to use at runtime. Baking LoRAs into the SD model works great for testing, but shipping a separate heavy model for each LoRA in the project sucks, so I'm still waiting for some info on official LoRA support.
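The hot-swap idea above comes down to simple weight arithmetic (an editorial aside; this is not the Core ML or diffusers API, just a toy numpy sketch with made-up names): keep one copy of the base weights, fuse an adapter's delta in before generating, and subtract it back out before switching to another adapter.

```python
import numpy as np

rng = np.random.default_rng(1)
W_base = rng.standard_normal((8, 8))  # stands in for the single ~1 GB base model

# Two rank-2 adapters standing in for separate ~30 MB LoRA files.
adapters = {
    "style_a": (rng.standard_normal((8, 2)), rng.standard_normal((2, 8))),
    "style_b": (rng.standard_normal((8, 2)), rng.standard_normal((2, 8))),
}

def fuse(W, name, scale=1.0):
    # Fold the adapter's low-rank delta into the live weights.
    up, down = adapters[name]
    return W + scale * (up @ down)

def unfuse(W, name, scale=1.0):
    # Exact inverse of fuse(): restores the previous weights.
    up, down = adapters[name]
    return W - scale * (up @ down)

# Hot-swap: fuse A, generate, unfuse A, fuse B -- the base is stored only once.
W = fuse(W_base, "style_a")
W = unfuse(W, "style_a")
assert np.allclose(W, W_base)  # base recovered, up to float rounding
W = fuse(W, "style_b")
```

This is trivial for in-memory PyTorch weights; the difficulty with Core ML is that converted models carry compiled weights, which is why hot-swapping has needed explicit framework support rather than a user-side trick.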
Is there any progress now? The XL model size has increased, and the demand for LoRA has become more urgent. 🥶
Hi. I am trying to convert an LCM-LoRA-applied model, but it is failing. Here is what I did:
```python
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
pipe.fuse_lora()
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
```
```shell
mkdir -p mlmodels
pipenv run python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version runwayml/stable-diffusion-v1-5 \
    --attention-implementation ORIGINAL \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    --convert-vae-encoder \
    --convert-safety-checker \
    --quantize-nbits 6 \
    --bundle-resources-for-swift-cli \
    -o mlmodels
```
```shell
#!/bin/zsh
prompt="rabbit on moon, high resolution"
pipenv run python -m python_coreml_stable_diffusion.pipeline \
    --model-version runwayml/stable-diffusion-v1-5 \
    --scheduler LCM \
    --prompt "${prompt}" \
    -i mlmodels \
    -o . \
    --compute-unit ALL \
    --seed 42 \
    --num-inference-steps 8
```

The image I got is strange. The image I expected, which Diffusers actually generates under the same conditions, is below. I suspect I am missing something, but I have no idea what, since I am new to generative AI.
So, what's the progress in this area? Is there any approach to make LoRA models "hot-swappable" rather than pre-baking them?
Has anyone figured out how to use the newly introduced Multifunction Models? Apple engineers even demonstrated it with a fine-tuned version of SDXL at WWDC 2024. Any guidance or support would be greatly appreciated! |
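For anyone exploring that route (an editorial aside): coremltools 8 documents a `MultiFunctionDescriptor` utility for packaging several functions, e.g. a base model plus adapted variants, into one `.mlpackage`, with shared weights deduplicated across functions. A sketch based on my reading of the coremltools docs; the input `.mlpackage` paths are placeholders, and I have not verified this against the WWDC 2024 demo:

```python
def build_multifunction_package(out_path="combined.mlpackage"):
    """Package a base model and an adapted variant as one multifunction model.

    Sketch only: the source .mlpackage files are hypothetical placeholders,
    so this function is defined but not executed here.
    """
    import coremltools as ct

    desc = ct.utils.MultiFunctionDescriptor()
    # Each source model's "main" function becomes a named function in the bundle.
    desc.add_function("base.mlpackage", src_function_name="main",
                      target_function_name="base")
    desc.add_function("adapted.mlpackage", src_function_name="main",
                      target_function_name="adapted")
    desc.default_function_name = "base"
    ct.utils.save_multifunction(desc, out_path)
    return out_path
```

The appeal for the LoRA use case is that weights common to both functions are stored once, so each extra adapted function adds roughly the size of its delta rather than a full model copy.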
Are there any plans to support LoRA?
If so, I assume the `.safetensors` file will need to be converted along with the model?