
LoRA support #206

Open
SaladDays831 opened this issue Jul 11, 2023 · 13 comments

@SaladDays831

Are there any plans to support LoRA?
If so, I assume the .safetensors file will need to be converted with the model?

@treksis

treksis commented Jul 23, 2023

Apple team, please add it!!

@jrittvo

jrittvo commented Jul 23, 2023

This repo has some models converted to Core ML after a LoRA was merged into a base model. Not the real thing, but a good bit of it . . .

https://huggingface.co/jrrjrr/LoRA-Merged-CoreML-Models

Merging was done with the Super Merger extension of Automatic1111. The Core ML conversion included a VAEEncoder for image2image, but not the ControlledUnet for ControlNet use, and they are "bundled for Swift". You could just as easily convert with the ControlledUnet added and/or skip the bundle step for use with a different pipeline.

@jiangdi0924
Contributor

jiangdi0924 commented Jul 24, 2023

We need LoRA 🥹

@jrittvo

jrittvo commented Jul 24, 2023

I have a feeling that SD-XL is capturing everyone's attention right now. LoRA probably won't happen now until SD-XL is all figured out, but that seems to be happening quickly. Hopefully that is out of the way before Sonoma and full ml-s-d 1.0.0 grab the spotlight and LoRA gets bumped again.

@SaladDays831
Author

Hey @jrittvo thanks so much for the link!
Didn't know we could do that with LoRAs, gonna test a couple of your models now

@SaladDays831
Author

Hi again @jrittvo
The models work great, thanks again for this insight.
Would you mind sharing some info on how you converted the output .safetensors model (from the SuperMerger extension) to CoreML? I assume just uploading the .safetensors file to HuggingFace and using it with the command from "Converting Models to Core ML" won't work as it needs the unet, scheduler, and other folders.

@jrittvo

jrittvo commented Jul 31, 2023

The conversion from .safetensors (or .ckpt) to Core ML is pretty straightforward once you have the environment for it set up. Getting that environment set up is not so straightforward, unfortunately. There is a good guide here:

https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-Stable-Diffusion-models-to-Core-ML

If you give it a shot and get stuck, someone at the Mochi Diffusion Discord can help you:

https://discord.gg/x2kartzxGv

You can also drop a specific request (or requests) at my LoRA-Merged-CoreML-Models repo and I'll run it (or them) for you, usually within a day or two.
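
For reference, one way to get a merged .safetensors into the folder layout that torch2coreml expects is to load it with diffusers first. This is just a rough sketch, not the exact steps from the guide above; the file and folder names are placeholders:

# Sketch: turn a single merged .safetensors checkpoint into a Diffusers-style
# folder that python_coreml_stable_diffusion.torch2coreml can consume.
# "merged-model.safetensors" and "merged-model-diffusers" are placeholder names.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file("merged-model.safetensors")
# Writes unet/, text_encoder/, vae/, scheduler/, tokenizer/, ... subfolders
pipe.save_pretrained("merged-model-diffusers")

# torch2coreml should then accept the local folder in place of a Hub id, e.g.:
#   python -m python_coreml_stable_diffusion.torch2coreml \
#       --model-version ./merged-model-diffusers \
#       --convert-unet --convert-text-encoder --convert-vae-decoder \
#       --bundle-resources-for-swift-cli -o mlmodels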

@GuiyeC
Contributor

GuiyeC commented Aug 12, 2023

Hello everyone! I just added the option to merge LoRAs before conversion in Guernika Model Converter. Basically, it takes the LoRAs and merges them using this script by Kohya.
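
For anyone who wants to do the merge by hand before conversion, a minimal diffusers-based sketch is below. Note that this is not the Kohya script Guernika uses, and the LoRA file and output folder names are placeholders:

# Rough sketch: bake a LoRA into a base model before Core ML conversion.
# "path/to/lora_dir" and "some-lora.safetensors" are placeholder names.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("path/to/lora_dir", weight_name="some-lora.safetensors")
pipe.fuse_lora()                                # merge the LoRA into the UNet/text encoder weights
pipe.save_pretrained("merged-model-diffusers")  # then convert this folder with torch2coreml as usual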

@SaladDays831
Author

@GuiyeC that's awesome, thanks!

For the moment I've stopped experimenting with LoRAs, as it's crucial for us to "hot-swap" them. E.g., have one SD model (~1 GB) and multiple LoRA models (~30 MB each), and pick which one to use at runtime. Baking LoRAs into the SD model works great for testing, but shipping a separate heavy model for each LoRA in the project sucks, so I'm still waiting for some info on official LoRA support.
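
For comparison, hot-swapping is what diffusers already does on the PyTorch side; as far as I know there is no equivalent in the Core ML pipeline today. A sketch of the desired flow (the LoRA repos and adapter names below are made up):

# Sketch of runtime LoRA swapping as diffusers does it (PyTorch, not Core ML).
# "user/lora-style-a" and "user/lora-style-b" are placeholder LoRA repos.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# One ~1 GB base model plus several small LoRA files...
pipe.load_lora_weights("user/lora-style-a", adapter_name="style_a")
pipe.load_lora_weights("user/lora-style-b", adapter_name="style_b")

# ...and pick one (or a weighted mix) per generation, without re-baking the model.
pipe.set_adapters(["style_a"], adapter_weights=[1.0])
image = pipe("a rabbit on the moon").images[0]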

@jiangdi0924
Contributor

Is there any progress on this? The XL models are even bigger, so the need for LoRA has become more urgent. 🥶

@y-ich

y-ich commented Dec 25, 2023

Hi.

I am trying to convert an LCM-LoRA-applied model, but it is failing.
Could someone advise me?

Here is what I did:

  1. Added the code below to the get_pipeline function in torch2coreml.py (you also need to add an import for LCMScheduler):
    # Load the LCM-LoRA weights and fuse them into the base pipeline
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
    pipe.fuse_lora()
    # Replace the default scheduler with the LCM scheduler
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
  2. Converted the model with the script below:
mkdir -p mlmodels
pipenv run python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version runwayml/stable-diffusion-v1-5 \
    --attention-implementation ORIGINAL \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    --convert-vae-encoder \
    --convert-safety-checker \
    --quantize-nbits 6 \
    --bundle-resources-for-swift-cli \
    -o mlmodels
  3. Added code for the LCMScheduler options in pipeline.py.

  4. Generated an image with the script below:

#!/bin/zsh

prompt="rabbit on moon, high resolution"

pipenv run python -m python_coreml_stable_diffusion.pipeline \
    --model-version runwayml/stable-diffusion-v1-5 \
    --scheduler LCM \
    --prompt "${prompt}" \
    -i mlmodels \
    -o . \
    --compute-unit ALL \
    --seed 42 \
    --num-inference-steps 8

The image I got looks strange:

[attached image: randomSeed_42_computeUnit_ALL_modelVersion_runwayml_stable-diffusion-v1-5_customScheduler_LCM_numInferenceSteps8]

The image I expected, which was actually generated with Diffusers under the same conditions, is below.

[attached image: expected result generated with Diffusers]

I suspect I am missing something, but I have no idea what, since I am new to generative AI.
Any advice would be appreciated!
Thanks.

@asfuon

asfuon commented Aug 21, 2024

So, what's the progress in this area? Is there any approach to make LoRA models "hot-swappable" rather than pre-baking them?

@jb-apps

jb-apps commented Oct 9, 2024

Has anyone figured out how to use the newly introduced Multifunction Models? Apple engineers even demonstrated them with a fine-tuned version of SDXL at WWDC 2024.

Any guidance or support would be greatly appreciated!
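
In case it helps anyone experimenting: as I understand it, the multifunction workflow in coremltools 8 looks roughly like the sketch below, where separately converted variants of a model (e.g. a base UNet and LoRA-fine-tuned versions) are packed into a single .mlpackage with shared weights deduplicated. The package paths and function names here are placeholders, and I haven't verified this against the WWDC 2024 sample:

# Rough sketch of building a multifunction model with coremltools 8.
# The .mlpackage paths and function names are placeholders.
import coremltools as ct

desc = ct.utils.MultiFunctionDescriptor()

# Each source .mlpackage contributes its "main" function under a new name.
desc.add_function("UnetBase.mlpackage", src_function_name="main", target_function_name="base")
desc.add_function("UnetAdapterA.mlpackage", src_function_name="main", target_function_name="adapter_a")
desc.default_function_name = "base"

ct.utils.save_multifunction(desc, "UnetMultifunction.mlpackage")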
