how to use local models? #35
I was able to use a local model that I cloned from HuggingFace by feeding the local path to the model folder in place of the
It seems like the converter is looking for a
@atiorh Please update this whenever possible. Thank you.
Has anyone managed to convert a model from a .ckpt file? I tried this as suggested by @brandonkoch3, but got this error:
@alelordelo, it should be a link to a folder, not a file.
Local doesn't seem to work for me -- wondering if maybe miniconda3 is not supported and I need the larger package? It also seems to expect to find a model_index.json file, when I just have models in .ckpt and .safetensors format in my Stable-diffusion/convert folder.
Where did your model come from, @radfaraf? To your point, it definitely seems like the CoreML tools are looking for a folder structure largely reminiscent of what you'd see on the official repos for Stable Diffusion, like the Stable Diffusion 2.1 main repo. That includes a
I'm still trying to piece this together myself, but I'm wondering whether your .ckpt is something you fine-tuned yourself, or where you got it from. Most likely, the models we have that are just .ckpt files are based directly on "official" models from StabilityAI, so I wonder if we can piece together the folder structure from the official models and get this running. (I tried this on a model I fine-tuned using Dreambooth and almost got it to work, but ran into a conversion error in the CoreML conversion tools related to what I think are floating point value errors, so I'm re-training using FP32 instead of FP16 to see if that works.) I'll keep you updated, but I'm curious if you can share more about the model you're trying to convert.
Here are the ones I tried: https://civitai.com/models/1254/elldreths-dream-mix
I found that someone had a model_index.json file for one of those here: https://huggingface.co/johnslegers/elldrethsdream/tree/main. I put it in the same folder as the model. Now it just wants another json file:
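As the comments above suggest, the converter seems to expect a full diffusers-style folder rather than a lone .ckpt. A quick way to see what is missing from a local folder is a sketch like the following; the list of required names is inferred from the official Stable Diffusion repos discussed above, and `missing_pieces` is a hypothetical helper, not part of any tool:

```python
# Sketch: check whether a local model folder has the diffusers-style layout
# the CoreML converter appears to expect (model_index.json plus component
# subfolders). The REQUIRED names are an assumption based on the official
# Stable Diffusion repos mentioned in this thread.
from pathlib import Path

REQUIRED = ["model_index.json", "unet", "vae", "text_encoder", "tokenizer", "scheduler"]

def missing_pieces(model_dir: str) -> list[str]:
    """Return the names from REQUIRED that are absent in model_dir."""
    root = Path(model_dir)
    return [name for name in REQUIRED if not (root / name).exists()]

# A folder containing only a .ckpt file will report everything as missing.
print(missing_pieces("/Users/Downloads/model_folder"))
```

If this prints a non-empty list, the folder likely needs the remaining pieces copied in from a matching official repo before conversion.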
Thanks @pedx78, but what if I have a custom .ckpt that I trained with Dreambooth on Google Colab, for example?
My general understanding (which is very cursory) is that the CoreML tools are looking for more than just the .ckpt file: they expect the entire folder structure (the unet folder, vae folder, text_encoder folder, etc.).
When you use Dreambooth (depending on which Google Colab you're using), there should be a way to upload the entire session to HuggingFace, rather than just downloading the .ckpt. My thinking is that, if you do that and then use the path to the HuggingFace repo with the CoreML tools, it would work. Your theory of taking the folder structure from the "official" repo and replacing the .ckpt might work, but I don't know that it will.
I'm still testing my above suggestion, which is the closest I've gotten: using Dreambooth on Google Colab, copying the session to HuggingFace, and converting to CoreML. I'm currently trying a Dreambooth session using fp32 instead of fp16, which turned out to be necessary after I hit a conversion error while converting the text_encoder to CoreML. Maybe almost there!
…On Thursday, December 29, 2022, Alexandre Lordelo wrote:
> @alelordelo, it should be a link to a folder, not a file: /Users/Downloads/model_folder
> Thanks @pedx78, but what if I have a custom .ckpt that I trained with Dreambooth on Google Colab, for example? Should I download a Stable Diffusion model and replace the original .ckpt with the custom Dreambooth .ckpt?
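For reference, the conversion the thread keeps coming back to is Apple's `python_coreml_stable_diffusion.torch2coreml` entry point, whose flags come from the ml-stable-diffusion README. A hedged sketch of assembling that command for a local diffusers-style folder (the model path here is hypothetical):

```python
# Sketch: build the Apple ml-stable-diffusion conversion command, pointing
# --model-version at a local folder instead of a Hub identifier, which is
# what commenters above report working. Flags follow the repo's README;
# the model path is a made-up example.
import shlex

def torch2coreml_cmd(model_path: str, out_dir: str) -> list[str]:
    return [
        "python", "-m", "python_coreml_stable_diffusion.torch2coreml",
        "--convert-unet",
        "--convert-text-encoder",
        "--convert-vae-decoder",
        "--model-version", model_path,  # local folder path in place of a Hub id
        "-o", out_dir,
    ]

cmd = torch2coreml_cmd("/Users/me/models/my-dreambooth-model", "./mlpackages")
print(shlex.join(cmd))
```

Running the printed command still requires the full diffusers folder layout discussed earlier in the thread.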
I've been working on converting
Great work! This seems to work really well for creating the folder structure that CoreML needs so that a .ckpt can be converted. I'm able to do the conversion successfully, which is awesome. That said, when I take these steps and try to use the StableDiffusion package, I end up with a completely black image. I assume this is the funkiness you're seeing with the generated image?
I'm able to generate images and they look close to how they should, but the image quality is poor. Perhaps using a different scheduler, such as Euler, might help.
Using this script: https://github.com/Sunbread/Ckpt2Diff. However, the image output was blurry and discolored. I don't know much about the .ckpt file structure and tooling; I'm very new to this. But I found out that any embeddings or manipulations added to a model change the tensor structure. I am thinking of this flow:
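The observation that embeddings or merges change the tensor structure can be checked by listing a checkpoint's state_dict keys by component prefix. A minimal sketch, using a toy key list in place of a real checkpoint (which one would load with `torch.load`); the prefixes shown are the usual Stable Diffusion ones:

```python
# Sketch: count state_dict keys per top-level component prefix, the way one
# might inspect a real .ckpt to spot extra embeddings or merged tensors.
# A toy key list stands in for an actual checkpoint here.
from collections import Counter

def key_histogram(state_dict_keys):
    """Count keys per top-level component prefix (text before the first dot)."""
    return Counter(k.split(".")[0] for k in state_dict_keys)

toy_keys = [
    "model.diffusion_model.input_blocks.0.0.weight",        # UNet
    "first_stage_model.encoder.conv_in.weight",             # VAE
    "cond_stage_model.transformer.text_model.embeddings.position_ids",  # text encoder
]
print(key_histogram(toy_keys))
```

Unexpected prefixes in the histogram of a real checkpoint would point at the added tensors that break a straightforward conversion.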
@brandonkoch3 Your Dreambooth flow looks interesting. I may give it a try over the weekend.
What if you use the workflow suggested by Apple, and replace the Stable Diffusion folder model with the custom .ckpt file?
@alelordelo If you want to convert a model to mps from a .ckpt file, you can do so by following these steps:
Step One: Example usage: Ideally, you will want to use the same inference.yaml as the one you used to train the model. If in doubt, you can find inference.yaml files for v1 and v2 models here, respectively:
Step Two: Inference
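The step-one conversion described above appears to correspond to the diffusers conversion script for original Stable Diffusion checkpoints. A hedged sketch of assembling that command; the script name and flags come from the diffusers repo, while the file paths are hypothetical examples:

```python
# Sketch: build the command for diffusers' ckpt-to-folder conversion script.
# Per the comment above, --original_config_file should ideally be the same
# inference.yaml used to train the model. Paths here are made-up examples.
def ckpt_to_diffusers_cmd(ckpt: str, yaml_config: str, out_dir: str) -> list[str]:
    return [
        "python", "scripts/convert_original_stable_diffusion_to_diffusers.py",
        "--checkpoint_path", ckpt,
        "--original_config_file", yaml_config,
        "--dump_path", out_dir,
    ]

print(" ".join(ckpt_to_diffusers_cmd("model.ckpt", "v1-inference.yaml", "model_folder")))
```

The resulting folder can then be fed to the CoreML converter in place of a Hub identifier.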
Thanks for this great write-up. This is the first I've seen showing the steps for going from something fine-tuned with Dreambooth to being usable directly in Python via CoreML. I took these steps (the only modification I made was having to use the
This is the same issue I'm having when converting either a Dreambooth-trained model or a downloaded .ckpt (converted to CoreML using the steps outlined by @godly-devotion above), where the model does load, but generates abnormal/blank images.
Hello everyone! I have created a small app, Guernika Model Converter, to convert existing local models and CKPT files into CoreML-compatible models. I have been able to convert DreamBooth models and HuggingFace models that were giving me problems when loading from the identifier. This app is essentially a pyinstaller wrapper around modified scripts with a tkinter UI. You can also check the modified scripts in scripts.zip in the same repo.
I tried the converter on two different .ckpt files I found online that work fine in other software, and it gives an error about 20 minutes in. I posted more info here: https://huggingface.co/Guernika/CoreMLStableDiffusion/discussions/2
@radfaraf and I were able to find the problem: it looks like Xcode is mandatory in order to convert models. Once it was installed, he was able to successfully convert both .ckpt files. Thanks again for the help debugging this 🙏
The same occurs to me! Then when I do
While I try
This is strange! After some research, I found that all files downloaded from HuggingFace are stored in
Here are my questions:
Hi @tomy128, I'm not familiar with the ml-stable-diffusion project, but if you are wondering how the HF cache works, here is a piece of documentation you can read: https://huggingface.co/docs/huggingface_hub/guides/manage-cache. If you want to properly download models to the cache, you can use the
(disclaimer: I'm a maintainer of
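For finding a model inside that cache, the documented layout stores each repo under a folder named `models--{org}--{name}` inside the hub cache directory. A small sketch of that naming rule (the repo id is just an example from this thread's context):

```python
# Sketch: compute the cache folder name the huggingface_hub cache uses for
# a given repo id, per the documented "models--{org}--{name}" layout.
def cache_folder_name(repo_id: str) -> str:
    return "models--" + repo_id.replace("/", "--")

print(cache_folder_name("stabilityai/stable-diffusion-2-1"))
```

Looking for that folder under your hub cache directory is a quick way to confirm whether a model was downloaded to the cache.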
Thanks for your reply. I just read the documentation you provided; it helped me understand more about the HF cache. But it doesn't answer my questions, so maybe I need to go deeper into the ml-stable-diffusion code. Thanks again! :-)