Wrong model type when using quantized t5xxl, clip and vae #374
you mean this?
this will display different things depending on the model and backend you are using. Also, it is always helpful to share the command you used :)

I used this:

```shell
./bin/sd --diffusion-model /root/stable-diffusion.cpp/build/models/flux1-schnell-q2_k.gguf \
  --cfg-scale 1 --steps 1 --seed 0 --sampling-method euler -H 320 -W 320 \
  -p "Jedi cat holding a light saber, cyberpunk sci-fi, 16k resolution, sharp focus, hd" \
  --vae /root/stable-diffusion.cpp/build/vae/ae-f16.gguf --vae-on-cpu --threads 8 \
  --clip_l /root/stable-diffusion.cpp/build/models/clip_l-q8_0.gguf \
  --t5xxl /root/stable-diffusion.cpp/build/models/t5xxl_q2_k.gguf --clip-on-cpu
```
On CUDA I get:

Btw, you seem to be trying very hard.
Are you using the latest sd.cpp?

Yeah, I think I am.
When I load a q3 t5xxl and clip, sd.cpp reports them as f16 and the vae as f32. These are wrong, and it causes termux to crash. The flux model is reported correctly: if I use flux q2, it shows q2. Please fix.

Another issue: flux prompt coherence is terrible. With flux q3, when I prompt "jedi cat" it just gave me the word "cat", and when I prompt "pretty woman holding a rose" it just showed a picture of a rose. Is this because I'm using a q3 t5xxl? I can't use the fp8 t5 because it crashes termux, but it works fine in ComfyUI. We need more memory optimization, like split sigmas or split attention.
Update: I tried it again. This time it reported the flux and clip models correctly, but for t5xxl it says q8_0 when it's actually q2_k.
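One way to sanity-check what a `.gguf` file actually contains, independent of what sd.cpp prints, is to read the file header directly. Below is a minimal stdlib-only sketch, assuming the documented GGUF layout (4-byte magic `GGUF`, little-endian `uint32` version, `uint64` tensor count, `uint64` metadata-KV count); it writes a synthetic header and parses it back, since the exact per-tensor quantization types require walking the full metadata (e.g. with the `gguf` Python package's reader, not shown here).

```python
import os
import struct
import tempfile

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version, tensor count, KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        version, = struct.unpack("<I", f.read(4))     # format version
        n_tensors, = struct.unpack("<Q", f.read(8))   # number of tensors
        n_kv, = struct.unpack("<Q", f.read(8))        # number of metadata key/value pairs
    return version, n_tensors, n_kv

# Build a synthetic header (values are made up for the demo) to show the round trip.
path = os.path.join(tempfile.mkdtemp(), "dummy.gguf")
with open(path, "wb") as f:
    f.write(b"GGUF")
    f.write(struct.pack("<I", 3))    # version 3
    f.write(struct.pack("<Q", 291))  # tensor count
    f.write(struct.pack("<Q", 19))   # metadata KV count

print(read_gguf_header(path))
```

On a real model file, a mismatch between the quantization type encoded in the tensor metadata and what the loader logs would point at a reporting bug rather than a wrong file.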