Description
I am using the Vulkan backend because ROCm isn't an option for gfx1103
This program is very good because I don't have to deal with the Python BS,
but so far I have only been able to make a single LoRA model work once, and I haven't been able to make it work again.
It's completely inconsistent, and it's not related to size or model: it happens with both SD 1.5 and SDXL, from 10 MB LoRAs to 300 MB ones.
I don't believe it's RAM, because I also tried some big checkpoint models, tried CPU mode so that only system RAM is used, and tweaked all the settings, but LoRAs simply don't work.
I believe I'm using the syntax correctly: `<lora:model_name:0.6>` (full command below),
but it just crashes; sometimes it gives a reason, sometimes it doesn't,
so I don't really know what I need to do to get LoRAs working.
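For reference, this is the general shape of the command I'm running (a sketch: the filenames and prompt are placeholders, and the flags are the ones listed in the sd CLI's `--help`):

```sh
# Single-LoRA run; <lora:model_name:0.6> should resolve to
# ./loras/model_name.safetensors inside --lora-model-dir.
./sd -m ./models/sd15_checkpoint.safetensors \
     --lora-model-dir ./loras \
     -p "a photo of a cat <lora:model_name:0.6>" \
     -o ./output.png \
     -v   # verbose, to capture whatever reason it prints before crashing
```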
Update: it's definitely not RAM.
So, I decided to start from zero and made some discoveries.
I was able to use a LoRA even on SDXL models with a basic run (see the command below), so it was probably some other option causing the crash.
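By "basic run" I mean leaving every option at its default, along these lines (again a sketch with placeholder filenames):

```sh
# Bare-bones run: checkpoint + LoRA dir + prompt, nothing else.
# This worked even with an SDXL checkpoint.
./sd -m ./models/sdxl_checkpoint.safetensors \
     --lora-model-dir ./loras \
     -p "a landscape <lora:model_name:0.6>" \
     -o ./basic.png
```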
With all my tests I found out that this program is extremely finicky. I got similar results on SD 1.5, SDXL, and turbo models:
- The sampling method can cause a crash (this one makes sense).
- The step count can cause a crash: tested on a turbo model with the default 8 steps and then 20 steps, and for some reason it didn't crash with 20 steps (see the example after this list).
- LoRA combinations: if the LoRA models can work together, everything works perfectly; otherwise everything crashes.
  - For example, with an SD 1.5 checkpoint, some SD 1.5 LoRAs always crash, some work but only alone, and some work together with other LoRAs.
- So far, prompt size has not caused a crash; I tried a massive prompt and it worked perfectly well.
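The way I tested this was to change one option at a time against the working baseline, roughly like this (a sketch; the sampler name and step counts are just the values mentioned above, filenames are placeholders):

```sh
# Same baseline, one flag changed per run (turbo model).
./sd -m ./models/sd_turbo.safetensors --lora-model-dir ./loras \
     -p "a portrait <lora:model_name:0.6>" --steps 8 -o ./steps8.png    # crashed
./sd -m ./models/sd_turbo.safetensors --lora-model-dir ./loras \
     -p "a portrait <lora:model_name:0.6>" --steps 20 -o ./steps20.png  # worked
./sd -m ./models/sd_turbo.safetensors --lora-model-dir ./loras \
     -p "a portrait <lora:model_name:0.6>" \
     --sampling-method euler_a -o ./sampler.png  # some samplers crash too
```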