
LoRA has no effect when running inference with FluxPipeline.from_pretrained() #9361

Closed
@DimitriosKakouris

Description


Hello, I trained a LoRA with the ostris/ai-toolkit repo, which I believe is based mostly on the kohya_ss repo. The LoRA was saved in safetensors format. Running it with the sample inference code below produced warnings on most of the LoRA keys, and although inference completed, the output was a black image with a very small file size, around 1.3 kB:

import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-dev pipeline, then attach the LoRA weights.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)
pipe.load_lora_weights(
    "TheLastBen/The_Hound",
    weight_name="sandor_clegane_single_layer.safetensors",
)

prompt = "sandor clegane drinking in a pub"

image = pipe(
    prompt=prompt,
    num_inference_steps=30,
    width=1024,
    height=1024,
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
image.save("./fluxlora/flux-lora.png")

To work around the incompatible LoRA keys, I used the convert_flux_lora.py script provided by the kohya_ss repo, which converts its format to the diffusers format. The warnings disappeared, and the resulting image followed the prompt, but without the face of "sandor clegane". It is as if pipe.load_lora_weights() has no effect at all. Can you help explain why this might be happening?
