LoRA Still Influencing Output Despite Setting "scale" to 0 #4751

Closed
@Linaqruf

Description

Hi, thanks for the great job implementing the Kohya LoRA format in diffusers. However, I'm running into an issue where the LoRA scale setting doesn't seem to work as described in the documentation.

According to the official documentation, setting the scale value via cross_attention_kwargs = {'scale': 0} should completely nullify the effect of the LoRA, so the model relies only on its base weights.

💡 A scale value of 0 is the same as not using your LoRA weights and you’re only using the base model weights, and a scale value of 1 means you’re only using the fully finetuned LoRA weights. Values between 0 and 1 interpolates between the two weights.

So I assumed this works like interpolated weight merging, similar to the Kohya, Auto1111, or ComfyUI implementations. However, in my case, even after setting cross_attention_kwargs = {'scale': 0}, the LoRA still appears to influence the results, which contradicts what the documentation states.
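For reference, here is a minimal sketch of how I set things up (the base model ID, LoRA path, prompt, and seed below are placeholders, not the exact values from my notebook):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder base model; my notebook uses a different checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a Kohya-format LoRA checkpoint (placeholder path).
pipe.load_lora_weights("path/to/lora.safetensors")

prompt = "masterpiece, best quality, 1girl"

# Fixed seed so outputs at different scales can be compared directly.
generator = torch.Generator("cuda").manual_seed(0)

# Per the docs, scale=0 should reproduce the base model's output exactly.
image = pipe(
    prompt,
    cross_attention_kwargs={"scale": 0.0},
    generator=generator,
).images[0]
image.save("scale_0.png")
```

Repeating the same call with scale 1, 0.5, 0, and with no LoRA loaded at all produces the comparison images below; with scale 0 the output should match the no-LoRA run, but it doesn't.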

[Attached comparison images: scale 1, scale 0.5, scale 0, and no LoRA]

Is this a known issue, or am I possibly misinterpreting how to set up the feature correctly? I'd appreciate any guidance or clarification.

I wrote a notebook to reproduce this issue here: https://colab.research.google.com/drive/1P_PVUoyrPzb15EK9G1FFZIH3YnpwnPaC
