Closed
Description
Hello, I noticed that the recent version only trains the MLP layers of the text encoders, whereas existing LoRAs, and LoRAs trained with the GUI version of kohya-ss (which uses an older version), seem to train all layers. Is this a mistake on my side? I couldn't find any option to control it.
This is what usually gets trained:
lora_te1_text_model_encoder_layers_0_mlp_fc1.alpha
lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_down.weight
lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_up.weight
lora_te1_text_model_encoder_layers_0_mlp_fc2.alpha
lora_te1_text_model_encoder_layers_0_mlp_fc2.lora_down.weight
lora_te1_text_model_encoder_layers_0_mlp_fc2.lora_up.weight
lora_te1_text_model_encoder_layers_0_self_attn_k_proj.alpha
lora_te1_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight
lora_te1_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight
lora_te1_text_model_encoder_layers_0_self_attn_out_proj.alpha
lora_te1_text_model_encoder_layers_0_self_attn_out_proj.lora_down.weight
lora_te1_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight
lora_te1_text_model_encoder_layers_0_self_attn_q_proj.alpha
lora_te1_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight
lora_te1_text_model_encoder_layers_0_self_attn_q_proj.lora_up.weight
lora_te1_text_model_encoder_layers_0_self_attn_v_proj.alpha
lora_te1_text_model_encoder_layers_0_self_attn_v_proj.lora_down.weight
lora_te1_text_model_encoder_layers_0_self_attn_v_proj.lora_up.weight
This is what I see when training with the newest version of sdxl_train_network.py:
lora_te1_text_model_encoder_layers_0_mlp_fc1.alpha
lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_down.weight
lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_up.weight
lora_te1_text_model_encoder_layers_0_mlp_fc2.alpha
lora_te1_text_model_encoder_layers_0_mlp_fc2.lora_down.weight
lora_te1_text_model_encoder_layers_0_mlp_fc2.lora_up.weight
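For anyone wanting to check their own files, a quick way to compare is to group the state-dict keys by the module they target. Below is a minimal sketch (the `trained_te_modules` helper is hypothetical, not part of sd-scripts) that assumes key names follow the `lora_te{N}_text_model_encoder_layers_{i}_{module}` format shown above:

```python
import re

def trained_te_modules(keys):
    """Return the sorted set of text-encoder modules targeted by LoRA keys.

    Keys look like:
    'lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_down.weight'
    """
    pattern = re.compile(r"lora_te\d+_text_model_encoder_layers_\d+_(.+?)\.")
    modules = set()
    for key in keys:
        m = pattern.match(key)
        if m:
            modules.add(m.group(1))
    return sorted(modules)

# Keys from a LoRA that only trained the MLP blocks (as in the list above):
keys = [
    "lora_te1_text_model_encoder_layers_0_mlp_fc1.alpha",
    "lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_down.weight",
    "lora_te1_text_model_encoder_layers_0_mlp_fc2.lora_up.weight",
]
print(trained_te_modules(keys))  # ['mlp_fc1', 'mlp_fc2']
```

A fully trained LoRA should also report `self_attn_k_proj`, `self_attn_out_proj`, `self_attn_q_proj`, and `self_attn_v_proj`; if only `mlp_fc1`/`mlp_fc2` show up, the attention projections were not trained.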