
Error in generating images: 'Conv2d' object has no attribute 'delete_adapter' #337

@andrewjswan

Description

Generating images with LCM-LoRA enabled fails with `Error in generating images: 'Conv2d' object has no attribute 'delete_adapter'`. The settings in use and the full traceback follow:

{'clip_skip': 1,
 'controlnet': None,
 'diffusion_task': 'text_to_image',
 'dirs': {'controlnet': '/app/controlnet_models', 'lora': '/app/lora_models'},
 'gguf_model': {'clip_path': None,
                'diffusion_path': None,
                'gguf_models': '/app/models/gguf',
                't5xxl_path': None,
                'vae_path': None},
 'guidance_scale': 1,
 'image_height': 512,
 'image_width': 512,
 'inference_steps': 10,
 'init_image': None,
 'lcm_lora': {'base_model_id': 'Fictiverse/Stable_Diffusion_PaperCut_Model',
              'lcm_lora_id': 'latent-consistency/lcm-lora-sdxl'},
 'lcm_model_id': 'SimianLuo/LCM_Dreamshaper_v7',
 'lora': {'enabled': False,
          'fuse': True,
          'models_dir': '/app/lora_models',
          'path': '',
          'weight': 0.5},
 'negative_prompt': '',
 'number_of_images': 1,
 'openvino_lcm_model_id': 'rupeshs/sdxl-turbo-openvino-int8',
 'prompt': 'A very cute, happy and playful Mythological symbol of the zodiac '
           'sign Virgo, looking amazed at the starry sky. whimsical '
           'illustration style, sharp focus, high detail, crisp image',
 'rebuild_pipeline': False,
 'seed': 123123,
 'strength': 0.6,
 'token_merging': 0.0,
 'use_gguf_model': False,
 'use_lcm_lora': True,
 'use_offline_model': False,
 'use_openvino': False,
 'use_safety_checker': False,
 'use_seed': False,
 'use_tiny_auto_encoder': False}
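
Note the pairing in `lcm_lora` above: the base model `Fictiverse/Stable_Diffusion_PaperCut_Model` is an SD 1.5 checkpoint, while `latent-consistency/lcm-lora-sdxl` is the SDXL LCM-LoRA. The size mismatches in the traceback below appear to follow from that pairing; see the sketch after the traceback.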
***** Init LCM-LoRA pipeline - Fictiverse/Stable_Diffusion_PaperCut_Model *****

Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 11.48it/s]
Error in generating images: 'Conv2d' object has no attribute 'delete_adapter'
Traceback (most recent call last):
  File "/app/env/lib/python3.11/site-packages/diffusers/loaders/peft.py", line 352, in load_lora_adapter
    incompatible_keys = set_peft_model_state_dict(self, state_dict, adapter_name, **peft_kwargs)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/env/lib/python3.11/site-packages/peft/utils/save_and_load.py", line 158, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2624, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
	size mismatch for down_blocks.1.attentions.0.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
	size mismatch for down_blocks.1.attentions.0.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.1.attentions.0.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
	size mismatch for down_blocks.1.attentions.0.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for down_blocks.1.attentions.1.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
	size mismatch for down_blocks.1.attentions.1.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.1.attentions.1.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
	size mismatch for down_blocks.1.attentions.1.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for down_blocks.2.attentions.0.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for down_blocks.2.attentions.0.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.2.attentions.0.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for down_blocks.2.attentions.0.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for down_blocks.2.attentions.1.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for down_blocks.2.attentions.1.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for down_blocks.2.attentions.1.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for down_blocks.2.attentions.1.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.0.resnets.2.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
	size mismatch for up_blocks.0.resnets.2.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
	size mismatch for up_blocks.1.attentions.0.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.1.attentions.0.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.lora_B.lcm.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
	size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.0.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.1.attentions.0.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.attentions.1.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.1.attentions.1.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.lora_B.lcm.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
	size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.1.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.1.attentions.1.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.attentions.2.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.1.attentions.2.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.lora_B.lcm.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
	size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.attentions.2.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.1.attentions.2.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.0.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
	size mismatch for up_blocks.1.resnets.0.conv1.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.0.time_emb_proj.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.resnets.0.conv2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
	size mismatch for up_blocks.1.resnets.0.conv2.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.0.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
	size mismatch for up_blocks.1.resnets.0.conv_shortcut.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.1.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
	size mismatch for up_blocks.1.resnets.1.conv1.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.1.time_emb_proj.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.resnets.1.conv2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
	size mismatch for up_blocks.1.resnets.1.conv2.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.1.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
	size mismatch for up_blocks.1.resnets.1.conv_shortcut.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.2.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1920, 3, 3]).
	size mismatch for up_blocks.1.resnets.2.conv1.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.2.time_emb_proj.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
	size mismatch for up_blocks.1.resnets.2.conv2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
	size mismatch for up_blocks.1.resnets.2.conv2.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.resnets.2.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1920, 1, 1]).
	size mismatch for up_blocks.1.resnets.2.conv_shortcut.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.1.upsamplers.0.conv.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
	size mismatch for up_blocks.1.upsamplers.0.conv.lora_B.lcm.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.0.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1920, 3, 3]).
	size mismatch for up_blocks.2.resnets.0.conv1.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.0.time_emb_proj.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
	size mismatch for up_blocks.2.resnets.0.conv2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
	size mismatch for up_blocks.2.resnets.0.conv2.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.0.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1920, 1, 1]).
	size mismatch for up_blocks.2.resnets.0.conv_shortcut.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.1.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
	size mismatch for up_blocks.2.resnets.1.conv1.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.1.time_emb_proj.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
	size mismatch for up_blocks.2.resnets.1.conv2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
	size mismatch for up_blocks.2.resnets.1.conv2.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.1.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for up_blocks.2.resnets.1.conv_shortcut.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.2.conv1.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 960, 3, 3]).
	size mismatch for up_blocks.2.resnets.2.conv1.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.2.time_emb_proj.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
	size mismatch for up_blocks.2.resnets.2.conv2.lora_A.lcm.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
	size mismatch for up_blocks.2.resnets.2.conv2.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for up_blocks.2.resnets.2.conv_shortcut.lora_A.lcm.weight: copying a param with shape torch.Size([64, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 960, 1, 1]).
	size mismatch for up_blocks.2.resnets.2.conv_shortcut.lora_B.lcm.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
	size mismatch for mid_block.attentions.0.proj_in.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for mid_block.attentions.0.proj_in.lora_B.lcm.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
	size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.lcm.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
	size mismatch for mid_block.attentions.0.proj_out.lora_A.lcm.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
	size mismatch for mid_block.attentions.0.proj_out.lora_B.lcm.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/src/context.py", line 57, in generate_text_to_image
    self.lcm_text_to_image.init(
  File "/app/src/backend/lcm_text_to_image.py", line 283, in init
    self.pipeline = get_lcm_lora_pipeline(
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "/app/src/backend/pipelines/lcm_lora.py", line 89, in get_lcm_lora_pipeline
    load_lcm_weights(
  File "/app/src/backend/pipelines/lcm_lora.py", line 33, in load_lcm_weights
    pipeline.load_lora_weights(
  File "/app/env/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py", line 202, in load_lora_weights
    self.load_lora_into_unet(
  File "/app/env/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py", line 406, in load_lora_into_unet
    unet.load_lora_adapter(
  File "/app/env/lib/python3.11/site-packages/diffusers/loaders/peft.py", line 377, in load_lora_adapter
    module.delete_adapter(adapter_name)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/app/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1962, in __getattr__
    raise AttributeError(
AttributeError: 'Conv2d' object has no attribute 'delete_adapter'
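
The size mismatches suggest the root cause: `latent-consistency/lcm-lora-sdxl` is trained for SDXL UNets (cross-attention dim 2048, hence the `[64, 2048]` checkpoint shapes), while `Fictiverse/Stable_Diffusion_PaperCut_Model` is an SD 1.5 model (cross-attention dim 768). The `'Conv2d' object has no attribute 'delete_adapter'` error looks secondary: after the load fails, the cleanup path in diffusers' `load_lora_adapter` calls `delete_adapter` on modules that were never wrapped by PEFT, and a plain `torch.nn.Conv2d` has no such method. Below is a minimal sketch of the presumed fix, pairing the SD 1.5 base model with the matching SD 1.5 LCM-LoRA (`latent-consistency/lcm-lora-sdv1-5`); it assumes a recent diffusers with peft installed and is untested against this exact FastSD CPU build:

```python
# Hedged sketch: load the SD 1.5 LCM-LoRA instead of the SDXL one.
# Model IDs are from the Hugging Face Hub; not a FastSD CPU code path.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Fictiverse/Stable_Diffusion_PaperCut_Model",  # SD 1.5 base, cross-attn dim 768
    torch_dtype=torch.float32,  # float32 for CPU inference
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# The SD 1.5 LCM-LoRA matches the UNet's shapes; lcm-lora-sdxl expects an
# SDXL UNet (cross-attn dim 2048) and triggers the size mismatches above.
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")

image = pipe(
    "PaperCut, zodiac sign Virgo looking amazed at the starry sky",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("virgo.png")
```

For an SDXL base model, `lcm-lora-sdxl` would be the matching adapter instead; the app's LCM-LoRA settings presumably need to keep the base model and LoRA from the same model family.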
