
[Bug]: RuntimeError: could not create a primitive descriptor for a matmul primitive when using a negative embedding with --use-ipex #14224

Closed
brpack1968 opened this issue Dec 6, 2023 · 19 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@brpack1968

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Running with --use-ipex command line argument on an ARC A750 GPU

Any negative embedding in the negative prompt box causes image generation to fail with:
RuntimeError: could not create a primitive descriptor for a matmul primitive
Removing the negative embedding allows image generation to proceed normally.
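
A hypothetical torch-level reduction of the failure (my sketch, not a confirmed repro): the traceback below bottoms out in torch.bmm inside CLIP attention, and the fix later referenced in this thread casts the bmm args to a single dtype, which points to mixed-dtype operands. Assuming an Intel XPU device via intel-extension-for-pytorch; shapes and dtypes are illustrative:

```python
import torch
import intel_extension_for_pytorch  # noqa: F401  (registers the "xpu" device)

# Illustrative shapes/dtypes only: the point is the dtype mismatch
# between the two bmm operands, which oneDNN rejects when building
# its matmul primitive.
attn_probs = torch.randn(1, 77, 77, device="xpu", dtype=torch.float16)
value_states = torch.randn(1, 77, 64, device="xpu", dtype=torch.float32)

# On affected builds this raises:
# RuntimeError: could not create a primitive descriptor for a matmul primitive
out = torch.bmm(attn_probs, value_states)
```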

Steps to reproduce the problem

  1. Install via RC or dev branch
  2. Run webui-user with --use-ipex command line argument
  3. Add a negative embedding to the negative prompt block
  4. Press "generate"
  5. Generation fails, remove negative embedding from negative prompt block
  6. Press "generate"
  7. Image generation should proceed normally

What should have happened?

Image generation using negative embeddings should work normally.

Sysinfo

{
"Platform": "Windows-10-10.0.22631-SP0",
"Python": "3.10.6",
"Version": "v1.7.0-RC-4-g120a84bd",
"Commit": "120a84bd2f01ec4489bd12bd68f319798ef30782",
"Script path": "C:\Users\brpac\stable-diffusion-webui",
"Data path": "C:\Users\brpac\stable-diffusion-webui",
"Extensions dir": "C:\Users\brpac\stable-diffusion-webui\extensions",
"Checksum": "f8bd73f3ba335f395744d94822e072c68068966f9351e0329b9f4fe64ee065f8",
"Commandline": [
"launch.py",
"--use-ipex"
],
"Torch env info": {
"torch_version": "2.0.0",
"is_debug_build": "False",
"cuda_compiled_version": null,
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 11 Pro",
"libc_version": "N/A",
"python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.22631-SP0",
"is_cuda_available": "False",
"cuda_runtime_version": null,
"cuda_module_loading": "N/A",
"nvidia_driver_version": null,
"nvidia_gpu_models": null,
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"intel-extension-for-pytorch==2.0.110+gitc6ea20b",
"numpy==1.23.5",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.0.0a0+gite9ebda2",
"torchdiffeq==0.2.3",
"torchmetrics==1.2.1",
"torchsde==0.2.6",
"torchvision==0.15.2a0+fa99a53"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3301",
"DeviceID=CPU0",
"Family=107",
"L2CacheSize=3072",
"L2CacheSpeed=",
"Manufacturer=AuthenticAMD",
"MaxClockSpeed=3301",
"Name=AMD Ryzen 5 5600X3D 6-Core Processor ",
"ProcessorType=3",
"Revision=8450"
]
},
"Exceptions": [
{
"exception": "could not create a primitive descriptor for a matmul primitive",
"traceback": [
[
"C:\Users\brpac\stable-diffusion-webui\modules\call_queue.py, line 57, f",
"res = list(func(*args, **kwargs))"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\call_queue.py, line 36, f",
"res = func(*args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\txt2img.py, line 55, txt2img",
"processed = processing.process_images(p)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\processing.py, line 734, process_images",
"res = process_images_inner(p)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\processing.py, line 857, process_images_inner",
"p.setup_conds()"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\processing.py, line 1308, setup_conds",
"super().setup_conds()"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\processing.py, line 469, setup_conds",
"self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\processing.py, line 455, get_conds_with_caching",
"cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\prompt_parser.py, line 188, get_learned_conditioning",
"conds = model.get_learned_conditioning(texts)"
],
[
"C:\Users\brpac\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py, line 669, get_learned_conditioning",
"c = self.cond_stage_model(c)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\sd_hijack_clip.py, line 234, forward",
"z = self.process_tokens(tokens, multipliers)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\sd_hijack_clip.py, line 273, process_tokens",
"z = self.encode_with_transformers(tokens)"
],
[
"C:\Users\brpac\stable-diffusion-webui\modules\sd_hijack_clip.py, line 326, encode_with_transformers",
"outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py, line 822, forward",
"return self.text_model("
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py, line 740, forward",
"encoder_outputs = self.encoder("
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py, line 654, forward",
"layer_outputs = encoder_layer("
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py, line 383, forward",
"hidden_states, attn_weights = self.self_attn("
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1501, _call_impl",
"return forward_call(args, **kwargs)"
],
[
"C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py, line 322, forward",
"attn_output = torch.bmm(attn_probs, value_states)"
]
]
}
],
"CPU": {
"model": "AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD",
"count logical": 12,
"count physical": 6
},
"RAM": {
"total": "32GB",
"used": "14GB",
"free": "18GB"
},
"Extensions": [],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": " --use-ipex",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000,
"img_max_size_mp": 200,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": true,
"notification_volume": 100,
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/txt2img-images",
"outdir_img2img_samples": "outputs/img2img-images",
"outdir_extras_samples": "outputs/extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/txt2img-grids",
"outdir_img2img_grids": "outputs/img2img-grids",
"outdir_save": "log/images",
"outdir_init_images": "outputs/init-images",
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"upscaler_for_img2img": null,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"auto_launch_browser": "Local",
"enable_console_prompts": false,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120,
"sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors [6ce0161689]",
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"enable_emphasis": true,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": false,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"sdxl_crop_top": 0,
"sdxl_crop_left": 0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "Automatic",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_extra_noise": 0.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0.0,
"token_merging_ratio": 0.0,
"token_merging_ratio_img2img": 0.0,
"token_merging_ratio_hr": 0.0,
"pad_cond_uncond": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1.0,
"extra_networks_card_width": 0,
"extra_networks_card_height": 0,
"extra_networks_card_text_scale": 1.0,
"extra_networks_card_show_desc": true,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\/!?%^
;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"disable_token_counters": false,
"return_grid": true,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250,
"gallery_height": "",
"compact_prompt_box": false,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint"
],
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": false,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000,
"live_preview_fast_interrupt": false,
"hide_samplers": [],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_tmax": 0.0,
"s_noise": 1.0,
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"disabled_extensions": [],
"disable_all_extensions": "none",
"restore_config_state_file": "",
"sd_checkpoint_hash": "6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa",
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"lora_functional": false,
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
]
},
"Startup": {
"total": 7.755524635314941,
"records": {
"initial startup": 0.04003763198852539,
"prepare environment/checks": 0.03553628921508789,
"prepare environment/git version info": 0.07987260818481445,
"prepare environment/torch GPU test": 0.003002643585205078,
"prepare environment/clone repositores": 0.38505101203918457,
"prepare environment/run extensions installers": 0.0020017623901367188,
"prepare environment": 0.5309879779815674,
"launcher": 0.0010020732879638672,
"import torch": 2.348388910293579,
"import gradio": 0.8258023262023926,
"setup paths": 0.6720516681671143,
"import ldm": 0.004003763198852539,
"import sgm": 0.0,
"initialize shared": 0.9069221019744873,
"other imports": 0.5267219543457031,
"opts onchange": 0.0010004043579101562,
"setup SD model": 0.0020029544830322266,
"setup codeformer": 0.08307623863220215,
"setup gfpgan": 0.014514923095703125,
"set samplers": 0.0,
"list extensions": 0.002001523971557617,
"restore config state file": 0.0,
"list SD models": 0.002002716064453125,
"list localizations": 0.0010008811950683594,
"load scripts/custom_code.py": 0.0020017623901367188,
"load scripts/img2imgalt.py": 0.0,
"load scripts/loopback.py": 0.001001119613647461,
"load scripts/outpainting_mk_2.py": 0.0,
"load scripts/poor_mans_outpainting.py": 0.0,
"load scripts/postprocessing_caption.py": 0.0,
"load scripts/postprocessing_codeformer.py": 0.0,
"load scripts/postprocessing_create_flipped_copies.py": 0.0,
"load scripts/postprocessing_focal_crop.py": 0.0010008811950683594,
"load scripts/postprocessing_gfpgan.py": 0.0,
"load scripts/postprocessing_split_oversized.py": 0.0,
"load scripts/postprocessing_upscale.py": 0.0010008811950683594,
"load scripts/processing_autosized_crop.py": 0.0,
"load scripts/prompt_matrix.py": 0.0,
"load scripts/prompts_from_file.py": 0.0,
"load scripts/sd_upscale.py": 0.0,
"load scripts/xyz_grid.py": 0.0020012855529785156,
"load scripts/ldsr_model.py": 0.85965895652771,
"load scripts/lora_script.py": 0.08558416366577148,
"load scripts/scunet_model.py": 0.01601386070251465,
"load scripts/swinir_model.py": 0.014012813568115234,
"load scripts/hotkey_config.py": 0.0,
"load scripts/extra_options_section.py": 0.0,
"load scripts/hypertile_script.py": 0.02778482437133789,
"load scripts/hypertile_xyz.py": 0.0010166168212890625,
"load scripts/refiner.py": 0.0,
"load scripts/seed.py": 0.0,
"load scripts": 1.0110771656036377,
"load upscalers": 0.007062196731567383,
"refresh VAE": 0.0010004043579101562,
"refresh textual inversion templates": 0.0,
"scripts list_optimizers": 0.0010018348693847656,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.001001596450805664,
"initialize extra networks": 0.013011455535888672,
"scripts before_ui_callback": 0.001001119613647461,
"create ui": 0.45417308807373047,
"gradio launch": 0.32617712020874023,
"add APIs": 0.005025148391723633,
"app_started_callback/lora_script.py": 0.0,
"app_started_callback": 0.0
}
},
"Packages": [
"absl-py==2.0.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.1",
"aiosignal==1.3.1",
"altair==5.2.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==23.1.0",
"basicsr==1.4.2",
"beautifulsoup4==4.12.2",
"blendmodes==2022",
"cachetools==5.3.2",
"certifi==2023.11.17",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"colorama==0.4.6",
"contourpy==1.2.0",
"cycler==0.12.1",
"deprecation==2.1.0",
"einops==0.4.1",
"exceptiongroup==1.2.0",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.1",
"filelock==3.13.1",
"filterpy==1.4.5",
"fonttools==4.46.0",
"frozenlist==1.4.0",
"fsspec==2023.12.1",
"ftfy==6.1.3",
"future==0.18.3",
"gdown==4.7.1",
"gfpgan==1.3.8",
"gitdb==4.0.11",
"gitpython==3.1.32",
"google-auth-oauthlib==1.1.0",
"google-auth==2.25.1",
"gradio-client==0.5.0",
"gradio==3.41.2",
"grpcio==1.59.3",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.19.4",
"idna==3.6",
"imageio==2.33.0",
"importlib-metadata==7.0.0",
"importlib-resources==6.1.1",
"inflection==0.5.1",
"intel-extension-for-pytorch==2.0.110+gitc6ea20b",
"jinja2==3.1.2",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.11.2",
"jsonschema==4.20.0",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.3",
"lightning-utilities==0.10.0",
"llvmlite==0.41.1",
"lmdb==1.4.1",
"lpips==0.1.4",
"markdown==3.5.1",
"markupsafe==2.1.3",
"matplotlib==3.8.2",
"mpmath==1.3.0",
"multidict==6.0.4",
"networkx==3.2.1",
"numba==0.58.1",
"numpy==1.23.5",
"oauthlib==3.2.2",
"omegaconf==2.2.3",
"open-clip-torch==2.20.0",
"opencv-python==4.8.1.78",
"orjson==3.9.10",
"packaging==23.2",
"pandas==2.1.3",
"piexif==1.1.3",
"pillow==9.5.0",
"pip==22.2.1",
"platformdirs==4.1.0",
"protobuf==3.20.0",
"psutil==5.9.5",
"pyasn1-modules==0.3.0",
"pyasn1==0.5.1",
"pydantic==1.10.13",
"pydub==0.25.1",
"pyparsing==3.1.1",
"pysocks==1.7.1",
"python-dateutil==2.8.2",
"python-multipart==0.0.6",
"pytorch-lightning==1.9.4",
"pytz==2023.3.post1",
"pywavelets==1.5.0",
"pyyaml==6.0.1",
"realesrgan==0.3.0",
"referencing==0.31.1",
"regex==2023.10.3",
"requests-oauthlib==1.3.1",
"requests==2.31.0",
"resize-right==0.0.2",
"rpds-py==0.13.2",
"rsa==4.9",
"safetensors==0.3.1",
"scikit-image==0.21.0",
"scipy==1.11.4",
"semantic-version==2.10.0",
"sentencepiece==0.1.99",
"setuptools==63.2.0",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.0",
"soupsieve==2.5",
"starlette==0.26.1",
"sympy==1.12",
"tb-nightly==2.16.0a20231203",
"tensorboard-data-server==0.7.2",
"tf-keras-nightly==2.16.0.dev2023120510",
"tifffile==2023.9.26",
"timm==0.9.2",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.1",
"toolz==0.12.0",
"torch==2.0.0a0+gite9ebda2",
"torchdiffeq==0.2.3",
"torchmetrics==1.2.1",
"torchsde==0.2.6",
"torchvision==0.15.2a0+fa99a53",
"tqdm==4.66.1",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing-extensions==4.8.0",
"tzdata==2023.3",
"urllib3==2.1.0",
"uvicorn==0.24.0.post1",
"wcwidth==0.2.12",
"websockets==11.0.3",
"werkzeug==3.0.1",
"yapf==0.40.2",
"yarl==1.9.3",
"zipp==3.17.0"
]
}

What browsers do you use to access the UI?

Google Chrome

Console logs

C:\Users\brpac\stable-diffusion-webui>webui-user.bat
Already up to date.
venv "C:\Users\brpac\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.7.0-RC-4-g120a84bd
Commit hash: 120a84bd2f01ec4489bd12bd68f319798ef30782
Launching Web UI with arguments: --use-ipex
C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
no module 'xformers'. Processing without...
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 2.0.0a0+gite9ebda2. You might want to consider upgrading.
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Style database not found: C:\Users\brpac\stable-diffusion-webui\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.0.0a0+gite9ebda2.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Loading weights [636fe404e3] from C:\Users\brpac\stable-diffusion-webui\models\Stable-diffusion\Deliberate_v5.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: C:\Users\brpac\stable-diffusion-webui\configs\v1-inference.yaml
Startup time: 7.8s (prepare environment: 0.5s, import torch: 2.3s, import gradio: 0.8s, setup paths: 0.7s, initialize shared: 0.9s, other imports: 0.5s, load scripts: 1.0s, create ui: 0.5s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 28.6s (load weights from disk: 0.7s, create model: 0.3s, apply weights to model: 4.0s, move model to device: 0.1s, load textual inversion embeddings: 19.8s, calculate empty prompt: 3.6s).
Reusing loaded model Deliberate_v5.safetensors [636fe404e3] to load v1-5-pruned-emaonly.safetensors
Calculating sha256 for C:\Users\brpac\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors: 6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from C:\Users\brpac\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Applying attention optimization: InvokeAI... done.
Weights loaded in 22.2s (send model to cpu: 15.0s, calculate hash: 6.0s, load weights from disk: 0.2s, apply weights to model: 0.4s, move model to device: 0.6s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:12<00:00,  1.57it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00,  2.67it/s]
*** Error completing request███████████████████████████████████████████████████████████| 20/20 [00:07<00:00,  5.07it/s]
*** Arguments: ('task(0dm5h5nj1e1b6j7)', 'masterpiece, best quality', 'lowres, bad-artist-anime', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001779F47EC20>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\brpac\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\brpac\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\brpac\stable-diffusion-webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "C:\Users\brpac\stable-diffusion-webui\modules\processing.py", line 857, in process_images_inner
        p.setup_conds()
      File "C:\Users\brpac\stable-diffusion-webui\modules\processing.py", line 1308, in setup_conds
        super().setup_conds()
      File "C:\Users\brpac\stable-diffusion-webui\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "C:\Users\brpac\stable-diffusion-webui\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "C:\Users\brpac\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "C:\Users\brpac\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "C:\Users\brpac\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "C:\Users\brpac\stable-diffusion-webui\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 383, in forward
        hidden_states, attn_weights = self.self_attn(
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\brpac\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 322, in forward
        attn_output = torch.bmm(attn_probs, value_states)
    RuntimeError: could not create a primitive descriptor for a matmul primitive

---
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.07it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.17it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.08it/s]

Additional information

There are two normal runs, without the embedding, on either side of the errored attempt.

@brpack1968 added the bug-report label Dec 6, 2023
@w-e-w
Collaborator

w-e-w commented Dec 6, 2023

@Nuullll

@Nuullll
Contributor

Nuullll commented Dec 6, 2023

Thanks for reporting. I'll have a look.

Nuullll added a commit to Nuullll/stable-diffusion-webui that referenced this issue Dec 6, 2023
Cast `torch.bmm` args into same `dtype`.

Fixes the following error when using Text Inversion embedding (AUTOMATIC1111#14224):

```
RuntimeError: could not create a primitive descriptor for a matmul
primitive
```
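
For readers hitting the same error, the gist of the change is a dtype cast before the batched matmul. A minimal sketch of the idea (not the exact patch; `safe_bmm` is a name made up here for illustration):

```python
import torch

# oneDNN cannot build a matmul primitive for operands with mismatched
# dtypes, so both args are cast to a common dtype before torch.bmm.
def safe_bmm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    if a.dtype != b.dtype:
        common = torch.promote_types(a.dtype, b.dtype)  # widest of the two
        a, b = a.to(common), b.to(common)
    return torch.bmm(a, b)
```
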
@Nuullll
Contributor

Nuullll commented Dec 6, 2023

#14229

@Gourieff
Contributor

Gourieff commented Dec 8, 2023

Hi guys
Almost the same thing happens here with 1.7.0-RC + Intel ARC GPU and ReActor + FaceXLib:
Gourieff/sd-webui-reactor#245

...stable-diffusion-webui-1.7.0-RC\venv\lib\site-packages\facexlib\parsing\bisenet.py", line 105, in forward
feat_atten = torch.mul(feat, atten)
RuntimeError: could not create a primitive descriptor for a reorder primitive

With 1.6.1 everything works ok

Unfortunately I don't have an Intel Arc GPU to test it with 1.7.0-RC + ReActor.
I would really appreciate it if you could check whether it still occurs with the latest 1.7.0 commits.

@Nuullll
Contributor

Nuullll commented Dec 9, 2023

Almost the same thing happens here with 1.7.0-RC + Intel ARC GPU and ReActor + FaceXLib: Gourieff/sd-webui-reactor#245

RuntimeError: could not create a primitive descriptor for a reorder primitive

Not the same error as this ticket.

It's probably another issue in the underlying oneDNN implementation. I've seen "could not create a primitive descriptor for a reorder primitive" with a mul(x, sigmoid(y)) sequence in the ControlNet normalbae annotator.

And facexlib bisenet triggers this error with the same sequence:
https://github.com/xinntao/facexlib/blob/260620ae93990a300f4b16448df9bb459f1caba9/facexlib/parsing/bisenet.py#L97-L107
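
For reference, a minimal standalone sketch of that op sequence, mirroring the attention block in the bisenet code linked above (tensor shapes are illustrative, made up here):

```python
import torch

# The mul(x, sigmoid(y)) pattern reported to trip oneDNN's reorder
# primitive on XPU: a sigmoid-gated channel attention map multiplied
# element-wise into the feature tensor, as in facexlib's bisenet.
feat = torch.randn(1, 256, 64, 64)                # feature map (illustrative)
atten = torch.sigmoid(torch.randn(1, 256, 1, 1))  # channel attention weights
feat_atten = torch.mul(feat, atten)               # the failing call site
```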

Unfortunately, I haven't figured out a workaround yet.

@Nuullll
Contributor

Nuullll commented Dec 9, 2023

With 1.6.1 everything works ok

BTW, IPEX support was first introduced in 1.7.0-RC. What do you mean by "with 1.6.1 everything works ok"?

@Gourieff
Contributor

Gourieff commented Dec 9, 2023

BTW, IPEX support was first introduced in 1.7.0-RC. What do you mean by "with 1.6.1 everything works ok"?

I mean without the IPEX support implementation, but running on the same PC build.

@HyunJae5463

HyunJae5463 commented Dec 12, 2023

It's not working on Vlad's IPEX version either, but it works on DirectML, so it's surely IPEX-related. These things really make me regret buying an Intel card. I was a noob and thought "wow, cheap 16 GB VRAM for AI stuff, that's great", but getting things to work is just a miserable experience.

@brpack1968
Author

It's not working on Vlad's IPEX version either, but it works on DirectML, so it's surely IPEX-related.

I haven't had any issues with embeddings with SD.Next and my A750 that weren't operator error. Certainly not the error I reported here.

@HyunJae5463

HyunJae5463 commented Dec 12, 2023

I haven't had any issues with embeddings with SD.Next and my A750 that weren't operator error.

Sorry for the confusion; I was referring to issues with FaceXLib. If you try to use face-swap extensions with IPEX, you get broken results and FaceXLib errors in Vlad's build as well.

@Nuullll
Contributor

Nuullll commented Dec 13, 2023

@HyunJae5463 Can you please try to edit the facexlib source code to see if this works?

# venv\lib\site-packages\facexlib\parsing\bisenet.py, line 105
-        feat_atten = torch.mul(feat, atten)
+        feat_atten = torch.mul(atten, feat)
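
For what it's worth, the swap shouldn't change the output: element-wise multiplication is commutative, so only the argument order seen by the oneDNN backend differs. A quick sanity check on CPU (shapes are illustrative, made up here):

```python
import torch

# Element-wise mul is commutative, so swapping the args changes only
# the order the backend sees, not the result. Shapes mirror a typical
# channel-attention broadcast.
feat = torch.randn(2, 64, 8, 8)
atten = torch.sigmoid(torch.randn(2, 64, 1, 1))
assert torch.equal(torch.mul(feat, atten), torch.mul(atten, feat))
```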

@HyunJae5463

@HyunJae5463 Can you please try to edit the facexlib source code to see if this works?

This seems to fix GFPGAN not working at all, but swapped faces are still very pixelated and blurry.
Left is normal; right is what you get when using face-swap extensions. This works normally when I use DirectML and only gives weird results with IPEX.
[screenshot: normal result on the left, pixelated and blurry face-swap result on the right]

@Gourieff
Contributor

Both GFPGAN and CodeFormer? Such a result means that the face restoration method (inside ReActor or in general) doesn't work for some reason with the IPEX build 🤔

This is the part of ReActor's code where the script restores the face; it's a rather simple implementation, and it works as long as the GFPGAN and CodeFormer modules are ready (imports added here for readability):

import numpy as np
from PIL import Image

# Restore the swapped face with the selected restorer, then blend the
# restored image back over the original at the configured visibility.
original_image = result_image.copy()
numpy_image = np.array(result_image)
if enhancement_options.face_restorer.name() == "CodeFormer":
    numpy_image = codeformer_model.codeformer.restore(
        numpy_image, w=enhancement_options.codeformer_weight
    )
else:
    numpy_image = enhancement_options.face_restorer.restore(numpy_image)
restored_image = Image.fromarray(numpy_image)
result_image = Image.blend(
    original_image, restored_image, enhancement_options.restorer_visibility
)

Could you please try to restore your result via the Extras tab with A1111's integrated GFPGAN or CodeFormer restoration option (without using ReActor)? Will it succeed?

@HyunJae5463

HyunJae5463 commented Dec 13, 2023

Could you please try to restore your result via the Extras tab with A1111's integrated GFPGAN or CodeFormer restoration option (without using ReActor)? Will it succeed?

The face is blurry and pixelated with both GFPGAN and CodeFormer. ReActor has a "Face Mask Correction" feature for this, but it's not working at all with IPEX. Since the code change suggested above, it no longer gives an error, but it also doesn't swap any faces. Before the code change, the Face Mask Correction feature gave the error I posted here Gourieff/sd-webui-reactor#245 (comment), in case that helps.

And I'm not quite sure what you want me to do in Extras. If you want me to put the pixelated and blurry picture into Extras and enable CodeFormer or GFPGAN, it's still pixelated and blurry.
[screenshot: swapped-face result restored via Extras, still pixelated and blurry]

If you wanted me to put the normal source image into Extras before the face swap and enable either face restoration option, then everything works normally and the output is not pixelated or blurry.
[screenshot: source image restored via Extras, looks normal]

If I use ReActor via Extras, set "Restore Face" inside ReActor to "None", and use A1111's face restoration instead, the result is again pixelated and blurry.

So both GFPGAN and CodeFormer are working as intended, I guess, but as soon as I use ReActor to swap a face, the swapped face is pixelated and blurry.

@Gourieff
Contributor

Gourieff commented Dec 14, 2023

And I'm not quite sure what you want me to do in Extras. If you want me to put the pixelated and blurry picture into Extras and enable CodeFormer or GFPGAN, it's still pixelated and blurry.

If I use ReActor via Extras, set "Restore Face" inside ReActor to "None", and use A1111's face restoration instead, the result is again pixelated and blurry.

This means that face restoration doesn't work with Intel Arc GPUs + IPEX (in the current RC build), whether you use ReActor or not.

@HyunJae5463

HyunJae5463 commented Dec 14, 2023

This means that face restoration doesn't work with Intel Arc GPUs + IPEX (in the current RC build), whether you use ReActor or not.

Face restoration works fine. It's only broken when I use ReActor to swap a face. ONLY swapped faces are broken, blurry, and pixelated; faces restored via A1111's face restoration on normally generated images are fine.

@brpack1968
Author

Face restoration works fine. It's only broken when I use ReActor to swap a face.

I need to ask: why have you hijacked my bug report for something completely unrelated? What does this have to do with the embeddings problem I reported? A bug which has apparently been fixed in the latest RC, by the way.

@Gourieff
Contributor

Gourieff commented Dec 15, 2023

ONLY swapped faces are broken

This is just an image, and you said earlier that when you tried to restore a swapped image with A1111's built-in face restoration (with ReActor disabled), the image remained blurry and pixelated. This means that face restoration doesn't work in the IPEX build with an Intel Arc GPU for some reason, and ReActor has nothing to do with it, because it's a very simple script that uses the same algorithms and libraries as A1111's built-in face restoration.

If you want to help resolve this issue, please don't mislead.

Here's the test image; please put it into Extras and try to restore it with A1111's built-in CodeFormer or GFPGAN (with ReActor disabled):
[attached test image]

@Gourieff
Contributor

Gourieff commented Dec 15, 2023

I need to ask: why have you hijacked my bug report for something completely unrelated? What does this have to do with the embeddings problem I reported? A bug which has apparently been fixed in the latest RC, by the way.

Sorry about that, my friend, but it was almost the same error and related to the IPEX build.
If your issue has been resolved, you can close this report and we will continue in another thread.

ruchej pushed a commit to ruchej/stable-diffusion-webui that referenced this issue Sep 30, 2024
Cast `torch.bmm` args into same `dtype`.

Fixes the following error when using Text Inversion embedding (AUTOMATIC1111#14224):

```
RuntimeError: could not create a primitive descriptor for a matmul
primitive
```