
Conversation

@BradPepersAMD

This change keeps the default behavior, where MIOpen is disabled on many AMD architectures, but lets you set COMFYUI_ENABLE_MIOPEN=1 as an environment variable to override this and keep using MIOpen. The goal is to make it easier to test with MIOpen on or off without changing code. AMD is also looking to gather information to help us resolve this problem, and to do so we need an easy way for users to turn MIOpen on.
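For illustration, reading such an opt-in switch might look like this (a minimal sketch; `miopen_enabled` is a hypothetical helper for this comment, not code from the PR):

```python
import os

def miopen_enabled() -> bool:
    # Hypothetical helper: MIOpen stays off by default, and setting
    # COMFYUI_ENABLE_MIOPEN=1 opts back in without any code change.
    return os.environ.get("COMFYUI_ENABLE_MIOPEN", "0") == "1"
```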

You can see the discussion around gathering the data here.

@jammm

jammm commented Dec 17, 2025

@comfyanonymous we would appreciate a quick merge here :)

@RandomGitUser321
Contributor

RandomGitUser321 commented Dec 17, 2025

I already explained it a while ago in another PR:

#10678 (comment)

Why make it an environment variable when you can just make it a launch arg like --force-cudnn-enabled?

In cli_args.py, you'd add something like:
parser.add_argument("--force-cudnn-enabled", action="store_true", help="Force cuDNN (MIOpen on AMD) to stay enabled.")

model_management.py already imports the args.

I remember testing it and it worked; it just took a simple if/else statement around what's already in model_management.py. Nobody really wants to juggle environment variables, just make it a launch flag. But I suppose it's not that big of a deal to add set COMFYUI_ENABLE_MIOPEN=1 to the startup bat. It just seems cleaner to make it an arg and add some help info, so that people know it's even there.

In cli_args.py somewhere around line 90 is where I put this:

parser.add_argument("--force-cudnn-enabled", action="store_true", help="helpful words")

In model_management.py around line 340:

if not any(a in arch for a in AMD_RDNA2_AND_OLDER_ARCH):
    if args.force_cudnn_enabled:
        torch.backends.cudnn.enabled = True
        logging.info("Set: torch.backends.cudnn.enabled = True by user")
    else:
        torch.backends.cudnn.enabled = False  # Seems to improve things a lot on AMD
        logging.info("Set: torch.backends.cudnn.enabled = False for better AMD performance.")

That should be along the lines of what I did, when I tested it.
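The flag-plus-gate idea can be sketched end to end without torch (a minimal sketch; the arch values below are hypothetical stand-ins for ComfyUI's `AMD_RDNA2_AND_OLDER_ARCH`, and `cudnn_enabled` is an illustrative helper, not the PR's code):

```python
import argparse

# Hypothetical stand-in values for ComfyUI's AMD_RDNA2_AND_OLDER_ARCH list.
AMD_RDNA2_AND_OLDER_ARCH = ["gfx1030", "gfx1010", "gfx900"]

parser = argparse.ArgumentParser()
parser.add_argument("--force-cudnn-enabled", action="store_true",
                    help="Force cuDNN/MIOpen on even where ComfyUI "
                         "would disable it by default.")

def cudnn_enabled(arch: str, force: bool) -> bool:
    # Mirrors the if/else gate: archs outside the RDNA2-and-older list
    # get cuDNN turned off unless the user passes the flag.
    if not any(a in arch for a in AMD_RDNA2_AND_OLDER_ARCH):
        return force
    return True  # assume the default (enabled) is kept otherwise
```

With `action="store_true"`, `parser.parse_args([]).force_cudnn_enabled` defaults to `False`, so existing behavior is unchanged unless the flag is passed.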

@jammm

jammm commented Dec 17, 2025

I already explained it a while ago in another PR:

#10678 (comment)

Ideally we'd like users to run their comfyui as-is without changing any commands so it's easy to re-use existing workflows. We're fine with either this or #10678 being merged.

@RandomGitUser321
Contributor

RandomGitUser321 commented Dec 18, 2025

Well, either way, cuDNN breaks VAE decodes on my Windows native setup with TheRock nightly wheels from https://rocm.nightlies.amd.com/v2/gfx110X-all/ (tested with 2.11.0a0+rocm7.11.0a20251217). Both float16 and float32 produce a bunch of noise, and bfloat16 spits a HIP-related error in the console. You can see it working fine in the previews, though. I only tested with the Flux VAE, and I made sure to clear the .miopen cache between tests.

@BradPepersAMD BradPepersAMD marked this pull request as ready for review December 18, 2025 19:37