
Conversation

@NeuralFault (Contributor)

Changed index from 110X-dgpu to 110X-all, as the former stopped receiving builds and was merged with the other 110X architectures in the latter index.

Added new environment variables to improve GPU stability and performance, including settings that prevent out-of-memory errors and re-enable cuDNN in ComfyUI to allow proper MIOpen support.

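For illustration, the index change described above amounts to a one-line swap. A minimal sketch, assuming the layout of AMD's nightly wheel indexes; the constant name is hypothetical and this is not the repository's actual code:

    // Old index: stopped receiving builds once gfx110X archs were merged.
    // const string RocmNightlyIndex = "https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/";

    // New combined index covering all gfx110X architectures.
    const string RocmNightlyIndex = "https://rocm.nightlies.amd.com/v2/gfx110X-all/";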
@gemini-code-assist (Contributor)

Summary of Changes

Hello @NeuralFault, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the robustness and compatibility of ComfyUI installations on Windows systems utilizing AMD ROCm. It updates the ROCm nightly build index to a more current and comprehensive source, and crucially, introduces specific environment variables designed to mitigate common GPU-related issues such as out-of-memory errors and ensure the correct functioning of MIOpen for accelerated computations.

Highlights

  • ROCm Nightly Index Update: The ROCm nightly index for gfx110X architectures has been updated from gfx110X-dgpu to gfx110X-all to ensure continued access to the latest builds, as the former index is no longer receiving updates.
  • GPU Stability and Performance Environment Variables: New environment variables have been introduced for Windows ROCm to enhance GPU stability and performance. This includes PYTORCH_ALLOC_CONF to prevent out-of-memory errors and COMFYUI_USE_MIOPEN to re-enable cuDNN for proper MIOpen support within ComfyUI (a sketch of passing these variables to a process follows this list).
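
To make these settings concrete, here is a minimal C# sketch of passing both variables to a ComfyUI process. The ProcessStartInfo usage and the python main.py invocation are illustrative assumptions, not StabilityMatrix's actual launch code; the variable values are taken verbatim from the PR.

    using System.Diagnostics;

    // Hypothetical launcher snippet: inject the new environment variables.
    var startInfo = new ProcessStartInfo("python", "main.py")
    {
        UseShellExecute = false // required for Environment entries to be applied
    };
    // Caps allocator block splitting and triggers garbage collection earlier,
    // which helps avoid GPU out-of-memory errors and driver timeouts.
    startInfo.Environment["PYTORCH_ALLOC_CONF"] =
        "max_split_size_mb:6144,garbage_collection_threshold:0.8";
    // Re-enables "cudnn" in ComfyUI, which MIOpen needs to function properly.
    startInfo.Environment["COMFYUI_USE_MIOPEN"] = "1";
    Process.Start(startInfo);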

@gemini-code-assist (bot) left a comment

Code Review

This pull request updates the ROCm nightly index URL for gfx110X architectures and adds two new environment variables for Windows ROCm users to improve GPU stability and performance in ComfyUI. The changes look good and align with the description. I've added a couple of suggestions to improve code readability and maintainability by refactoring long comments and suggesting the use of constants for magic numbers. Overall, a good improvement for AMD users.

.SetItem("MIOPEN_FIND_MODE", "2")
.SetItem("TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL", "1");
.SetItem("TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL", "1")
.SetItem("PYTORCH_ALLOC_CONF", "max_split_size_mb:6144,garbage_collection_threshold:0.8") // greatly helps prevent GPU OOM and instability/driver timeouts/OS hard locks and decreases dependency on Tiled VAE at standard res's

Severity: medium

This line contains magic numbers (6144, 0.8) and a very long comment, which can impact readability and maintainability.

I recommend extracting the numeric values into named constants to make their purpose explicit and easier to modify in the future. For example:

private const int PytorchAllocMaxSplitSizeMb = 6144;
private const double PytorchAllocGcThreshold = 0.8;

Additionally, the comment should be wrapped to fit within standard line length limits. I've provided a suggestion for reformatting the comment. Please consider applying the constants as well.

            // Greatly helps prevent GPU OOM, instability, driver timeouts, and OS hard locks.
            // It also decreases dependency on Tiled VAE at standard resolutions.
            .SetItem("PYTORCH_ALLOC_CONF", "max_split_size_mb:6144,garbage_collection_threshold:0.8")

.SetItem("TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL", "1");
.SetItem("TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL", "1")
.SetItem("PYTORCH_ALLOC_CONF", "max_split_size_mb:6144,garbage_collection_threshold:0.8") // greatly helps prevent GPU OOM and instability/driver timeouts/OS hard locks and decreases dependency on Tiled VAE at standard res's
.SetItem("COMFYUI_USE_MIOPEN", "1"); // re-enables "cudnn" in ComfyUI as it's needed for MiOpen to function properly

Severity: medium

This comment is quite long and makes the line exceed typical length recommendations. For better readability, it's good practice to place comments on their own line(s) above the code they refer to, especially when they are descriptive.

            // Re-enables "cudnn" in ComfyUI as it's needed for MIOpen to function properly.
            .SetItem("COMFYUI_USE_MIOPEN", "1");

@mohnjiles (Contributor) left a comment

Sweet, thanks!

@mohnjiles merged commit 4516c34 into LykosAI:main on Jan 19, 2026
2 of 3 checks passed
@github-actions (bot) locked and limited the conversation to collaborators on Jan 19, 2026
@NeuralFault deleted the ComfyUI-WinROCm-update branch on January 19, 2026 at 01:16