Dynamic_vram: Remove Aimdo exemption for empty_cache (fixes VRAM leak) #12260
Merged
comfyanonymous merged 1 commit into Comfy-Org:master on Feb 4, 2026
Conversation
It's more important to get the torch caching allocator GC up and running than supporting the pyt2.7 bug. Switch it on. Defeature dynamic_vram + pyt2.7.
rattus128 added a commit to rattus128/ComfyUI that referenced this pull request on Feb 9, 2026
This change was only needed to get around the pytorch 2.7 mempool bugs, and should have been reverted along with Comfy-Org#12260. This fixes a different memory leak where pytorch gets confused about cache emptying.
comfyanonymous pushed a commit that referenced this pull request on Feb 9, 2026
* revert threaded model loader change: This change was only needed to get around the pytorch 2.7 mempool bugs, and should have been reverted along with #12260. This fixes a different memory leak where pytorch gets confused about cache emptying.
* load non comfy weights
* MPDynamic: Pre-generate the tensors for vbars. Apparently this is an expensive operation that slows things down.
* bump to aimdo 1.8. New features: watermark limit feature, logging enhancements, -O2 build on linux.
luna-niemitalo pushed a commit to luna-niemitalo/ComfyUI that referenced this pull request on Feb 11, 2026
It's more important to get the torch caching allocator GC up and running than supporting the pyt2.7 bug. Switch it on. Defeature dynamic_vram + pyt2.7.
luna-niemitalo pushed a commit to luna-niemitalo/ComfyUI that referenced this pull request on Feb 11, 2026
* revert threaded model loader change: This change was only needed to get around the pytorch 2.7 mempool bugs, and should have been reverted along with Comfy-Org#12260. This fixes a different memory leak where pytorch gets confused about cache emptying.
* load non comfy weights
* MPDynamic: Pre-generate the tensors for vbars. Apparently this is an expensive operation that slows things down.
* bump to aimdo 1.8. New features: watermark limit feature, logging enhancements, -O2 build on linux.
It's more important to get the torch caching allocator GC up and running than supporting the pyt2.7 bug. Switch it on.
Defeature dynamic_vram + pyt2.7.
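For context: PyTorch's caching allocator keeps freed blocks in its own cache, where they still count as reserved VRAM until `torch.cuda.empty_cache()` returns them to the driver. A minimal toy model of that behavior, in pure Python with no CUDA required (the class and field names here are illustrative, not PyTorch internals):

```python
# Toy sketch of a caching allocator: freed blocks stay cached (still
# "reserved" from the driver's point of view) until empty_cache() runs.
class CachingAllocator:
    def __init__(self):
        self.allocated = 0   # bytes handed out to live tensors
        self.reserved = 0    # bytes held from the driver, cache included
        self.cache = []      # freed block sizes kept around for reuse

    def alloc(self, size):
        # Reuse a cached block if one is big enough, else reserve new memory.
        for i, blk in enumerate(self.cache):
            if blk >= size:
                self.cache.pop(i)
                self.allocated += blk
                return blk
        self.reserved += size
        self.allocated += size
        return size

    def free(self, size):
        # Freed blocks go to the cache, not back to the driver:
        # reserved stays high even with nothing allocated (the "leak").
        self.allocated -= size
        self.cache.append(size)

    def empty_cache(self):
        # Hand all cached blocks back to the driver.
        self.reserved -= sum(self.cache)
        self.cache.clear()

alloc = CachingAllocator()
blk = alloc.alloc(2 << 30)   # e.g. a 2GB decode buffer
alloc.free(blk)
print(alloc.reserved)        # still 2147483648: the cache holds the block
alloc.empty_cache()
print(alloc.reserved)        # 0: memory actually released
```

This is why exempting a pool from `empty_cache` shows up as VRAM that never comes back after a cache clear, as in the before/after numbers below.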
Example test conditions:
RTX3060, 32GB RAM, Windows, pyt 2.8, --fast dynamic_vram
LTX tiled decode then clear the cache in the GUI
Before (2.4GB VRAM after clear):
After (0.7GB VRAM after clear):