Dynamic_vram: Remove Aimdo exemption for empty_cache (fixes VRAM leak)#12260

Merged
comfyanonymous merged 1 commit into Comfy-Org:master from rattus128:prs/dynamic-vram-fixes/empty-cache
Feb 4, 2026
Conversation

@rattus128 (Contributor)

It's more important to get the torch caching allocator GC up and running than supporting the pyt2.7 bug. Switch it on.

Defeature dynamic_vram + pyt2.7.

Example test conditions:

RTX3060, 32GB RAM, Windows, pyt 2.8, --fast dynamic_vram
LTX tiled decode then clear the cache in the GUI

Before (2.4GB VRAM after clear):

[screenshot: before]

After (0.7GB VRAM after clear):

[screenshot: after]
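The before/after numbers above come down to how a caching allocator behaves: freed tensors go back into the allocator's cache rather than to the device, so reserved VRAM stays high until `empty_cache()` actually returns cached blocks. The toy model below is a sketch of that mechanism only (the `CachingAllocator` class is hypothetical; PyTorch's real CUDA caching allocator is far more involved):

```python
class CachingAllocator:
    """Toy model of a caching allocator (illustrative only)."""

    def __init__(self):
        self.allocated = 0   # bytes held by live tensors
        self.reserved = 0    # bytes held from the device (allocated + cached)
        self._cache = []     # freed block sizes kept around for reuse

    def malloc(self, size):
        # Reuse a cached block of the same size if one exists.
        if size in self._cache:
            self._cache.remove(size)
        else:
            self.reserved += size  # otherwise grab fresh memory from the device
        self.allocated += size
        return size

    def free(self, size):
        # Freed blocks go into the cache, NOT back to the device:
        # reserved VRAM does not drop here.
        self.allocated -= size
        self._cache.append(size)

    def empty_cache(self):
        # Return all cached (unused) blocks to the device.
        self.reserved -= sum(self._cache)
        self._cache.clear()


alloc = CachingAllocator()
blk = alloc.malloc(2 * 1024**3)  # e.g. a large decode buffer
alloc.free(blk)
print(alloc.reserved)            # still ~2 GiB reserved, 0 allocated
alloc.empty_cache()
print(alloc.reserved)            # 0: cache handed back to the device
```

If `empty_cache` is never invoked on a memory pool (as in the exemption this PR removes), the reserved figure stays pinned at its high-water mark even after all tensors are gone, which matches the 2.4GB-after-clear symptom reported above.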

@rattus128 rattus128 changed the title mm: Remove Aimdo exemption for empty_cache Dynamic_vram: Remove Aimdo exemption for empty_cache (fixes VRAM leak) Feb 4, 2026
@comfyanonymous comfyanonymous merged commit 855849c into Comfy-Org:master Feb 4, 2026
12 checks passed
rattus128 added a commit to rattus128/ComfyUI that referenced this pull request Feb 9, 2026
This change was only needed to get around the pytorch 2.7 mempool bugs,
and should have been reverted along with Comfy-Org#12260. This fixes a different
memory leak where pytorch gets confused about cache emptying.
comfyanonymous pushed a commit that referenced this pull request Feb 9, 2026
* revert threaded model loader change

This change was only needed to get around the pytorch 2.7 mempool bugs,
and should have been reverted along with #12260. This fixes a different
memory leak where pytorch gets confused about cache emptying.

* load non comfy weights

* MPDynamic: Pre-generate the tensors for vbars

Apparently this is an expensive operation that slows down things.

* bump to aimdo 1.8

New features:
watermark limit feature
logging enhancements
-O2 build on linux
luna-niemitalo pushed a commit to luna-niemitalo/ComfyUI that referenced this pull request Feb 11, 2026
luna-niemitalo pushed a commit to luna-niemitalo/ComfyUI that referenced this pull request Feb 11, 2026