CUDA out of memory errors after several frames of batch input #23
-
Hi, thanks. I'm pretty new to Python, but it is my understanding that the garbage collector frees memory as soon as a variable goes out of scope. That said, I'm aware there must still be a memory leak somewhere, since over long runs I also still get out-of-VRAM errors. The big disadvantage with batch img2img is that the MiDaS model needs to be reloaded for every image. If you cloned the MiDaS repo into repositories/midas, you can run the
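For anyone hitting this before the fix lands, a minimal sketch of the usual PyTorch workaround: drop the Python references, run the garbage collector, and flush the CUDA allocator cache between frames. The names below are illustrative placeholders, not code from this repo.

```python
import gc
import torch

def process_one_frame():
    # Stand-in for one frame of batch img2img: allocate GPU memory the way a
    # freshly loaded depth model and its activations would, then use it.
    activations = torch.empty(1024, 1024, 64, device="cuda")
    result = activations.mean().item()

    # Drop the reference, collect garbage, and hand the cached blocks back to
    # the CUDA driver so the footprint does not grow from frame to frame.
    del activations
    gc.collect()
    torch.cuda.empty_cache()
    return result

if __name__ == "__main__":
    for _ in range(100):
        process_one_frame()
```

Note that `torch.cuda.empty_cache()` can only release memory PyTorch has cached but no longer references, so the `del` and `gc.collect()` steps have to come first.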
-
Should be solved now in v0.1.9
-
Thank you so much for developing this! A lot of my headspace is dedicated to extracting depth from SD outputs, video outputs, etc., but I am not technically capable, so this is super liberating.
One issue with batch inputs: I am hitting CUDA memory errors after a certain number of frames because the GPU isn't clearing used VRAM. Is this something you could implement easily?
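If it helps to confirm where the VRAM goes, PyTorch's allocator counters can be logged after each frame; if `torch.cuda.memory_allocated()` keeps climbing, some reference is still being held alive. A rough sketch under that assumption (the frame loop below is a placeholder, not the extension's actual pipeline):

```python
import torch

def log_vram(tag: str) -> None:
    # Report how much GPU memory PyTorch currently has allocated vs. reserved.
    allocated = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"{tag}: allocated {allocated:.1f} MiB, reserved {reserved:.1f} MiB")

if __name__ == "__main__":
    kept_alive = []  # simulates references accidentally retained across frames
    for frame in range(5):
        kept_alive.append(torch.empty(1024, 1024, 16, device="cuda"))
        log_vram(f"after frame {frame}")
```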