[Bug]: VRAM usage is way higher #6307
Comments
When did you last update the WebUI? This may be from a Windows update. You may want to disable browser hardware acceleration; I've found that the openoutpaint extension automatically uses some VRAM when browser hardware acceleration is enabled.
Same issue here: even for a simple 5.x5 I can't use the normal SD 2.1 model or any upscale. That happened with the new update today. :/
I updated the WebUI around 2 PM UTC+1. The last major Windows update was a few weeks ago. When I used the WebUI a few days ago, everything still worked without any errors, and I don't have the openoutpaint extension.
I made a fresh install just now with an RTX 4090. Running out of VRAM constantly; this never happened before. RuntimeError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 23.99 GiB total capacity; 12.81 GiB already allocated; 0 bytes free; 21.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
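For anyone who wants to try the max_split_size_mb hint from that error message, below is a minimal sketch of setting it through the launcher. It assumes the stock webui-user.bat layout, and the 512 value is purely illustrative, not a tuned recommendation.

```bat
rem Sketch only: set PYTORCH_CUDA_ALLOC_CONF before launching the webui.
rem The 512 MB split size is an example value, not a recommendation.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
set COMMANDLINE_ARGS=

call webui.bat
```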
Here's the result I got: Time taken: 4m 49.25s, Torch active/reserved: 4777/6598 MiB, Sys VRAM: 8192/8192 MiB (100.0%). Commit hash: 24d4a08
@Alphyn-gunner It's twice the size because of the hires upscale value.
I have the same problem, and I don't even use the hi-res fix! I just do normal generations, but the VRAM usage is WAYYYYY higher now! I can't do the same batch size that I used to be able to do previously! Everything else is the same, I changed nothing; I only did a git pull.
In the latest versions, the hires fix has been modified. Do the 5f4fa94 versions also have this bug?
For what it's worth, I've also noticed this when training an embedding after updating today via a fresh install. Previously I could train a 512/512 embedding and use the "Read parameters" option on the SD1.4 checkpoint. The message I get states that 512 MB of additional VRAM is needed. For experimentation, I lowered the 512 values and the embedding began to train. However, when it tried to generate an image mid-training, the CUDA memory issue occurred again. It is worth noting that I'm able to use regular prompts as well as the embedding that was terminated early after running out of memory, so this might be helpful in determining what the cause is.
Same here; as suggested, using a less extreme upscale option worked. However, it is still considerably slower. Having different hires fix back ends is nice and might yield better results, but why is this the only option? Why not add both? What is the last known commit that doesn't have this change? I think I'll switch back to that for the time being.
The current Hires. Fix seems to be tuned much more for higher-end cards.
For now you could always check out a previous version:
This is the one I'm using for the time being, as I find the system pretty much unusable as it is now.
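For reference, a rough sketch of what checking out an older commit looks like, run from the stable-diffusion-webui folder. The hash is a placeholder; substitute whichever known-good commit works for you (several candidates are mentioned later in this thread).

```bat
rem Sketch: pin the install to an earlier commit. <known-good-hash> is a
rem placeholder for whichever commit you want to revert to.
git checkout <known-good-hash>

rem To return to the newest code later:
git checkout master
git pull
```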
Yes, I use xformers. What do you mean by the image size limit?
I have the same issue. Found it while using Hi-res fix. I completely understand how to use it; that's not the issue. Now I run out of VRAM for the same batch sizes/dimensions as before. @lolxdmainkaisemaanlu also pointed out the same, except they are not even using hi-res; I just happened to notice it on hi-res. It seems to be an issue independent of the hi-res fix. Reverting fd4461d as well, courtesy of @DrGunnarMallon.
I'm running A1111 on a 2060 Super, so 8 GB of VRAM. I had a bit of a workflow: do a couple of 512x512 low-level passes, then bump it up to 768 to start getting in detail, and finally finish off by upscaling to 1024. I've been doing passes of this process for almost a week (I've been making daily "Twelve Days of Christmas" images), and even on my older card it works. Now, even going from 512 to 768 with just 50 steps, it just wrecks. I currently cannot render anything at 768x768. I tried resetting to the hash recommended above, but I'm still going OOM. Is there another hash, prior to that one, you'd recommend reverting to?
4af3ca5, try that one. The other repo was throwing errors for me as well. I'm currently back up and running like I was before trying to get the latest build.
That one isn't working for me either. Still going OOM. After bashing
When you open your auto1111 cmd window, it tells you the commit version as soon as you run webui.bat.
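If you would rather not launch the UI just to read the hash, git can report it directly; a quick sketch, run from the stable-diffusion-webui folder:

```bat
rem Print the commit the local install is currently on.
git rev-parse --short HEAD

rem Or show it together with the commit message:
git log -1 --oneline
```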
I restored back to the master branch, and NVIDIA just put out a driver update. One of the two affected things, so at least I'm getting things to work better. Memory usage SEEMS better. Still watching it for a bit, though.
Did you add
I'm not sure how related this is, but I haven't seen anybody else mention it.
Having the same issue: just loading the WebUI immediately uses (and keeps using) 5 out of the 8 GB of VRAM, and with each generation the amount of VRAM in use seems to increase by a few MB (which stacks up fast over time). img2img is a no-go at all, as it immediately OOMs.
Same issue here. RuntimeError: CUDA out of memory. Tried to allocate 76.38 GiB (GPU 0; 12.00 GiB total capacity; 2.57 GiB already allocated; 7.19 GiB free; 2.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
See the possible source in the "new hires" change: #6725
I do not use Hires Fix, but I can no longer change models on Colab because it causes a memory overflow; the --lowram, --lowvram and --medvram options did not help. This is the default RAM reservation at startup. Update: I found a solution:
Regardless, I saw that every time I change the model, it occupies 1 GB more memory, so after a while it causes a memory overflow again.
I have this problem as well. It consists of..
Hey guys, I got a similar issue: I updated the UI, and for some reason the VRAM usage skyrocketed. So if you just updated the UI and you're now running out of VRAM, remove the command lines for the updates. Hopefully it helps!
Which file did you edit?
The launcher (the webui-user.bat file). I had put two command lines for the updates there, thinking they would only affect the launch, but they were actually taking 3 GB of VRAM for no reason. In your case that doesn't seem to be the issue. Sorry I can't help ^^'.
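For context, here is a hypothetical webui-user.bat along the lines described above; the git pull line stands in for the kind of auto-update command being removed, and the rest mirrors the stock template. Your actual file may differ.

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

rem An update line like this runs on every launch; comment it out (or delete
rem it) if you do not want the UI to update itself automatically.
rem git pull

call webui.bat
```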
I've made two PRs that I think will finally address this. voldy (auto) has also made recent improvements. I also closed #6725 and #7002, since this issue is the most relevant. The former was just asking for the old hires fix to be added back (where width/height is specified manually, which is supported), and the latter is technically a duplicate of this issue.
Closing this as I've done a few tests and VRAM usage is significantly lower as of the latest commits.
Is there an existing issue for this?
What happened?
I updated the WebUI a few minutes ago, and now the VRAM usage when generating an image is way higher. I have 3 monitors (2x 1920x1080 & 1x 2560x1440) and I use Wallpaper Engine on all of them, but I have Discord open on one of them nearly 24/7, so Wallpaper Engine is only active for two monitors. 1.5 GB of VRAM is used when I am on the desktop without the WebUI running.
Web Browser: Microsoft Edge (Chromium)
OS: Windows 11 (Build number: 22621.963)
GPU: NVIDIA GeForce RTX 3070 Ti (KFA2)
CPU: Intel Core i7-11700K
RAM: Corsair VENGEANCE LPX 32 GB (2 x 16 GB) DDR4 DRAM 3200 MHz C16
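A quick way to verify the idle VRAM figure mentioned above (the ~1.5 GB used on the desktop) outside the WebUI is nvidia-smi, assuming a standard NVIDIA driver install:

```bat
rem Report current VRAM usage per GPU, independent of the webui.
nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv
```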
Steps to reproduce the problem
What should have happened?
The generation should complete without any errors
Commit where the problem happens
1cfd8ae
What platforms do you use to access UI ?
Windows
What browsers do you use to access the UI ?
Microsoft Edge
Command Line Arguments
Additional information, context and logs
I have the config for animefull from the Novel AI leak in the configs folder under the name Anything V3.0.yaml, but I get this error too when I remove it from the configs folder and completely restart the WebUI. This is the error I get: