gitea 1.12.1 arm-6 generates huge load on synology DS216j #12016
Comments
If you have many repositories, it will get back to normal load once all language statistics have been calculated |
We should add this to the blog so that users know what will happen when they upgrade to v1.12. |
Same experience here. I've been running it on a 512MB VPS and 1.12 fails due to a RAM error. 1.11.8 works fine. |
I do not know if this is relevant or not, and certainly at a much lower scale I would not expect to see any issue. I spun up 2 docker containers, without even having set up and configured gitea at all, i.e. without any repositories. This is what I got after letting it run for a while (docker ps and htop -p 20777,21186 output, showing the PID, USER, PRI, NI, VIRT, RES, SHR, S, CPU%, MEM%, TIME+ and Command columns). Is this to be expected? |
After following lafriks's advice everything came back to normal. I've been running 1.12.1 since then with no other issues. BTW, thanks to all for the prompt replies. |
Letting it run 1.5 days longer, still not a big issue, but 1.12 is still using 9x more CPU than 1.11, and I wonder why, as this is still an installation without any repository. |
Same issue here, running on a Raspberry Pi 3 with 1GB RAM.
Thanks! |
From the comments it's very difficult to determine if these complaints are regarding base load for a brand new instance or are regarding load just following migration. There are a number of heavy working migrations between 1.11 and 1.12, and therefore it is not surprising that load would increase considerably during the migration. It would be helpful to disentangle these issues from the base load.
If there are complaints about load during migration and immediately following migration - I'm afraid there's not much we can do. This is work that has to happen - but it will only happen once.
If there are complaints about base load - well then we need more information. Just giving us the information from htop or top isn't very useful - in the administration pages there is a process page that will give you an idea of how many workers have been created, and what processes are currently being run. There is also information about current load on the dashboard.
The provided logging gists are also unfortunately unhelpful. Although the default logging sets:
[database]
...
LOG_SQL=false
...
it would be more helpful to set:
[log]
MODE=console
LEVEL=debug
REDIRECT_MACARON_LOG=true
MACARON=console
ROUTER=console
or send these logs to files and provide them.
If there is genuinely increased base load it may be helpful to tune the default queue configurations - as it's possible that too many things are being attempted at once. Consider reducing the MAX_WORKERS for all queues to 1?
[queue]
...
MAX_WORKERS=1
Consider switching the default queue type to a level queue?
[queue]
TYPE=level
There is also the issue that Go 1.14 has caused increased issues with ASYNC_PREEMPT, which we have had to turn off. This could be causing issues with increased load? Compiling with Go 1.13 might help? |
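Putting those suggestions together, a minimal app.ini sketch for diagnosing a low-powered instance might look like the following. This is only an illustration assembled from the settings quoted above, not a configuration endorsed elsewhere in the thread, and the values should be adapted to the instance:
[log]
; debug-level console logging, as requested above, so useful logs can be captured
MODE=console
LEVEL=debug
REDIRECT_MACARON_LOG=true
MACARON=console
ROUTER=console
[queue]
; limit every queue to a single worker to reduce concurrent work
MAX_WORKERS=1
; use a disk-backed level queue instead of the in-memory persistable channel queue
TYPE=level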
@neiaberau have you restarted the two gitea instances after the migrations finished? |
In my case, it's about base load since upgrading to 1.12.0 (I'm currently running 1.12.2). This also occurs with MAX_WORKERS set to 1:
I haven't changed the default queue type. Unless the default type has changed in 1.12.0, I don't see how that could suddenly start causing issues? I've changed the log configuration as advised. The only thing of note is that
and there are also 3 occurrences of this stack trace:
Are there any binaries available of Gitea 1.12+ that were compiled with Go 1.13? |
I guess these are all different issues,
[queue.issue_indexer]
[queue] |
Don't set your workers to 0, nothing will get done. That is an absolutely and utterly terrible idea. Worse, you'll get deadlocks as the queues get full! |
You need to work out whether a particular queue is causing the problem or whether the default queue settings simply use too many resources. If I had to bet, I would guess that the repo stats indexer was using too much memory - AFAIU there was an issue which should have been resolved in the latest 1.12 version. You do not say which version you are using.
If, however, it's not a particular queue which is causing this problem, then it's worth noting that there are multiple different options for queues - the default settings are likely not correct for low-powered systems. For example: the default queue is the persistable channel queue - that involves a memory channel queue which gets dumped to a file level queue at shutdown. It's possible that you might be better off changing the type to a simple channel queue - with the potential loss of data on shutdown - or to a disk-only queue or redis queue. It might also be that the underlying length of the memory components for queues is too big and you need to set them smaller.
[repository]
...
; Mirror sync queue length, increase if mirror syncing starts hanging
MIRROR_QUEUE_LENGTH = 1000
; Patch test queue length, increase if pull request patch testing starts hanging
PULL_REQUEST_QUEUE_LENGTH = 1000
...
[webhook]
; Hook task queue length, increase if webhook shooting starts hanging
QUEUE_LENGTH = 1000
...
[task]
; Task queue length, available only when `QUEUE_TYPE` is `channel`.
QUEUE_LENGTH = 1000
...
[queue]
; Default queue length before a channel queue will block
LENGTH = 20
...
Reducing the length for most of these things would help reduce the amount of memory used. Switching to a disk queue or redis queue might help too. |
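As a rough sketch of what "smaller lengths plus a disk-backed queue" could look like - the reduced values below are illustrative only, not figures recommended anywhere in the thread:
[queue]
; disk-only level queue instead of the default persistable channel queue
TYPE=level
; smaller in-memory buffer before producers block (relevant to channel-backed queues)
LENGTH=10
[repository]
MIRROR_QUEUE_LENGTH = 100
PULL_REQUEST_QUEUE_LENGTH = 100
[webhook]
QUEUE_LENGTH = 100
The trade-off is more disk I/O in exchange for a smaller memory footprint.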
as I wrote above
setting workers to 1 seems still OK, as long as I keep type=dummy |
Using dummy as the queue type means that there are no workers and no work gets done. You cannot have dummy as your queue type. Have things improved with 1.12.5? My suspicion is that the repo stats indexer is to blame for the increased load. Does reducing the length of its queue help? |
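If a single queue does turn out to be the culprit, individual queues can be tuned through [queue.NAME] override sections - the [queue.issue_indexer] section mentioned earlier in the thread follows this pattern, and the repo stats queue would use its own name (visible on the admin monitoring page). A hedged sketch with illustrative values:
[queue.issue_indexer]
; example per-queue override; substitute the name of the queue that is misbehaving
LENGTH=10
MAX_WORKERS=1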
I just tried again with the latest 1.13.2 after using 1.11.8 for longer than I had wanted. It looks like @zeripath is correct and the high load on startup is caused by migration tasks. I have 512 MB RAM and a large swap file, so when I first started 1.13.2 wait-io was very high and it used 500 MB+ of RAM. After about 30 min it stabilized, and subsequent startups were all fast; it only uses about 200 MB of RAM now. I also adjusted the number of workers, as suggested above, but can't say if it made a big difference:
|
And you could change the default |
Gitea version (or commit ref):
1.12.0, 1.12.1 (problematic)
1.11.8 and earlier (ok)
Git version:
git version 2.26.2
Operating system:
Linux version 3.10.105 (root@build4) (gcc version 4.9.3 20150311 (prerelease) (crosstool-NG 1.20.0) ) #25426 SMP Tue May 12 04:42:24 CST 2020
Database (use [x]):
Can you reproduce the bug at https://try.gitea.io:
Log gist:
https://gist.github.com/ledofnrn/84ddc286b35e72f931c27046b4f9b4df
Description
On a Synology DS216j (2-core Armada CPU, 512 MB RAM) all versions of gitea up to 1.11.8 have been working very well. After upgrading to 1.12.0 (the same with 1.12.1) the load increases hugely:
which causes the gitea web page to load extremely slowly:
Downgrading to 1.11.8 or earlier decreases the load to a reasonable level:
And the webpage opens promptly again:
...
In each case the gitea arm-6 build downloaded from the gitea download site was used.
Please let me know if you need any logs or setup/config details.
Screenshots