Memory Leak / Memory "not recycled" in Actix 3.3 #1943
Here is a heaptrack of the example code I provided, for a single invocation.
The first thing you can do is to stream the response instead of sending it as one giant chunk. There is no hard limit on outgoing response chunk size, so you have to take care of it yourself and not shoot yourself in the foot. Memory not being recycled is a known issue, and in most cases the memory is not leaked. Usually you end up with what you use most of the time, and there is no constant growth of memory like the other issue talks about.
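To illustrate the streaming suggestion (this is a sketch, not the reporter's actual code): a handler along these lines emits the body as many small chunks, so the peak allocation is roughly one chunk rather than the whole response. The route shape and chunk contents here are made up for illustration.

```rust
use actix_web::{web, Error, HttpResponse};
use bytes::Bytes;
use futures::stream;

// Hypothetical handler: emits n newline-delimited JSON records as a
// stream of small chunks instead of building one giant body in memory.
async fn stream_rows(path: web::Path<usize>) -> HttpResponse {
    let n = path.into_inner();
    let body = stream::iter((0..n).map(|i| {
        Ok::<Bytes, Error>(Bytes::from(format!("{{\"row\":{}}}\n", i)))
    }));
    HttpResponse::Ok()
        .content_type("application/x-ndjson")
        .streaming(body)
}
```

`HttpResponse`'s builder accepts any `Stream` of `Result<Bytes, E>` via `streaming`, so backpressure from the client naturally limits how much of the body is resident at once.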
I see streaming as orthogonal. Regardless of a streamed response, the service does not free the memory it uses. The moment I need to create an ephemeral data structure I'm left with its footprint on the heap forever. That doesn't sound right to me. Example: modify the code above to not return the data, only print the length of it, and return "hello" instead.
This still clogs up the same amount of memory, even though the function should exit and the memory should be released.
Like I said, memory not being recycled is a known issue. You can work around it or not.
Where is the issue tracked? I cannot find it on GitHub. |
You can read the issue you linked and my previous reply in this thread. |
@0snap Can you try the latest beta? It has a very positive benefit for us with a lot of connected clients. Still far from ideal but we're looking into it. |
@therealprof I tested with `actix-web = "4.0.0-beta.4"`; the leak persists. Recap: the code simply builds an in-memory structure and prints its length. The variables remain on the heap even after the request returns. The application was eventually killed by my OS's OOM killer.
Like I said, this is not a leak. The memory is not recycled with every request, and that's the issue here. Try limiting your threads with HttpServer::workers and see if it's a constant leak. It looks like a leak because you are running a server with multiple threads, and the threads take turns handling requests. Eventually you end up with a stable memory usage, once every worker has taken its turn. The best thing you can do for now is to not naively allocate too much memory, and to use streams to properly limit your memory usage.
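For reference, the worker-count experiment suggested above can be set up like this (a minimal sketch, not the reporter's code; the bind address is arbitrary):

```rust
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // With a single worker, every request is served by the same thread,
    // so the high-water mark is not multiplied across per-thread heaps.
    // If usage still grows without bound here, that would be a true leak.
    HttpServer::new(|| App::new())
        .workers(1) // default is one worker per logical CPU
        .bind("127.0.0.1:8080")?
        .run()
        .await
}
```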
Now it will never recycle the memory. But even if we do not naively allocate too much memory, the server will run out of memory at some point, which is not expected.
How would your server run out of memory? The memory that is not recycled will be re-used for future requests. I believe you should get the issue straight here. The problem is that the server stays at the peak memory usage of the largest allocation you make. It's NOT a constant leak. Don't get me wrong: this is an issue, but wording the issue wrongly is not a good thing.
To offer a slightly different perspective: while it might be true that this is not a leak, it can lead to a problem with overcommitting a machine's resources in a multi-tenant situation. For us, we have many services running on a single host, with memory quotas (both soft and hard) for each service. The expectation is that any service could claim higher-than-expected resources (exceeding the soft quota, not the hard limit) for a short time, but we hope that many services will not claim higher-than-expected resources at the same time. With the memory for our actix-web services tracking the high-water mark, this moves us closer to overcommitting. A short-period high-mem status becomes a long-term high-mem status. To use a medical analogy, it's a bit like plaque building up on your artery walls, weakening your survival chances during a cardiac event.
Surely it's an issue. No one is denying it exists, and no one is saying it isn't a problem. My point is: let's describe the issue as it is, so it can be focused on and figured out quicker, without side-tracking or not knowing where to look. Progress has already been made on reducing the memory footprint, and more will follow. But ultimately actix-web does not leak memory, and there is only so much we can do at the library level, so this will be hard and require time and effort. Making false claims is not helping. It's also worth adding that I'm not saying people are making false claims on purpose. I know issues like this can get mixed up and give a false impression at first glance, and I'm trying to explain the situation, not blame people for wording it wrongly. If I sound like I'm assigning blame, then I'm sorry; that's not my intention.
Possibly related to #1780
I have an `actix-web` service that creates new data structures in memory based on a user's request, transforms everything into JSON, and returns the result. The memory allocated per request is never released. This rapidly clogs up the host's memory, directly depending on the size of the newly allocated data. This behavior is particularly bad for requests that initiate heavy work and consume gigabytes of RAM.
Expected Behavior
Once a request returns, all allocated memory should be freed.
Current Behavior
Allocated memory is not freed even after the route has returned the result to the user.
Possible Solution
Sorry, I have no proper solution at hand.
Steps to Reproduce (for bugs)
The following web service allocates a data structure depending on the user's request and returns a result in JSON. You can easily fire it up to consume lots of RAM by supplying a big number:
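The original code block did not survive extraction. Based on the description (two routes, one named `serde` and one named `lame`, each allocating an amount of memory proportional to a user-supplied number), a reconstruction might look roughly like this; the `Item` struct, its field names, and the path pattern `/{n}` are my own invention:

```rust
use actix_web::{get, web, App, HttpResponse, HttpServer};
use serde::Serialize;

#[derive(Serialize)]
struct Item {
    id: usize,
    name: String,
}

// Allocates n structs, then serializes them all into one JSON body.
#[get("/serde/{n}")]
async fn serde_route(path: web::Path<usize>) -> HttpResponse {
    let n = path.into_inner();
    let items: Vec<Item> = (0..n)
        .map(|id| Item { id, name: format!("item-{}", id) })
        .collect();
    HttpResponse::Ok().json(items)
}

// Allocates a single string of n bytes and returns it directly.
#[get("/lame/{n}")]
async fn lame_route(path: web::Path<usize>) -> HttpResponse {
    let n = path.into_inner();
    HttpResponse::Ok().body("x".repeat(n))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(serde_route).service(lame_route))
        .bind("127.0.0.1:8080")?
        .run()
        .await
}
```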
Invoke it via `curl localhost:8080/serde/1000000 > /dev/null`. I built two routes, because they allocate different amounts of memory: `serde` uses a bit more under the hood than the `lame` one. This directly correlates to the memory consumed by the service. When you invoke the route multiple times with high values (10000000 and up) you can quickly exhaust the host.
Context
I work on a web API for a database. The database client is invoked as a subcommand; the service parses the process' `stdout` and returns the result in JSON to the caller. Depending on the user-defined query, the database can easily return a couple of hundred MBs. The service mangles that to JSON and then has substantially less memory available than before the request.
Your Environment
Arch Linux
Linux tnz-490s 5.9.14-arch1-1 #1 SMP PREEMPT Sat, 12 Dec 2020 14:37:12 +0000 x86_64 GNU/Linux
Rust version (output of `rustc -V`): rustc 1.51.0-nightly (7a9b552cb 2021-01-12)