Memory Leak On Queries (Due To Caching/Certain Parse-Server And Node Versions) #6405
Comments
Have you also tried the
Yes, we tried that. The actual fix was to spin up Heroku Redis and switch to the Redis cache adapter; the memory issue then just goes away. It was pretty easy to do, honestly. Although it seems a little silly, since Parse Server should work right out of the box without memory issues.
@danielchangsoojones We had this discussion previously and I came up with a solution: #6193 (comment). It's on my to-do list; I'll try to get to it before the end of the month.
It did work out of the box without memory issues for a single instance. But what about multiple instances running against the same DB? That created an issue of over-validating the schema. Just a quick history lesson.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@danielchangsoojones It would be great if you could try the new master branch and confirm whether #7214 solves the memory issue. |
Issue Description
During busy times (around 100 concurrent users), our Heroku application started to get really slow. It was reaching 1 GB - 2 GB of RAM on our server (a pretty insane amount for our small scale). Under heavy query load it would get super slow or eventually crash the Heroku dyno with an H10 error. As we started analyzing the memory heap, it became apparent that, instead of allocating memory and returning to the normal baseline, the queries were building up memory linearly. Something was not getting garbage collected, so memory grew every time we hit an API path. This happened very consistently and was easily reproducible.

We found other GitHub issues describing how certain combinations of Parse Server and Node versions leak memory because a query's cache is never garbage collected; Parse Server basically creates a new cache for each individual query. One fix we found (and that other people have mentioned in GitHub issues) was to switch our cacheAdapter to Redis. This moves all the caching logic into Redis, which does not seem to have this memory issue. Others reported specific Parse Server and Node versions where the leak did not happen. We upgraded to the latest Node version (12.16.0, which is recommended for most users on the Node website) and the latest Parse Server version (3.10.0). Unfortunately, we still had the memory issue on both latest versions.

So our current fix is to just use Redis. But it seems like a major issue that the latest Parse Server version (3.10.0) on the latest Node version (12.16.0) leaks memory on every query. Something like this should work right out of the box, and I'm wondering if others are having this issue too.
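For reference, the Redis workaround described above can be sketched roughly like this. This is a minimal sketch, assuming the connection details live in environment variables (those names are illustrative, not from the original report); `RedisCacheAdapter` is exported by the parse-server package.

```javascript
// Sketch: moving Parse Server's in-memory caching to Redis via cacheAdapter.
const { ParseServer, RedisCacheAdapter } = require('parse-server');

// Heroku Redis exposes its connection string in an env var (name assumed here).
const redisCache = new RedisCacheAdapter({ url: process.env.REDIS_URL });

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI, // assumption: set in the environment
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  cacheAdapter: redisCache, // query/schema caching now lives in Redis, not the Node heap
});
```

With the cache externalized, the per-query cache objects no longer accumulate in the Node process, which matches the behavior reported above.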
Steps to reproduce
Expected Results
You would expect the heap dump to start at some baseline (~30 MB), jump a bit when you run a query (maybe ~35 MB), and then return to baseline (~30 MB) once the API call finishes, because Node should garbage collect. That way, each query uses some memory temporarily and everything goes back to normal until the next query.
Actual Outcome
It's not garbage collecting. The first query takes you from 30 MB to 35 MB, the next to 40 MB, then 45 MB, and so on. Memory keeps growing linearly until it reaches around 2 GB, at which point everything crashes because that exceeds the Heroku default RAM limit. My theory is that the server is holding onto each query's cache and never garbage collecting it.
Environment Setup
Server
Database