Memory Leak On Queries (Due To Caching/Certain Parse-Server And Node Versions) #6405

Closed
danielchangsoojones opened this issue Feb 12, 2020 · 5 comments · Fixed by #7214
Labels
type:bug Impaired feature or lacking behavior that is likely assumed

Comments

@danielchangsoojones

danielchangsoojones commented Feb 12, 2020

Issue Description

During busy periods (roughly 100 concurrent users), our Heroku application started to slow down dramatically and reached 1–2 GB of RAM on the server, which is an enormous amount for our small scale. Under heavy query load, responses became very slow or the Heroku server eventually crashed with an H10 error. When we analyzed the memory heap, it was apparent that, instead of allocating memory and returning to a normal baseline, the queries were growing memory linearly. Something was not being garbage collected, so memory increased every time we ran an API path. This was very consistent and easy to reproduce.

We found other GitHub issues describing how certain combinations of parse-server and Node versions have a memory leak where a query's cache is never garbage collected; Parse Server essentially creates a new cache for each individual query. One fix we found (and that other people have mentioned in GitHub issues) is to switch the cacheAdapter to Redis. This moves all of the caching logic into Redis, which does not exhibit the memory issue. Another reported fix is to land on a combination of parse-server and Node versions where the leak simply does not occur. We upgraded to the latest Node version (12.16.0, recommended for most users on the Node website) and the latest parse-server version (3.10.0), but the memory issue persisted on both latest versions.

So our current fix is to use Redis. Still, this seems like a major issue: the latest parse-server version (3.10.0) on the latest Node version (12.16.0) leaks memory on every query. Something like this should work right out of the box, and I'm wondering if others are having this issue also.
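For reference, the Redis workaround described above amounts to passing parse-server's documented RedisCacheAdapter as the cacheAdapter option. A minimal sketch, assuming REDIS_URL is the connection string provided by Heroku Redis and the remaining options are placeholders rather than the reporter's actual config:

```js
const ParseServer = require('parse-server').ParseServer;
const RedisCacheAdapter = require('parse-server').RedisCacheAdapter;

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI, // placeholder
  appId: process.env.APP_ID,             // placeholder
  masterKey: process.env.MASTER_KEY,     // placeholder
  serverURL: process.env.SERVER_URL,     // placeholder
  cloud: './cloud/main.js',
  // Move query/schema caching out of the Node process and into Redis.
  cacheAdapter: new RedisCacheAdapter({ url: process.env.REDIS_URL }),
});
```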

Steps to reproduce

  1. Set your parse-server version to 3.10.0
  2. Set your Node version to 12.16.0
  3. Run a local Heroku server (heroku local)
  4. Make a simple cloud call that runs a very simple query (see the sketch below)
  5. Print a heap measurement before and after the query runs
  6. Repeat the cloud call about 20 times and watch how the memory changes
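As a rough illustration of steps 4–6, a minimal Cloud Code function plus a heap printout could look like the sketch below. The class name TestObject and the function name leakTest are placeholders, not names from the original report; process.memoryUsage() is standard Node.

```js
// cloud/main.js — hypothetical repro sketch (names are placeholders).
Parse.Cloud.define('leakTest', async () => {
  const before = process.memoryUsage().heapUsed;

  // Step 4: a very simple query.
  const query = new Parse.Query('TestObject');
  const results = await query.find({ useMasterKey: true });

  // Step 5: print heap usage before and after the query.
  const after = process.memoryUsage().heapUsed;
  console.log(
    `heap before: ${(before / 1048576).toFixed(1)} MB, ` +
      `after: ${(after / 1048576).toFixed(1)} MB, results: ${results.length}`
  );
  return results.length;
});
```

Calling this function repeatedly (step 6), for example via the REST functions endpoint or Parse.Cloud.run from a client, should show the "heap before" number climbing steadily on an affected setup instead of settling back to a baseline.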

Expected Results

You would expect the heap dump to start at some baseline (~30 MB) and jump up a bit while a query runs (maybe ~35 MB). Then, when the API call finishes, the heap should return to the baseline (~30 MB) because Node should garbage collect. That way, every query uses some memory temporarily and the heap goes back to normal until the next query.
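One caveat when measuring: V8 garbage collection is non-deterministic, so the heap may not drop back to baseline immediately even on a healthy setup. A common trick (a measurement aid only, not part of any fix) is to start Node with --expose-gc and force a collection before sampling:

```js
// Start the process with: node --expose-gc index.js
// Forcing a collection before sampling gives a more stable baseline reading.
if (global.gc) {
  global.gc();
}
console.log(`heap: ${(process.memoryUsage().heapUsed / 1048576).toFixed(1)} MB`);
```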

Actual Outcome

It's not garbage collecting. The first query takes the heap from 30 MB to 35 MB, the next to 40 MB, then 45 MB, and so on. It just keeps growing linearly until it reaches around 2 GB, at which point everything crashes because that exceeds Heroku's default RAM limit. My theory is that it holds on to the cache of each query and never garbage collects it.

Environment Setup

  • Server

    • parse-server version (Be specific! Don't say 'latest'.) : 3.10.0
    • Operating System: macOS
    • Hardware: MacBook Pro 16-inch
    • Localhost or remote server? (AWS, Heroku, Azure, Digital Ocean, etc): Local Heroku server (the memory also builds up on the remote Heroku server, where it is harder to debug)
  • Database

    • MongoDB version: 3.6.12
    • Storage engine: Not sure
    • Hardware: Not sure
    • Localhost or remote server? (AWS, mLab, ObjectRocket, Digital Ocean, etc): mLab
@davimacedo
Member

Have you also tried the enableSingleSchemaCache option?
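For context, enableSingleSchemaCache is a boolean parse-server option in the 3.x line that shares a single schema cache across requests instead of creating one per request. A minimal sketch of where it goes (the surrounding options are placeholders, as in the earlier sketch):

```js
const ParseServer = require('parse-server').ParseServer;

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI, // placeholder
  appId: process.env.APP_ID,             // placeholder
  masterKey: process.env.MASTER_KEY,     // placeholder
  serverURL: process.env.SERVER_URL,     // placeholder
  // Use one shared schema cache instead of one per request (parse-server 3.x option).
  enableSingleSchemaCache: true,
});
```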

@danielchangsoojones
Author

Yes, we tried that. The actual fix was to spin up Heroku Redis and switch to the Redis cache adapter; after that, the memory issue went away. It was pretty easy to do, honestly, although it seems a little silly, since Parse Server should work right out of the box without memory issues.

@dplewis
Member

dplewis commented Feb 13, 2020

@danielchangsoojones We had this discussion previously and I came up with a solution. #6193 (comment)

It's on my to-do list; I'll try to get to it before the end of the month.

"Although, it seems a little silly, since Parse Server should work right out of the box without memory issues."

It did work out of the box without memory issues for a single instance. But what about multiple instances running against the same DB? That created an issue of over-validating the schema. Just a quick history lesson.

@stale

stale bot commented Nov 8, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Nov 8, 2020
@mtrezza mtrezza removed the stale label Nov 8, 2020
@mtrezza mtrezza added type:bug Impaired feature or lacking behavior that is likely assumed and removed 🧬 enhancement labels Mar 11, 2021
@mtrezza
Member

mtrezza commented Mar 16, 2021

@danielchangsoojones It would be great if you could try the new master branch and confirm whether #7214 solves the memory issue.
