FAQ
It means garbage collector.
ECMAScript / JavaScript is a garbage-collected language, which means you don't have to worry about releasing (and allocating) memory yourself; the garbage collector does this for you.
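As a small illustration (not tied to this library): you never free memory explicitly in JavaScript, you just drop references and let the gc decide when to reclaim the memory.

```js
// No manual free() needed: once nothing references the array anymore,
// the garbage collector is free to reclaim its memory at some later point.
let big = new Array(1_000_000).fill('x');
// ...use big...
big = null; // drop the only reference; when the gc actually runs is up to V8
```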
A simple question with a difficult answer. There are a lot of good resources available already that try to answer this question; here are a few:
- http://www.jayconrod.com/posts/55/a-tour-of-v8-garbage-collection
- https://v8.dev/blog/free-garbage-collection
- https://medium.com/@_lrlna/garbage-collection-in-v8-an-illustrated-guide-d24a952ee3b8
If you have a page that uses quite a bit of memory which can normally be released by the gc, this release of memory will (of course) also be reported by the stats event. In other words, it's normal for memory usage to be intermittently higher in between non-consecutive gc stat events. This interferes with the memory leak detection, which is based on consecutive growing stat events. The `useMovingAverage` option allows you to smooth out these peaks in memory usage. The higher the `useMovingAverage` value you set, the more often a possible memory leak will likely be detected.
Try to set `useMovingAverage` to a number higher than 0. This will smooth out intermittent heap usage.
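To illustrate the idea only (this is not the library's actual implementation), the sketch below shows how a plain moving average over heap-usage samples smooths out a single peak, so a steadily growing trend is not hidden by one intermittent spike. The window size of 5 is a hypothetical value.

```js
// Illustration only: a plain moving average over heapUsed samples.
// A single spike barely moves the averaged value, while a steadily
// growing trend (a possible leak) still shows up in consecutive averages.
const windowSize = 5; // hypothetical value, comparable to useMovingAverage > 0
const samples = [];

function smoothedHeapUsed() {
  samples.push(process.memoryUsage().heapUsed);
  if (samples.length > windowSize) samples.shift();
  return samples.reduce((sum, v) => sum + v, 0) / samples.length;
}

setInterval(() => {
  console.log('smoothed heapUsed:', Math.round(smoothedHeapUsed()), 'bytes');
}, 1000);
```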
Stats / numbers in red mean the value is lower than the previous stat event (shrinking), stats in green mean it's higher (growing).
The graph only updates every ~1 second; there have probably been multiple gc events in the meantime which on average grow, but the ones that are printed shrink. Check the stats for the number of full gc events. This is also related to `useMovingAverage`.
Probably because your Node.js application is too busy. Even when the garbage collector is asked to run, the V8 engine might decide it's too busy and postpone it.
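If you want to trigger the garbage collector programmatically from your own code (independent of this library), Node.js has to be started with the --expose-gc flag, which makes global.gc available. A minimal sketch, assuming forcing a full collection every 30 seconds is acceptable for your app:

```js
// Run with: node --expose-gc app.js
// global.gc is only defined when Node.js was started with --expose-gc.
setInterval(() => {
  if (typeof global.gc === 'function') {
    global.gc(); // request a full garbage collection; V8 may still postpone actual work if it is busy
  }
}, 30_000);
```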
Probably it isn't really hanging; it can just take a very long time for a heap dump and the subsequent heap diff to finish. In my experience, when your app is using over ~500MB, the heap dump will take too long to be worth waiting for. Try to trigger the gc to run more often, either programmatically with a hook in your code or by triggering the gc via the interrupt. You can also use `useMovingAverage` to try to detect a memory leak quicker.
If that also doesn't work, I'm afraid you need to look for an alternative, like using Node's inspection protocol (see the sketch below).
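As one concrete alternative (not part of this library), Node's built-in inspector module can write a heap snapshot to disk, which you can then load and compare in Chrome DevTools. A minimal sketch:

```js
// Writes a .heapsnapshot file via Node's built-in inspector protocol.
// Open the resulting file in Chrome DevTools (Memory tab) to inspect or compare snapshots.
const inspector = require('node:inspector');
const fs = require('node:fs');

const session = new inspector.Session();
session.connect();

const fd = fs.openSync('app.heapsnapshot', 'w');
session.on('HeapProfiler.addHeapSnapshotChunk', (message) => {
  fs.writeSync(fd, message.params.chunk);
});

session.post('HeapProfiler.takeHeapSnapshot', null, (err) => {
  session.disconnect();
  fs.closeSync(fd);
  console.log(err ? err : 'wrote app.heapsnapshot');
});
```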
As the docs state, if you use `gcMetric: true` the graph is only updated once a gc stat event is received. If you want to update the graph more frequently, you could trigger the gc by calling the interrupt.
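If you'd rather do this from inside your application than from a shell, you can send the interrupt signal to your own process. This assumes the interrupt is SIGUSR2, as in the shell examples below; the 5-second interval is just an example.

```js
// Assumption: the interrupt used is SIGUSR2, as in the watch/kill examples below.
// Send it to our own process every 5 seconds (hypothetical interval).
setInterval(() => process.kill(process.pid, 'SIGUSR2'), 5000);
```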
ℹ️ The pid of the process that started the graph is shown in the gc stats. E.g. the pid is `1234` when it shows `gc(#1234):`
On Linux you should be able to do that as follows:
$ watch -n1 -x -- kill -s SIGUSR2 $PID
where `$PID` is the pid of your node process.
If you only have one node application running, the following should work:
$ watch -x -n1 -- kill -s SIGUSR2 `pgrep -f "/[n]ode"`
ℹ️ If `kill` complains about `invalid signal number`, check which number the SIGUSR2 signal has on your system (with `kill -l`; by default it's 12) and use that directly: `watch -n1 -- kill -s12`