Description
🚨 As of Node 21.1.0, this has been fixed upstream. Hopefully the fixes will be backported to v18 and v20 as well (as of writing, Oct. 26 2023, they had not been; Node 18.20.0 and Node 20.10.0 now include the backports), but that is up to the Node.js project and nothing we control from here. Note that (native) ESM still has memory leaks - that can be tracked in #14605. If you're unable to upgrade your version of Node, you can use `--workerIdleMemoryLimit` in Jest 29 and later; see https://jestjs.io/docs/configuration/#workeridlememorylimit-numberstring 🚨
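A minimal config sketch of that workaround, assuming a CommonJS `jest.config.js`; the `512MB` value is only an example and should be tuned per suite:

```js
// jest.config.js - a minimal sketch of the --workerIdleMemoryLimit workaround
// mentioned above (Jest 29+). The '512MB' value is an example, not a recommendation.
module.exports = {
  // Recycle a worker once its memory usage crosses this limit, so the leak
  // cannot keep accumulating across the whole run.
  workerIdleMemoryLimit: '512MB',
};
```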
Version
27.0.6
Steps to reproduce
- Install the latest Node.js (16.11.0 or later) or use the appropriate Docker image
- Set up a project with a number of Jest tests (an illustrative test file is sketched after this list)
- Run `node --expose-gc node_modules/.bin/jest --logHeapUsage` and see how the memory consumption keeps increasing
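A sketch of the kind of test file such a project could contain (file and test names are made up); duplicating it many times gives `--logHeapUsage` a long run of test files to print a heap figure for:

```js
// example.test.js - an illustrative test file for the repro project (names are
// hypothetical). Duplicate it as example1.test.js, example2.test.js, ... so the
// per-file heap growth becomes visible in the --logHeapUsage output.
describe('example suite', () => {
  test('adds numbers', () => {
    expect(1 + 2).toBe(3);
  });
});
```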
Expected behavior
Since Jest calls `global.gc()` when the garbage collector is exposed and the `--logHeapUsage` flag is present, the memory usage should be stable.
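A rough illustration of that expectation (not Jest's actual source): force a collection whenever `--expose-gc` has made `global.gc` available, then sample the heap, so repeated samples should stay roughly flat if nothing leaks.

```js
// Rough sketch of the expectation above, not Jest's actual code: collect
// before sampling so the reported heap size per test file stays stable.
if (typeof global.gc === 'function') {
  global.gc();
}
const heapUsedMb = process.memoryUsage().heapUsed / 1024 / 1024;
console.log(`heap used: ${heapUsedMb.toFixed(2)} MB`);
```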
Actual behavior
The memory usage increases with every new test
Additional context
We had some issues with Jest workers consuming all available RAM, both on the CI machine and locally.
After doing some research, we found that if we run Jest as `node --expose-gc node_modules/.bin/jest --logHeapUsage`, the heap size remains stable. After upgrading to Node.js v16.11.0, the issue was back; Node v16.10.0 works fine. I believe something was accidentally introduced in the new Node release, but it might be useful to take a look at this from the Jest perspective in search of possible workarounds.
I'm also seeing the same behavior on my work machine, whose environment I'm pasting below 👇🏻
Environment
System:
OS: macOS 11.6
CPU: (8) x64 Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
Binaries:
Node: 16.11.0 - ~/.nvm/versions/node/v16.11.0/bin/node
Yarn: 1.22.0 - ~/SomeFancyDir/webapp/node_modules/.bin/yarn
npm: 8.0.0 - ~/.nvm/versions/node/v16.11.0/bin/npm
npmPackages:
jest: 27.0.6 => 27.0.6
Activity
rthreei commented on Nov 9, 2021
We're also experiencing this issue. Node 16.11+ and Jest v27 consume significantly more memory. Node 16.10 and Jest v27 seem OK.
blimmer commented on Nov 12, 2021
We're also unable to update to the LTS version of Node 16 (at the time of writing 16.13.0) because of this issue. We bisected the changes and identified that the upgrade from Node 16.10 to 16.11 caused our large Jest suite to hang indefinitely.
I took a look at the Node 16.11 changelog and I think the most likely culprit for this issue comes from the V8 update to 9.4 (PR). In V8 9.4, the new Sparkplug compiler is enabled by default (see also this Node issue).
I was hoping I could try disabling Sparkplug to verify that this is the issue. `node` exposes a V8 option to disable it (`--no-sparkplug`), but I don't think it's being passed through to the Jest workers when I pass it on the command line. I also tried setting the V8 option in `jest-environment-node` here: https://github.com/facebook/jest/blob/42b020f2931ac04820521cc8037b7c430eb2fa2f/packages/jest-environment-node/src/index.ts#L109, but I didn't see any change. I'm not sure if that means Sparkplug isn't causing the problem or if I'm not setting the V8 flag properly in the Jest workers.

@SimenB - I see you've committed a good deal to `jest-environment-node` - any tips for how I might pass that V8 flag down through all the workers? If it's possible (even via `patch-package` or something), I'd be happy to give it a shot on our test suite that's exhibiting the problem.

So I'm not exactly positive that this is the cause of the issue, but it seems like a potentially promising place to start looking.
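A minimal sketch of this kind of experiment (the filename and registration are assumptions, not the exact code referenced above): flip the flag from inside the worker process itself with Node's `v8.setFlagsFromString`, e.g. from a file registered in `setupFiles`.

```js
// flags.setup.js - a sketch of the experiment described above, not the exact
// code from the comment. The filename is hypothetical; it would be registered
// via setupFiles: ['<rootDir>/flags.setup.js'] in the Jest config. Note that
// V8 may ignore some flags set this late in the process lifetime.
const v8 = require('v8');

// Ask V8 to disable the Sparkplug baseline compiler inside this worker.
v8.setFlagsFromString('--no-sparkplug');
```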
SimenB commented on Nov 12, 2021
I would have thought https://github.com/facebook/jest/blob/e0b33b74b5afd738edc183858b5c34053cfc26dd/packages/jest-worker/src/workers/ChildProcessWorker.ts#L93-L94 made it so it was passed down...
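For context, a rough illustration of the kind of flag forwarding being referred to (an assumption about what the linked lines achieve, not a copy of Jest's source): `child_process.fork` lets the parent hand its own `execArgv` down to the worker, so flags like `--no-sparkplug` or `--expose-gc` given to the main process also apply to the workers.

```js
// Illustration only - not Jest's actual source. './worker.js' is a placeholder.
const {fork} = require('child_process');

const child = fork('./worker.js', [], {
  // Flags given to the main process (e.g. --no-sparkplug, --expose-gc) are
  // handed down so the worker runs with the same V8 configuration.
  execArgv: process.execArgv,
});

child.on('exit', (code) => console.log(`worker exited with code ${code}`));
```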
blimmer commented on Nov 12, 2021
It's possible the `sparkplug` thing is a red herring. Nothing else jumped out at me from that changelog.

I captured some heap size metrics to show the scale of the difference.
Node 16.10.0 (baseline)
Node 16.11.0 (problem introduced)
I'll log out the code you posted to make sure it's being passed down.
blimmer commented on Nov 12, 2021
Yep, @SimenB you were correct - the `--no-sparkplug` flag does appear to be making it down to the workers. Thanks for pointing me to that code.

@EternallLight and @rthreei - is there anything notable about your codebases that could potentially cause this issue? The only thing I can think of on my end is that we make heavy use of `async_hooks` and have historically had issues upgrading within the 16.x series related to them.

I'm sure the Jest team would love to have some kind of reproduction to look at this more closely, but I'm struggling to develop a small reproducible case.
rthreei commented on Nov 12, 2021
@blimmer nothing in particular is notable about our codebase. It's not a small codebase, but not very big either. Prior to 16.11 there were already known memory leaks when running the test suite (potentially around lodash); they may have been made worse by 16.11.
SimenB commented on Nov 15, 2021
If it's specifically in 16.11, you can probably try to build Node yourself and bisect https://github.com/nodejs/node/compare/v16.10.0..v16.11.0. Figuring out which commit introduced it might help us understand what one (or more) of your code, Node, and Jest is doing wrong 🙂
pustovalov commented on Nov 15, 2021
It looks like in the new version of Node, the garbage collector no longer clears memory until it reaches the limit.
RAM: 4GB
Node v14.17.3
Max memory usage:
full log:
https://gist.github.com/pustovalov/ad8bcd84b0b6bd6abec301982e03ed55
Node v16.13.0
Max memory usage:
full log:
https://gist.github.com/pustovalov/01cc1f484566f4b692fb405335b5f78d
Node v16.13.0 with `max-old-space-size`: based on how much memory the process was using under Node v14, a limit of 700mb was set
Max memory usage:
full log:
https://gist.github.com/pustovalov/a86545aa38708d6dda479ee9a1bc4e1d
If you set a limit equal to the machine's total RAM, the process will crash, because cleaning only begins once that limit is reached.
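A small sketch for sanity-checking which limit a worker actually ends up with (it can be dropped into any test or setup file):

```js
// Check the effective heap/old-space limit inside a worker, e.g. after
// passing --max-old-space-size=700 as described above.
const v8 = require('v8');

const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`effective heap size limit: ${limitMb.toFixed(0)} MB`);
```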
luiz290788 commented on Nov 15, 2021
I've tried that. I was able to reproduce the issue at the last commit of the V8 upgrade, so it looks like the problem is related to the V8 upgrade inside Node.
B4nan commented on Nov 16, 2021
Looks like I am hitting this as well. It happens only when collecting coverage (as it eats more memory), and it ends up in a hard failure. Locking the Node version to 16.10 helps; Node 17 fails the same way. It fails locally too.
Maybe one interesting note: it started to happen after I upgraded node-mongodb to v4, failed pipeline here.