Description
Version
v17.2.0
Platform
20.6.0 Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64 x86_64
Subsystem
worker_threads
What steps will reproduce the bug?
index.mjs

```js
import cluster from 'cluster';
import { Worker } from 'worker_threads';

if (cluster.isPrimary) {
  // Fork a cluster worker whose heap is capped via NODE_OPTIONS.
  cluster.fork({ NODE_OPTIONS: '--max-old-space-size=512' });

  const worker = new Worker('./worker.mjs', {
    resourceLimits: {
      maxOldGenerationSizeMb: 200,
    },
  });
  worker.postMessage('main');
} else {
  const worker = new Worker('./worker.mjs', {
    resourceLimits: {
      maxOldGenerationSizeMb: 200,
    },
  });
  worker.postMessage('worker');
}
```
worker.mjs

```js
import v8 from 'v8';
import { parentPort } from 'worker_threads';

parentPort.on('message', (type) => {
  const { heap_size_limit } = v8.getHeapStatistics();
  console.log(`[${type}] heap_size_limit: ${heap_size_limit}`);
});
```
Output:

```
[main] heap_size_limit: 260046848
[worker] heap_size_limit: 587202560
```
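To separate "the limit was never recorded" from "the limit was recorded but not applied", it may help to read back `worker.resourceLimits`, which echoes the values passed to the constructor. A minimal sketch (the eval'd worker body is just a placeholder to avoid a separate file):

```javascript
import { Worker } from 'worker_threads';

// Placeholder worker body; eval: true runs the string instead of a file.
const worker = new Worker('setTimeout(() => {}, 100);', {
  eval: true,
  resourceLimits: { maxOldGenerationSizeMb: 200 },
});

// resourceLimits echoes what the constructor received, so a 200 here
// means the option was at least registered, even if V8 later ignores it.
console.log(worker.resourceLimits.maxOldGenerationSizeMb);
```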
How often does it reproduce? Is there a required condition?
It reproduces whenever a `worker_threads` Worker is created inside a cluster worker process: the Worker's resourceLimits option is ignored.
What is the expected behavior?
Expected output:

```
[main] heap_size_limit: 260046848
[worker] heap_size_limit: 260046848
```
What do you see instead?
The worker thread's max old space size appears to be inherited from the parent (cluster) process, and the resourceLimits option I passed is ignored.
Additional information
This behavior can cause a memory leak in the worker thread to be handled incorrectly: the worker will only be terminated when its old heap space reaches the cluster process's limit, not the limit I passed via resourceLimits.
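For comparison, here is a single-process version of the same check with no cluster involved, the worker.mjs logic inlined via eval (a sketch; the eval'd body uses require because eval'd worker code runs as CommonJS). Per the `[main]` line of the output above, this case reports the limit derived from resourceLimits:

```javascript
import { Worker } from 'worker_threads';

// Inline equivalent of worker.mjs: report the effective heap limit.
const body = `
  const v8 = require('v8');
  const { parentPort } = require('worker_threads');
  parentPort.postMessage(v8.getHeapStatistics().heap_size_limit);
`;

const worker = new Worker(body, {
  eval: true,
  resourceLimits: { maxOldGenerationSizeMb: 200 },
});

worker.on('message', (limit) => {
  // In the report's [main] case this was 260046848 bytes (~248 MB).
  console.log(`heap_size_limit: ${limit}`);
  worker.terminate();
});
```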