This repository has been archived by the owner on Apr 22, 2023. It is now read-only.
Closed
Description
I realized this while reviewing changes I want to make that were discussed on the mailing list.
Fast Buffers have a performance advantage because they are allocated out of a single larger pool buffer. But that also means that if even one byte of the pool is still referenced, the entire SlowBuffer backing it must stay in memory. The following script demonstrates the point:
// WARNING: This script will crash your computer!
// run with `./out/Release/node --expose-gc mem-test.js`
const SIZE = Buffer.poolSize - 2;
// if you want to test this without crashing your computer, reduce ITTR
const ITTR = 1e6;
var SlowBuffer = require('buffer').SlowBuffer;
var store = [];
var rss, tmp, i;
rss = process.memoryUsage().rss;
for (i = 0; i < ITTR; i++) {
  // was using this as a control
  //tmp = SlowBuffer(SIZE);
  store.push(Buffer(1));
  // comment out the following line and you'll see that the problem no
  // longer exists
  tmp = Buffer(SIZE);
  if (i % 1e3 === 0) gc();
}
setTimeout(function() {
  var nr = process.memoryUsage().rss;
  console.log(((nr - rss) / 1024 / 1024).toFixed(2));
}, 1000);
Basically every loop iteration ends up retaining Buffer.poolSize bytes of memory instead of just 1 byte, because the 1-byte Buffer keeps its parent pool alive. I'll write an http example later that exhibits the same behavior.
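As a minimal sketch of why that happens (assuming a Node.js version from this era where small Buffers are carved out of a shared pool and expose a parent property pointing at the pool slab), two tiny Buffers allocated back to back are views onto the same underlying slab, so keeping either one reachable keeps the whole slab alive:

var a = Buffer(1);
var b = Buffer(1);
// both 1-byte Buffers should be views onto the same pooled slab
console.log(a.parent === b.parent); // expected: true while the pool has room
// as long as `a` stays reachable, the entire pool it points at cannot be GC'd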