I performed a few benchmarks to see how much faster lz4 would be compared to gzip when decompressing an incoming HTTP payload. I'm not sure if I'm doing something wrong, but it seems that this library is a lot slower than just using plain gzip.
I had three test JSON docs of different sizes, which I compressed with both LZ4 and gzip:
Uncompressed     LZ4              gzip
7,369 bytes      3,975 bytes      2,772 bytes
73,723 bytes     33,028 bytes     21,790 bytes
716,995 bytes    311,365 bytes    202,697 bytes
The LZ4 version was compressed using default options:
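Roughly along these lines, assuming the library's encode() API with no options passed (the file names here are just placeholders):

'use strict'

const fs = require('fs')
const lz4 = require('lz4')

// Read the raw JSON document and compress it with default options
const input = fs.readFileSync('uncompressed.json')
const compressed = lz4.encode(input)

fs.writeFileSync('uncompressed.json.lz4', compressed)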
The gzip version was compressed from my macOS command line:
gzip uncompressed.json
I used autocannon to hammer a test HTTP server with the compressed document. The server would decompress the payload but otherwise discard it afterwards.
Here's an example of how autocannon was configured:
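Something along these lines, using autocannon's programmatic API (the payload file name is a placeholder, and the header/body were swapped for the gzip runs; the duration matches the 10s runs below):

'use strict'

const fs = require('fs')
const autocannon = require('autocannon')

autocannon({
  url: 'http://localhost:3000',
  method: 'POST',
  // set to 'gzip' and the .gz payload for the gzip runs
  headers: { 'content-encoding': 'lz4' },
  body: fs.readFileSync('uncompressed.json.lz4'),
  duration: 10
}, function (err, result) {
  if (err) throw err
  console.log(result)
})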
And here's my test server running on localhost:

'use strict'

const http = require('http')
const zlib = require('zlib')
const lz4 = require('lz4')

const server = http.createServer(function (req, res) {
  const enc = req.headers['content-encoding'] || ''
  let decompressed

  if (/\bgzip\b/.test(enc)) {
    decompressed = req.pipe(zlib.createGunzip())
  } else if (/\blz4\b/.test(enc)) {
    decompressed = req.pipe(lz4.createDecoderStream())
  } else {
    decompressed = req
  }

  const buffers = []
  decompressed.on('data', buffers.push.bind(buffers))
  decompressed.on('end', function () {
    const data = Buffer.concat(buffers)
    res.end()
  })
})

server.listen(3000, function () {
  console.log('Server listening on http://localhost:3000')
})
Test 1 - Decompressing a 7,369 byte JSON document
LZ4 (3,975 bytes):
Stat Avg Stdev Max
Latency (ms) 23.29 9.31 61.38
Req/Sec 419.7 13.36 434
Bytes/Sec 41.4 kB 1.31 kB 43 kB
4k requests in 10s, 416 kB read
Gzip (2,772 bytes):
Stat Avg Stdev Max
Latency (ms) 1.07 0.67 13.48
Req/Sec 7064.4 704.93 7733
Bytes/Sec 699 kB 67.8 kB 766 kB
71k requests in 10s, 6.99 MB read
Test 2 - Decompressing a 73,723 byte JSON document
LZ4 (33,028 bytes):
Stat Avg Stdev Max
Latency (ms) 23.28 8.94 55.9
Req/Sec 419.8 11.45 435
Bytes/Sec 41.8 kB 1.1 kB 43.1 kB
4k requests in 10s, 416 kB read
Gzip (21,790 bytes):
Stat Avg Stdev Max
Latency (ms) 2.7 1.61 21.23
Req/Sec 3131 105.16 3342
Bytes/Sec 313 kB 13.1 kB 331 kB
31k requests in 10s, 3.1 MB read
Test 3 - Decompressing a 716,995 byte JSON document
On a large document like this, the difference between gzip and lz4 is much smaller, but gzip still wins:
LZ4 (311,365 bytes):
Stat Avg Stdev Max
Latency (ms) 41.56 13.21 102
Req/Sec 237.6 6.95 250
Bytes/Sec 23.7 kB 819 B 24.8 kB
2k requests in 10s, 235 kB read
Gzip (202,697 bytes):
Stat Avg Stdev Max
Latency (ms) 26.11 6.51 137.09
Req/Sec 375.4 7.61 381
Bytes/Sec 37.5 kB 819 B 37.7 kB
4k requests in 10s, 372 kB read