Without personal insight into how the existing benchmarks compare apples-to-apples, this still looks compelling enough to at least consider. When I searched I was pleased to find concurrent thinking!
I can't speak to the Java side, but I did read up on zstd this weekend, and they're putting up Pied Piper numbers, Weissman scores in the fives:
For reference, several fast compression algorithms were tested and compared on a desktop running Ubuntu 20.04 (Linux 5.11.0-41-generic), with a Core i7-9700K CPU @ 4.9GHz, using lzbench, an open-source in-memory benchmark by @inikep compiled with gcc 9.3.0, on the Silesia compression corpus.
The negative compression levels, specified with --fast=#, offer faster compression and decompression speed at the cost of compression ratio (compared to level 1). Zstd can also offer stronger compression ratios at the cost of compression speed; the speed vs. compression trade-off is configurable in small increments. Decompression speed is preserved and remains roughly the same at all settings, a property shared by most LZ compression algorithms, such as zlib or lzma.
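
To make that level knob concrete, here is a minimal sketch assuming the third-party zstd-jni bindings (`com.github.luben:zstd-jni`); the class names and methods are from that library, while the payload and chosen levels are made up for illustration:

```java
import com.github.luben.zstd.Zstd;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ZstdLevelDemo {
    public static void main(String[] args) {
        // Illustrative payload; real data would come off the wire.
        byte[] input = "some repetitive payload to compress "
                .repeat(2000)
                .getBytes(StandardCharsets.UTF_8);

        int fastest = Zstd.minCompressionLevel();   // negative, i.e. the --fast range
        int strongest = Zstd.maxCompressionLevel(); // typically 22

        // Raising the level trades compression speed for ratio;
        // lowering it (or going negative) trades ratio for speed.
        for (int level : new int[] { fastest, 1, 3, strongest }) {
            byte[] compressed = Zstd.compress(input, level);
            System.out.printf("level %3d -> %d bytes (%.1f%% of original)%n",
                    level, compressed.length,
                    100.0 * compressed.length / input.length);
        }

        // The simple one-shot API needs the original size to decompress.
        byte[] roundTrip = Zstd.decompress(Zstd.compress(input, 3), input.length);
        System.out.println("round trip ok: " + Arrays.equals(input, roundTrip));
    }
}
```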
Gzip compression in the standard Java libraries (java.util.zip) is too slow for reconnect. Consider faster alternative implementations.
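
On the Java side, a hedged sketch of what swapping java.util.zip's gzip streams for zstd streams could look like, again assuming the zstd-jni bindings (ZstdOutputStream / ZstdInputStream) rather than anything reconnect ships today:

```java
import com.github.luben.zstd.ZstdInputStream;
import com.github.luben.zstd.ZstdOutputStream;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class StreamCompressionSketch {

    // Current approach: JDK gzip, no extra dependency, but CPU-heavy.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
            out.write(data);
        }
        return bos.toByteArray();
    }

    // Candidate alternative: zstd at a low level, tuned for speed
    // (3 is zstd's default; lower or negative levels are faster still).
    static byte[] zstdCompress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZstdOutputStream out = new ZstdOutputStream(bos, 3)) {
            out.write(data);
        }
        return bos.toByteArray();
    }

    static byte[] zstdDecompress(byte[] data) throws IOException {
        try (ZstdInputStream in = new ZstdInputStream(new ByteArrayInputStream(data))) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "reconnect state blob ".repeat(5000).getBytes(StandardCharsets.UTF_8);
        System.out.println("gzip: " + gzip(payload).length + " bytes");
        byte[] z = zstdCompress(payload);
        System.out.println("zstd: " + z.length + " bytes, round trip ok: "
                + (zstdDecompress(z).length == payload.length));
    }
}
```

The stream constructor takes the same level knob, so the small-increment speed/ratio trade-off described above carries over to the streaming path as well.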