ZPAQ compression improves with ability to send large blocks to backend #120
Closed · pete4abw started this conversation in Show and tell · Replies: 2 comments
- Pushed changes.
-
Branch zpaq_bs_fix has been removed. Merged. |
Beta Was this translation helpful? Give feedback.
0 replies
The ZPAQ backend compressor now accepts blocks that are larger than the given block size. Previously there were crashes when trying to send anything larger. This now appears to be resolved, but at the expense of speed and memory use! See the zpaq_bs_fix branch and try it out!

There was really one major change, which took me quite a few years to work out: making use of the `compressBlock` function directly and calling it from the `zpaq_compress` function. It sends the entire block to compress to the `StringBuffer` and then tries to free up some RAM. In addition, since I still can't get progress to print during compression, I added a new message line for ZPAQ to show when a thread returns from compression.
Since the 7.15 SDK of ZPAQ has 5 compression modes and the 5.0 SDK (used by `lrzip`) has only 3, direct comparisons are difficult. Basically, the compression models used in `lrzip` equate roughly to ZPAQ levels 3, 4, and 5 in the current SDK. Compression at the higher levels is better because the rzip preprocessor has more data to hash before sending the blocks of data to the backend. There remains an issue with memory use, and a lot of swap space gets used, but that's another problem for another day!

Example:
Source file: usrbin.tar, 1,681,039,360 bytes
Download, test, and comments welcome.