feat: teach ALPArray to store validity only in the encoded array #2216
Conversation
The patches are now always non-nullable. This required `PrimitiveArray::patch` to gracefully handle non-nullable patches when the array itself is nullable. I modified the benchmarks to include patch-manipulation time, but note that the test data has no patches, so the benchmarks measure only the overhead of `is_valid`. If we had test data in which the invalid positions contained exceptional values, I would expect a modest improvement in both decompression and compression time.
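A minimal sketch of that patching semantics, using illustrative names rather than the actual vortex API: because invalid exceptional positions are trimmed at compression time, the non-nullable patch values only ever land on positions that are already valid in the nullable target.

```rust
// Hypothetical sketch, not the vortex API: applying always-non-nullable
// patches to a nullable primitive array.
fn apply_patches<T: Copy>(
    values: &mut [T],
    validity: &[bool],         // per-element validity of the target array
    patch_positions: &[usize], // positions holding exceptional values
    patch_values: &[T],        // always non-nullable after this change
) {
    for (&pos, &val) in patch_positions.iter().zip(patch_values) {
        // Invalid exceptional positions are removed at compression time,
        // so every patch position should already be valid.
        debug_assert!(validity[pos]);
        values[pos] = val;
    }
}
```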
This reverts commit f26139f.
finish revert
encodings/alp/src/alp/array.rs
```rust
    vortex_bail!(MismatchedTypes: dtype, patches.dtype());
}

if patches.values().validity_mask()?.false_count() != 0 {
```
Calling `validity_mask` here triggers a "canonicalization" of the validity buffer. You should instead use `patches.values().all_valid()?`, which should short-circuit if possible.
Done, thanks.
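Concretely, the fix amounts to something like this diff; `all_valid` is the method named in the review, and the surrounding condition is the snippet above:

```diff
-if patches.values().validity_mask()?.false_count() != 0 {
+if !patches.values().all_valid()? {
```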
encodings/alp/Cargo.toml
```diff
@@ -17,6 +17,7 @@ readme = { workspace = true }
 workspace = true

 [dependencies]
+arrow-array = { workspace = true }
```
This is suspect?
Indeed cruft. Removed.
encodings/alp/src/alp/compress.rs
```rust
// exceptional_positions may contain exceptions at invalid positions (which contain garbage
// data). We remove invalid exceptional positions in order to keep the Patches small.
let (valid_exceptional_positions, valid_exceptional_values): (Buffer<u64>, Buffer<T>) =
    if n_valid == 0 {
```
I think you can do `match validity.boolean_buffer()` to switch over all-true / all-false and a buffer.
Done.
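A self-contained sketch of the shape that match takes. The `AllOr` enum below is a hypothetical stand-in for however the validity exposes its three cases, not a vortex type:

```rust
// Hypothetical stand-in: a validity that is all-true, all-false, or
// backed by a per-element boolean buffer.
enum AllOr<T> {
    AllTrue,
    AllFalse,
    Buffer(T),
}

// Keep only the exceptional positions that land on valid elements.
fn trim_exceptional_positions(validity: AllOr<Vec<bool>>, positions: Vec<u64>) -> Vec<u64> {
    match validity {
        // Every element is valid: keep all exceptional positions.
        AllOr::AllTrue => positions,
        // No element is valid: no patches survive.
        AllOr::AllFalse => Vec::new(),
        // Mixed: keep only positions whose validity bit is set.
        AllOr::Buffer(bits) => positions
            .into_iter()
            .filter(|&p| bits[p as usize])
            .collect(),
    }
}
```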
```rust
    );
    assert_eq!(encoded.exponents(), Exponents { e: 16, f: 13 });

    let decoded = decompress(encoded).unwrap();
    assert_eq!(values.as_slice(), decoded.as_slice::<f64>());
}

#[test]
#[allow(clippy::approx_constant)] // Clippy objects to 2.718, an approximation of e, the base of the natural logarithm.
```
That's so funny
This PR trims invalid values from the patches and makes the patches' validity either AllValid (for nullable arrays) or NonNullable.
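As a sketch of that rule; the `Validity` enum and variant names below follow the wording above and are illustrative, not a confirmed vortex signature:

```rust
// Illustrative only: variant names mirror the description above.
#[derive(Debug, PartialEq)]
enum Validity {
    NonNullable,
    AllValid,
}

// The patches never carry their own nulls; nullability of the overall
// array lives entirely in the encoded child.
fn patches_validity(array_is_nullable: bool) -> Validity {
    if array_is_nullable {
        Validity::AllValid
    } else {
        Validity::NonNullable
    }
}
```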
This microbenchmark doesn't reveal any clear improvements or degradations; it looks mostly like noise. In theory, this change should make decompression a bit faster because validity lives in one place, but my primary goal here is to make the ALP array simpler: validity is stored in exactly one place, the encoded array.
Benchmarks on latest commit:
The parameter is (number of elements, fraction patched, fraction valid). Any ratio greater than 1.1 or less than 0.9 is marked with ***.
Benchmarks before reverting to develop's chunking code
[1] This PR seems to perform about the same, except for compressing very large f64 arrays. The PR that introduced chunking, #924, reported substantially larger time reductions (~5 ms of 29 ms) than this increase of ~1 ms (of 17 ms).