# Investigate options for improving performance when reading decimals from Parquet #679
Comments
This seemingly points the finger at …

It might be that the assignment to …

Comet first reads the integer value by calling …

EDIT: Oh, it is Scala …
It looks like pure scan is not a problem, based on profiling a BaselineScan-only vs. enabledScan comparison (the name says add_many_decimals, but I was too lazy to change it).

Now I remember that we need to increase the minimum number of iterations to get stable results. We used to use a remote Linux machine that was stable, so 2–3 iterations were fine, but for local testing we need to increase it. For the third screenshot, I used 33 iterations.
## Which issue does this PR close?

Part of #679 and #670. Related: #490

## Rationale for this change

For dictionary decimal vectors, we were unpacking even for Int- and Long-backed decimals, which used more memory than necessary.

## What changes are included in this PR?

Unpack only for Decimal128.

## How are these changes tested?

Existing tests.
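The idea in this PR can be sketched as follows. This is an illustration only, not Comet's actual API: `needsUnpacking` and `readLongDecimal` are hypothetical names. The point is that a dictionary-encoded decimal vector is only materialized ("unpacked") when the precision exceeds what a long can hold (Decimal128); smaller decimals read through the dictionary indices, avoiding a 16-byte copy per row.

```java
// Hypothetical sketch: unpack dictionary values only for Decimal128.
public class DecimalDictionarySketch {
    static final int MAX_LONG_DIGITS = 18; // max decimal precision that fits in a long

    static boolean needsUnpacking(int precision) {
        return precision > MAX_LONG_DIGITS; // only Decimal128 is materialized
    }

    // Reading through the dictionary: indices[row] selects a dictionary entry,
    // so no per-row unpacking is needed for int/long-backed decimals.
    static long readLongDecimal(long[] dictionary, int[] indices, int row) {
        return dictionary[indices[row]];
    }

    public static void main(String[] args) {
        System.out.println(needsUnpacking(18)); // false: fits in a long, keep dictionary
        System.out.println(needsUnpacking(38)); // true: Decimal128, unpack
        long[] dict = {100L, 250L};
        int[] idx = {1, 0, 1};
        System.out.println(readLongDecimal(dict, idx, 0)); // 250
    }
}
```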
I see different results in profiling. I ran a simple query: …

What stands out is that the bulk of the time is being spent in the …. Overall, Comet is 0.4x of Spark.

I made a change to …. However, I don't know the best way to avoid the ….

Also, with …
## Which issue does this PR close?

Part of #679 and #670

## Rationale for this change

The improvement could be negligible in real use cases, but I see some improvements in micro-benchmarks.

## What changes are included in this PR?

Optimizations in some bit functions.

## How are these changes tested?

Existing tests.
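The PR above does not show which bit functions were optimized, so the following is only an illustration of the kind of routine involved in decimal reading: converting a Parquet FIXED_LEN_BYTE_ARRAY value (big-endian two's complement) into the unscaled long value of a decimal.

```java
// Illustrative only: a typical bit-manipulation helper in a decimal read path.
public class DecimalBits {
    // Convert big-endian two's-complement bytes to an unscaled long.
    static long bytesToUnscaledLong(byte[] bytes) {
        long value = bytes[0] < 0 ? -1L : 0L; // sign-extend from the top byte
        for (byte b : bytes) {
            value = (value << 8) | (b & 0xFFL); // shift in each byte
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(bytesToUnscaledLong(new byte[]{0x01, 0x00}));               // 256
        System.out.println(bytesToUnscaledLong(new byte[]{(byte) 0xFF, (byte) 0xFE})); // -2
    }
}
```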
We created many fixes; closing for now.
(The two PR descriptions above were cherry-picked to another branch as commits c1b7c7d and ffb96c3.)
## What is the problem the feature request solves?

This issue is for discussing the comment from @parthchandra in #671 (comment).

## Describe the potential solution

No response

## Additional context

No response