Implement lazy loading for inline Arrow results #1029
Conversation
This PR introduces lazy loading support for inline Arrow results to improve memory efficiency when handling large result sets.
Previously, InlineChunkProvider would eagerly fetch all arrow batches upfront when results had hasMoreRows = true, which could lead to memory issues with large datasets. This change splits the handling into two separate paths:
1. Lazy path (new): For Thrift-based inline Arrow results (when ARROW_BASED_SET is returned), we now use LazyThriftInlineArrowResult which fetches arrow batches on-demand as the client iterates through rows. This is similar to how LazyThriftResult works for columnar data.
2. Remote path (existing): For URL-based Arrow results (URL_BASED_SET), we continue using ArrowStreamResult with RemoteChunkProvider which downloads chunks from cloud storage.
The InlineChunkProvider is now only used for SEA results with JSON_ARRAY format and INLINE disposition, which contain all data inline (no hasMoreRows flag set).
This should reduce memory consumption and improve performance when dealing with large inline Arrow result sets.
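The on-demand fetching described above can be sketched as follows. This is a minimal illustration of the lazy pattern, not the driver's actual LazyThriftInlineArrowResult code; the class and supplier names are hypothetical, and each supplier call stands in for one FetchResults round-trip.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

// Minimal sketch of on-demand batch fetching (illustrative names, not the
// driver's API): a batch is fetched from the server only once the client has
// consumed all previously buffered rows, instead of buffering everything upfront.
public class LazyRows implements Iterator<String> {
  private final Supplier<List<String>> fetchNextBatch; // empty list => exhausted
  private final Deque<String> buffered = new ArrayDeque<>();
  private boolean exhausted = false;

  public LazyRows(Supplier<List<String>> fetchNextBatch) {
    this.fetchNextBatch = fetchNextBatch;
  }

  @Override
  public boolean hasNext() {
    while (buffered.isEmpty() && !exhausted) {
      List<String> batch = fetchNextBatch.get(); // lazy fetch happens here
      if (batch.isEmpty()) {
        exhausted = true;
      } else {
        buffered.addAll(batch);
      }
    }
    return !buffered.isEmpty();
  }

  @Override
  public String next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return buffered.removeFirst();
  }
}
```

At most one batch is resident at a time, which is the memory win over the eager path.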
I need to make some changes related to JDBC spec around row count because we don't have that data point when lazily fetching the results. |
This PR has been marked as Stale because it has been open for 30 days with no activity. If you would like the PR to remain open, please remove the stale label or comment on the PR. |
This PR was closed because it has been inactive for 7 days since being marked as stale. |
public Object getObject(int columnIndex) throws DatabricksSQLException {
  if (isClosed) {
    throw new DatabricksSQLException(
        "Result is already closed", DatabricksDriverErrorCode.STATEMENT_CLOSED);
Can we add logging in these error cases?
// Check if we've reached the maxRows limit
boolean hasRowLimit = maxRows > 0;
if (hasRowLimit && globalRowIndex + 1 >= maxRows) {
So globalRowIndex 0 means the 1st row, and maxRows - 1 would be the last row.
At the last row, globalRowIndex + 1 == maxRows, which means no more rows. We could just check == instead of >=; any reason you are doing that?
  return null;
}
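The boundary in question can be exercised in isolation. This is a hedged sketch: reachedRowLimit is a hypothetical extraction of the snippet above, assuming maxRows == 0 means no limit (the JDBC Statement.setMaxRows convention).

```java
// Hypothetical extraction of the row-limit check. globalRowIndex is 0-based,
// so the last permitted row has index maxRows - 1; at that point
// globalRowIndex + 1 == maxRows. Using >= is a defensive superset of == that
// also fires if the index ever skips past the limit.
public class RowLimit {
  static boolean reachedRowLimit(long globalRowIndex, long maxRows) {
    boolean hasRowLimit = maxRows > 0; // maxRows == 0 means unlimited (JDBC setMaxRows convention)
    return hasRowLimit && globalRowIndex + 1 >= maxRows;
  }
}
```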
private Schema hiveSchemaToArrowSchema(TTableSchema hiveSchema)
Are we not getting an arrowSchema for inline arrow results?
      currentChunk.numRows,
      currentResponse.hasMoreRows);
} catch (DatabricksSQLException e) {
  LOGGER.error("Failed to fetch next arrow chunk: {}", e.getMessage());
We may have still loaded the data by this point; can we check and clean that up? Otherwise it can cause a memory leak here.
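The cleanup being suggested might look like the following sketch: if a later step fails after the chunk's buffers were already allocated, close the chunk before rethrowing so the partially loaded data is released. Chunk here is a self-contained stand-in, not the driver's class.

```java
// Hypothetical sketch of cleanup-on-failure for a partially loaded chunk.
public class SafeFetch {
  static class Chunk implements AutoCloseable {
    boolean closed = false;
    final boolean corrupt;
    Chunk(boolean corrupt) { this.corrupt = corrupt; }
    void decodeArrowBatches() {
      if (corrupt) throw new IllegalStateException("decode failed");
    }
    @Override public void close() { closed = true; }
  }

  static Chunk lastAllocated; // exposed only so the behavior can be checked

  static Chunk fetch(boolean corrupt) {
    Chunk chunk = new Chunk(corrupt); // memory is notionally held from here on
    lastAllocated = chunk;
    try {
      chunk.decodeArrowBatches();     // may throw after allocation
      return chunk;
    } catch (RuntimeException e) {
      chunk.close();                  // release buffers on the failure path
      throw e;
    }
  }
}
```

The same shape applies whether the failure is in decoding or in a partial network read: anything allocated before the throw gets closed on the error path.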
if (result == null) {
  return null;
}
ComplexDataTypeParser parser = new ComplexDataTypeParser();
do we need to create this for every getObject call?
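The reviewer's suggestion would look roughly like this sketch: hold one parser per result object instead of constructing a new one on every getObject call. ComplexDataTypeParser below is a stand-in for the driver's class, assumed here to keep no per-call state.

```java
// Hypothetical sketch: cache the parser as a field rather than allocating it
// inside getObject. The instance counter exists only to make the reuse visible.
public class CachedParserResult {
  static class ComplexDataTypeParser {
    static int instances = 0;
    ComplexDataTypeParser() { instances++; }
    Object parse(Object raw) { return String.valueOf(raw); }
  }

  private final ComplexDataTypeParser parser = new ComplexDataTypeParser(); // created once

  public Object getObject(Object raw) {
    if (raw == null) {
      return null;
    }
    return parser.parse(raw); // reuse the cached instance on every call
  }
}
```

This only works if the parser is stateless (or safely resettable) between calls, which would be worth confirming before making the change.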
Description
This will reduce memory consumption and improve performance when dealing with large inline Arrow result sets, similar to #975.
Testing
Additional Notes to the Reviewer