
Conversation

@jayantsing-db
Collaborator

Description

This PR introduces lazy loading support for inline Arrow results to improve memory efficiency when handling large result sets.

Previously, InlineChunkProvider would eagerly fetch all Arrow batches upfront when results had hasMoreRows = true, which could lead to memory issues with large datasets. This change splits the handling into two separate paths:

  1. Lazy path (new): For Thrift-based inline Arrow results (when ARROW_BASED_SET is returned), we now use LazyThriftInlineArrowResult, which fetches Arrow batches on demand as the client iterates through rows. This is similar to how LazyThriftResult works for columnar data.
  2. Remote path (existing): For URL-based Arrow results (URL_BASED_SET), we continue using ArrowStreamResult with RemoteChunkProvider which downloads chunks from cloud storage.

The InlineChunkProvider is now only used for SEA results with JSON_ARRAY format and INLINE disposition, where all data arrives inline and no hasMoreRows flag is set.
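For illustration, a minimal sketch of this three-way routing; the row-set types and the result-class names in the strings come from this PR, while RowSetType, ResultRouter, and route are hypothetical stand-ins:

// Hypothetical routing sketch; only ARROW_BASED_SET, URL_BASED_SET, JSON_ARRAY
// and the result-class names come from this PR.
enum RowSetType { ARROW_BASED_SET, URL_BASED_SET, JSON_ARRAY }

final class ResultRouter {
  static String route(RowSetType type) {
    switch (type) {
      case ARROW_BASED_SET:
        // Thrift inline Arrow: fetch batches lazily while the client iterates.
        return "LazyThriftInlineArrowResult";
      case URL_BASED_SET:
        // Cloud-fetch links: download chunks via RemoteChunkProvider.
        return "ArrowStreamResult + RemoteChunkProvider";
      default:
        // JSON_ARRAY with INLINE disposition: all data is already present.
        return "InlineChunkProvider";
    }
  }

  public static void main(String[] args) {
    for (RowSetType type : RowSetType.values()) {
      System.out.println(type + " -> " + route(type));
    }
  }
}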

This will reduce memory consumption and improve performance when dealing with large inline Arrow result sets, similar to #975.

Testing

  • Unit tests
  • Integration tests
  • Manual testing

Additional Notes to the Reviewer

jayantsing-db and others added 2 commits September 30, 2025 14:15
@jayantsing-db
Collaborator Author

I need to make some changes related to the JDBC spec around row count, because we don't have that data point when fetching results lazily.

@github-actions

This PR has been marked as Stale because it has been open for 30 days with no activity. If you would like the PR to remain open, please remove the stale label or comment on the PR.

github-actions bot added the Stale label Oct 31, 2025
@github-actions

github-actions bot commented Nov 7, 2025

This PR was closed because it has been inactive for 7 days since being marked as stale.

github-actions bot closed this Nov 7, 2025
jayantsing-db reopened this Dec 1, 2025
public Object getObject(int columnIndex) throws DatabricksSQLException {
  if (isClosed) {
    throw new DatabricksSQLException(
        "Result is already closed", DatabricksDriverErrorCode.STATEMENT_CLOSED);
Collaborator


Can we add logging in these error cases?
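A minimal sketch of what that could look like; LOGGER matches the logger used later in this PR, and the message text is illustrative:

if (isClosed) {
  // Log before throwing so misuse of a closed result shows up in driver logs.
  LOGGER.error("getObject called on a closed result, columnIndex={}", columnIndex);
  throw new DatabricksSQLException(
      "Result is already closed", DatabricksDriverErrorCode.STATEMENT_CLOSED);
}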


// Check if we've reached the maxRows limit
boolean hasRowLimit = maxRows > 0;
if (hasRowLimit && globalRowIndex + 1 >= maxRows) {
Collaborator


So globalRowIndex 0 means the 1st row, and index maxRows - 1 would mean the last row.

So at the last row, globalRowIndex + 1 == maxRows, which means no more rows. We could just check == instead of >=; any reason you are using >=?

  return null;
}
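To make the == vs >= question above concrete, a tiny standalone walk-through (the values are made up):

// Assuming maxRows = 3 and a 0-based globalRowIndex advancing one row at a time.
public class MaxRowsBoundary {
  public static void main(String[] args) {
    long maxRows = 3;
    for (long globalRowIndex = 0; globalRowIndex < 4; globalRowIndex++) {
      boolean limitReached = globalRowIndex + 1 >= maxRows;
      // Prints: 0 -> false, 1 -> false, 2 -> true, 3 -> true.
      // With single-step advances, == and >= agree; >= only differs if the
      // index could ever jump past the limit.
      System.out.println(globalRowIndex + " -> " + limitReached);
    }
  }
}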

private Schema hiveSchemaToArrowSchema(TTableSchema hiveSchema)
Collaborator


Are we not getting an arrowSchema for inline Arrow results?
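Presumably the Thrift path only returns a TTableSchema, so the driver has to derive the Arrow schema itself. A sketch of such a conversion, assuming the standard Hive Thrift types (TTableSchema, TColumnDesc, TTypeId) and Arrow's pojo schema classes; only a few primitive mappings are shown:

import java.util.ArrayList;
import java.util.List;
import org.apache.arrow.vector.types.FloatingPointPrecision;
import org.apache.arrow.vector.types.pojo.ArrowType;
import org.apache.arrow.vector.types.pojo.Field;
import org.apache.arrow.vector.types.pojo.FieldType;
import org.apache.arrow.vector.types.pojo.Schema;

// Sketch: map each Hive Thrift column to a nullable Arrow field.
private Schema hiveSchemaToArrowSchema(TTableSchema hiveSchema) {
  List<Field> fields = new ArrayList<>();
  for (TColumnDesc column : hiveSchema.getColumns()) {
    TTypeId typeId =
        column.getTypeDesc().getTypes().get(0).getPrimitiveEntry().getType();
    ArrowType arrowType;
    switch (typeId) {
      case INT_TYPE:
        arrowType = new ArrowType.Int(32, true); // signed 32-bit
        break;
      case BIGINT_TYPE:
        arrowType = new ArrowType.Int(64, true);
        break;
      case DOUBLE_TYPE:
        arrowType = new ArrowType.FloatingPoint(FloatingPointPrecision.DOUBLE);
        break;
      default:
        arrowType = ArrowType.Utf8.INSTANCE; // everything else as string here
        break;
    }
    fields.add(new Field(column.getColumnName(), FieldType.nullable(arrowType), null));
  }
  return new Schema(fields);
}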

      currentChunk.numRows,
      currentResponse.hasMoreRows);
} catch (DatabricksSQLException e) {
  LOGGER.error("Failed to fetch next arrow chunk: {}", e.getMessage());
Collaborator


We may have already loaded some data by this point; can we check for that and clean it up? Otherwise this can leak memory.
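A hedged sketch of that cleanup; currentChunk comes from the snippet above, while the close() call assumes the chunk type owns, and can release, its Arrow buffers:

} catch (DatabricksSQLException e) {
  LOGGER.error("Failed to fetch next arrow chunk: {}", e.getMessage());
  // Assumed cleanup: release any buffers fetched before the failure so a
  // failed fetch cannot strand direct memory. close() is hypothetical here.
  if (currentChunk != null) {
    currentChunk.close();
    currentChunk = null;
  }
  throw e;
}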

if (result == null) {
  return null;
}
ComplexDataTypeParser parser = new ComplexDataTypeParser();
Collaborator


Do we need to create this on every getObject call?
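If ComplexDataTypeParser is stateless, it could presumably be hoisted out of the per-call path; a minimal sketch, where the field name and the parse(...) call are illustrative stand-ins:

// Reuse one parser per result object instead of allocating per getObject call.
// Assumes ComplexDataTypeParser is stateless; parse(...) is a hypothetical name.
private final ComplexDataTypeParser complexTypeParser = new ComplexDataTypeParser();

private Object convertIfComplex(Object result) {
  if (result == null) {
    return null;
  }
  return complexTypeParser.parse(result);
}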
