[ARROW] Use KyuubiArrowConveters#toBatchIterator instead of ArrowConveters#toBatchIterator #4754
Conversation
LGTM (pending CI)
Codecov Report

```
@@             Coverage Diff             @@
##             master    #4754    +/-   ##
==========================================
  Coverage     58.06%   58.07%
  Complexity       13       13
==========================================
  Files           581      581
  Lines         32325    32338      +13
  Branches       4308     4311       +3
==========================================
+ Hits          18771    18780       +9
+ Misses        11752    11751       -1
- Partials       1802     1807       +5
```

... and 8 files with indirect coverage changes
This reverts commit e32311a.
[ARROW] Use `KyuubiArrowConveters#toBatchIterator` instead of `ArrowConveters#toBatchIterator`

### _Why are the changes needed?_

To adapt to Spark 3.4: the signature of `ArrowConveters#toBatchIterator` was changed in apache/spark#38618 (since Spark 3.4).

Before Spark 3.4:

```
private[sql] def toBatchIterator(
    rowIter: Iterator[InternalRow],
    schema: StructType,
    maxRecordsPerBatch: Int,
    timeZoneId: String,
    context: TaskContext): Iterator[Array[Byte]]
```

Spark 3.4:

```
private[sql] def toBatchIterator(
    rowIter: Iterator[InternalRow],
    schema: StructType,
    maxRecordsPerBatch: Long,
    timeZoneId: String,
    context: TaskContext): ArrowBatchIterator
```

The type of `maxRecordsPerBatch` changed from `Int` to `Long`, and the return type changed from `Iterator[Array[Byte]]` to `ArrowBatchIterator`.

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly, including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [x] [Run tests](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #4754 from cfmcgrady/arrow-spark34.

Closes #4754

a3c58d0 [Fu Chen] fix ci
32704c5 [Fu Chen] Revert "fix ci"
e32311a [Fu Chen] fix ci
a76af62 [Cheng Pan] Update externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/spark/sql/kyuubi/SparkDatasetHelper.scala
453b6a6 [Cheng Pan] Update externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/spark/sql/kyuubi/SparkDatasetHelper.scala
74a9f7a [Cheng Pan] Update externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/spark/sql/kyuubi/SparkDatasetHelper.scala
4ce5844 [Fu Chen] adapt Spark 3.4

Lead-authored-by: Fu Chen <cfmcgrady@gmail.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
(cherry picked from commit d0a7ca4)
Signed-off-by: Cheng Pan <chengpan@apache.org>
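The approach named in the title is for Kyuubi to own a copy of the converter (`KyuubiArrowConveters#toBatchIterator`) with a stable signature, rather than calling Spark's private API whose signature drifts between versions. Below is a minimal, self-contained sketch of that shim pattern; the object names, the `Int`-to-`Byte` batching, and the stand-in "old API" are all illustrative assumptions, not the actual Kyuubi or Spark code:

```scala
// Stand-in for a pre-3.4-style API that takes `maxRecordsPerBatch: Int`
// (illustrative only; the real method operates on InternalRow and Arrow batches).
object LegacySparkApi {
  def toBatchIterator(rows: Iterator[Int], maxRecordsPerBatch: Int): Iterator[Array[Byte]] =
    rows.grouped(maxRecordsPerBatch).map(g => g.map(_.toByte).toArray)
}

// Engine-side shim: one signature the engine controls, so a Spark upgrade
// (Int -> Long parameter, changed return type) does not break callers.
object KyuubiArrowShim {
  def toBatchIterator(rows: Iterator[Int], maxRecordsPerBatch: Long): Iterator[Array[Byte]] =
    rows.grouped(maxRecordsPerBatch.toInt).map(g => g.map(_.toByte).toArray)
}

object Demo extends App {
  val sizes = KyuubiArrowShim.toBatchIterator(Iterator(1, 2, 3, 4, 5), 2L).toList.map(_.length)
  println(sizes) // List(2, 2, 1)
}
```

The shim always exposes `Iterator[Array[Byte]]`, so call sites such as `SparkDatasetHelper` stay identical across Spark 3.1 through 3.4; only the shim's internals track upstream changes.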
Thanks, merged to master/1.7