[SQL] SPARK-1964 Add timestamp to hive metastore type parser. #913
Status: Closed

Conversation
Merged build triggered.
Merged build started.
Merged build finished. All automated tests passed.
All automated tests passed.
Merging this into master & branch-1.0. Thanks!
pdeyhim pushed a commit to pdeyhim/spark-1 that referenced this pull request on Jun 25, 2014:
Author: Michael Armbrust <michael@databricks.com>

Closes apache#913 from marmbrus/timestampMetastore and squashes the following commits:

8e0154f [Michael Armbrust] Add timestamp to hive metastore type parser.
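The change itself is small: Hive describes column types as strings such as `string`, `int`, or `timestamp`, and Spark's metastore type parser must recognize each keyword. Below is a minimal sketch of such a parser using Scala parser combinators, the approach Spark's `HiveMetastoreTypes` used at the time; the simplified `DataType` objects here are illustrative stand-ins, not Spark's actual Catalyst classes.

```
import scala.util.parsing.combinator.RegexParsers

// Illustrative stand-ins for Spark's Catalyst types; not the real hierarchy.
sealed trait DataType
case object StringType    extends DataType
case object IntegerType   extends DataType
case object TimestampType extends DataType

// A minimal metastore type parser in the style of HiveMetastoreTypes.
object MetastoreTypeParser extends RegexParsers {
  protected lazy val primitiveType: Parser[DataType] =
    "string"    ^^^ StringType |
    "int"       ^^^ IntegerType |
    "timestamp" ^^^ TimestampType  // the kind of case this PR adds

  def toDataType(metastoreType: String): DataType =
    parseAll(primitiveType, metastoreType) match {
      case Success(result, _) => result
      case failure: NoSuccess => sys.error(s"Unsupported dataType: $metastoreType")
    }
}
```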
xiliu82 pushed a commit to xiliu82/spark that referenced this pull request on Sep 4, 2014:
Author: Michael Armbrust <michael@databricks.com>

Closes apache#913 from marmbrus/timestampMetastore and squashes the following commits:

8e0154f [Michael Armbrust] Add timestamp to hive metastore type parser.
agirish pushed a commit to HPEEzmeral/apache-spark that referenced this pull request on May 5, 2022:
…pache#913)

* MapR [SPARK-953] Investigate and add all needed changes for Spark services (apache#905)
* [EZSPA-347] Find a way to pass sensitive configs in secure manner (apache#907)
* MapR [SPARK-961] Spark job can't be properly killed using yarn API or CLI (apache#908)
* MapR [SPARK-962] MSSQL can not handle SQL syntax which is used in Spark (apache#909)
* MapR [SPARK-963] select from hbase table which was created via hive fails (apache#910)

Co-authored-by: Dmitry Popkov <91957973+d-popkov@users.noreply.github.com>
Co-authored-by: Andrew Khalymon <andrew.khalymon@hpe.com>
wangyum pushed a commit that referenced this pull request on May 26, 2023:
* [SPARK-36992][SQL] Improve byte array sort perf by unifying the getPrefix function of UTF8String and ByteArray

### What changes were proposed in this pull request?
Unify the getPrefix function of `UTF8String` and `ByteArray`.

### Why are the changes needed?
When executing the sort operator, we first compare prefixes. However, the getPrefix function for byte arrays is slow: we use the first 8 bytes as the prefix, so at worst we call `Platform.getByte` 8 times, which is slower than a single call to `Platform.getInt` or `Platform.getLong`.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Passes `org.apache.spark.util.collection.unsafe.sort.PrefixComparatorsSuite`.

Closes #34267 from ulysses-you/binary-prefix.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>

* [SPARK-37037][SQL] Improve byte array sort by unifying the compareTo function of UTF8String and ByteArray

### What changes were proposed in this pull request?
Unify the compare function of `UTF8String` and `ByteArray`.

### Why are the changes needed?
`BinaryType` uses `TypeUtils.compareBinary` to compare two byte arrays, but it is slow since it compares the arrays with unsigned int comparison byte by byte. We can instead compare them using `Platform.getLong` with unsigned long comparison when they have more than 8 bytes. Some history on this `TODO`: https://github.com/apache/spark/pull/6755/files#r32197461. The benchmark result should be the same as for `UTF8String`; see #19180 (#19180 (comment)).

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Moved the test from `TypeUtilsSuite` to `ByteArraySuite`.

Closes #34310 from ulysses-you/SPARK-37037.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
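Both PRs rest on the same trick: read 8-byte words instead of single bytes. The following is a minimal, self-contained sketch of the technique, not the actual patch; Spark performs the word read in one memory access via `Platform.getLong`, while this sketch assembles the word manually so it runs without Spark internals.

```
import java.lang.Long.compareUnsigned

object ByteArrayOps {
  // Big-endian word: earlier bytes land in higher bits, so unsigned 64-bit
  // comparison of words agrees with lexicographic byte comparison.
  private def wordAt(bytes: Array[Byte], offset: Int): Long = {
    var word = 0L
    var i = 0
    while (i < 8 && offset + i < bytes.length) {
      word |= (bytes(offset + i) & 0xffL) << (56 - 8 * i)
      i += 1
    }
    word
  }

  // One 8-byte read replaces up to eight single-byte reads.
  def getPrefix(bytes: Array[Byte]): Long = wordAt(bytes, 0)

  def compareBinary(x: Array[Byte], y: Array[Byte]): Int = {
    val len = math.min(x.length, y.length)
    var i = 0
    while (i + 8 <= len) {            // full words: unsigned 64-bit compare
      val cmp = compareUnsigned(wordAt(x, i), wordAt(y, i))
      if (cmp != 0) return cmp
      i += 8
    }
    while (i < len) {                 // tail: unsigned byte-by-byte compare
      val cmp = (x(i) & 0xff) - (y(i) & 0xff)
      if (cmp != 0) return cmp
      i += 1
    }
    x.length - y.length              // equal prefixes: shorter array sorts first
  }
}
```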
* [SPARK-37341][SQL] Avoid unnecessary buffer and copy in full outer sort merge join

### What changes were proposed in this pull request?
FULL OUTER sort merge join (the non-code-gen path) copies join keys and buffers input rows even when rows from both sides have no matching keys (https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala#L1637-L1641). This is unnecessary: we can just output the row with the smaller join keys, and only buffer when both sides have matching keys. This saves unnecessary copying and buffering when both join sides have many rows that do not match each other.

### Why are the changes needed?
Improve query performance for FULL OUTER sort merge join when code-gen is disabled. This benefits queries where both sides have many unmatched rows and the join key is large (e.g. string type).

Example micro benchmark:

```
def sortMergeJoin(): Unit = {
  val N = 2 << 20
  codegenBenchmark("sort merge join", N) {
    val df1 = spark.range(N).selectExpr(s"cast(id * 15485863 as string) as k1")
    val df2 = spark.range(N).selectExpr(s"cast(id * 15485867 as string) as k2")
    val df = df1.join(df2, col("k1") === col("k2"), "full_outer")
    assert(df.queryExecution.sparkPlan.find(_.isInstanceOf[SortMergeJoinExec]).isDefined)
    df.noop()
  }
}
```

Seeing a run-time improvement of over 60%:

```
Running benchmark: sort merge join
  Running case: sort merge join without optimization
  Stopped after 5 iterations, 10026 ms
  Running case: sort merge join with optimization
  Stopped after 5 iterations, 5954 ms

Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13 on Mac OS X 10.16
Intel(R) Core(TM) i9-9980HK CPU 2.40GHz
sort merge join:                       Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
---------------------------------------------------------------------------------------------------------------------
sort merge join without optimization            1807           2005         157         1.2         861.4       1.0X
sort merge join with optimization               1135           1191          62         1.8         541.1       1.6X
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Existing unit tests, e.g. `OuterJoinSuite.scala`.

Closes #34612 from c21/smj-fix.

Authored-by: Cheng Su <chengsu@fb.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-37447][SQL] Cache LogicalPlan.isStreaming() result in a lazy val

### What changes were proposed in this pull request?
This PR adds caching to `LogicalPlan.isStreaming()`: the default implementation's result is now cached in a `private lazy val`.

### Why are the changes needed?
This improves the performance of the `DeduplicateRelations` analyzer rule. The default implementation of `isStreaming` recursively visits every node in the tree. `DeduplicateRelations.renewDuplicatedRelations` is recursively invoked on every node in the tree, and each invocation calls `isStreaming`. This leads to `O(n^2)` invocations of `isStreaming` on leaf nodes. Caching `isStreaming` avoids this performance problem.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Correctness should be covered by existing tests. This significantly improved `DeduplicateRelations` performance in local microbenchmarking with large query plans (~20% reduction in that rule's runtime in one of my tests).

Closes #34691 from JoshRosen/cache-LogicalPlan.isStreaming.

Authored-by: Josh Rosen <joshrosen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
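The caching pattern itself is easy to illustrate. Below is a hypothetical miniature, not Catalyst's actual `TreeNode` machinery: the recursive scan is memoized per node in a `lazy val`, so repeated calls during analysis cost O(1) per node after the first.

```
abstract class LogicalPlan {
  def children: Seq[LogicalPlan]

  // Hook for leaf nodes that are themselves streaming sources.
  protected def selfStreaming: Boolean = false

  // Before the fix this was a def, recomputed as a full subtree scan on every
  // call; as a lazy val the scan runs at most once per node.
  final lazy val isStreaming: Boolean =
    selfStreaming || children.exists(_.isStreaming)
}

case class StreamingRelation() extends LogicalPlan {
  def children: Seq[LogicalPlan] = Nil
  override protected def selfStreaming: Boolean = true
}

case class Join(left: LogicalPlan, right: LogicalPlan) extends LogicalPlan {
  def children: Seq[LogicalPlan] = Seq(left, right)
}
```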
* [SPARK-37530][CORE] Spark reads many paths very slowly through newAPIHadoopFile

### What changes were proposed in this pull request?
Same as #18441, we parallelize `FileInputFormat.listStatus` for `newAPIHadoopFile`.

### Why are the changes needed?
Spark can be slow when accessing external storage on the driver side; this improves performance by parallelizing the listing.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Passing GA.

Closes #34792 from yaooqinn/SPARK-37530.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>

* [SPARK-37592][SQL] Improve performance of `JoinSelection`

### What changes were proposed in this pull request?
While reading the implementation of AQE, I found that the process of selecting a join with hints involves a lot of cumbersome code. Join hints also have a relatively high learning curve for users, so in most cases SQL does not contain join hints.

### Why are the changes needed?
Improve the performance of `JoinSelection`.

### Does this PR introduce _any_ user-facing change?
No. This only changes the internal implementation.

### How was this patch tested?
Jenkins tests.

Closes #34844 from beliefer/SPARK-37592-new.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-37646][SQL] Avoid touching Scala reflection APIs in the lit function

### What changes were proposed in this pull request?
This PR proposes to avoid touching Scala reflection APIs in the `lit` function.

### Why are the changes needed?
Currently `lit` calls `typedlit[Any]` and touches Scala reflection APIs unnecessarily. Scala reflection APIs take multiple global locks and are quite slow when parallelism is high. This PR inlines `typedlit` into `lit` and replaces `Literal.create` with `Literal.apply` to avoid touching Scala reflection APIs. There is no behavior change.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- New unit tests.
- Manually ran the test in https://issues.apache.org/jira/browse/SPARK-37646 and saw no difference between `new Column(Literal(0L))` and `lit(0L)`.

Closes #34901 from zsxwing/SPARK-37646.

Lead-authored-by: Shixiong Zhu <zsxwing@gmail.com>
Co-authored-by: Shixiong Zhu <shixiong@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
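The reflection-free path can be sketched with a toy `Literal`; the simplified types below are illustrative, not Spark's. Inferring the type by pattern-matching on the value's runtime class requires no `TypeTag` and touches none of the global locks that Scala reflection takes.

```
sealed trait DataType
case object IntType    extends DataType
case object LongType   extends DataType
case object StringType extends DataType

case class Literal(value: Any, dataType: DataType)

object Literal {
  // Reflection-free construction: dispatch on the value's runtime class,
  // the way Literal.apply works, instead of going through Scala reflection
  // to infer the type, the way Literal.create does.
  def apply(v: Any): Literal = v match {
    case i: Int    => Literal(i, IntType)
    case l: Long   => Literal(l, LongType)
    case s: String => Literal(s, StringType)
    case other     => sys.error(s"Unsupported literal type: $other")
  }
}
```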
* [SPARK-37689][SQL] Expand should be supported in PropagateEmptyRelation

### What changes were proposed in this pull request?
We met a case where, given an empty relation, `HashAggregateExec` was still triggered and returned an empty result; that execution is unnecessary. It was caused by an `Expand(EmptyLocalRelation())` that was not propagated. This PR supports propagating `Expand` with an empty `LocalRelation`.

### Why are the changes needed?
Avoid unnecessary execution.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Added UT.

Closes #34954 from AngersZhuuuu/SPARK-37689.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-36406][CORE] Avoid unnecessary file operations before deleting a write-failed file held by DiskBlockObjectWriter

### What changes were proposed in this pull request?
We always truncate the file before deleting a write-failed file held by `DiskBlockObjectWriter`. A typical code path:

```
if (!success) {
  // This code path only happens if an exception was thrown above before we set success;
  // close our stuff and let the exception be thrown further
  writer.revertPartialWritesAndClose()
  if (file.exists()) {
    if (!file.delete()) {
      logWarning(s"Error deleting ${file}")
    }
  }
}
```

The `revertPartialWritesAndClose` method reverts writes that haven't been committed yet, but that does not seem necessary here since the file is deleted right after. So this PR adds a new method to `DiskBlockObjectWriter` named `closeAndDelete()`; the new method just reverts the write metrics and deletes the write-failed file.

### Why are the changes needed?
Avoid unnecessary file operations.

### Does this PR introduce _any_ user-facing change?
No. It adds a new method to `DiskBlockObjectWriter` named `closeAndDelete()`.

### How was this patch tested?
Passes Jenkins or GitHub Actions.

Closes #33628 from LuciferYang/SPARK-36406.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: attilapiros <piros.attila.zsolt@gmail.com>

* [SPARK-37462][CORE] Avoid unnecessarily calculating the number of outstanding fetch requests and RPCs

### What changes were proposed in this pull request?
Avoid unnecessarily calculating the number of outstanding fetch requests and RPCs.

### Why are the changes needed?
It is unnecessary to calculate the number of outstanding fetch requests and RPCs when the IdleStateEvent is not IDLE or the last request has not timed out.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Existing unit tests.

Closes #34711 from weixiuli/SPARK-37462.

Authored-by: weixiuli <weixiuli@jd.com>
Signed-off-by: Sean Owen <srowen@gmail.com>

Co-authored-by: ulysses-you <ulyssesyou18@gmail.com>
Co-authored-by: Cheng Su <chengsu@fb.com>
Co-authored-by: Josh Rosen <joshrosen@databricks.com>
Co-authored-by: Kent Yao <yao@apache.org>
Co-authored-by: Jiaan Geng <beliefer@163.com>
Co-authored-by: Shixiong Zhu <zsxwing@gmail.com>
Co-authored-by: Shixiong Zhu <shixiong@databricks.com>
Co-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Co-authored-by: yangjie01 <yangjie01@baidu.com>
Co-authored-by: weixiuli <weixiuli@jd.com>
udaynpusa pushed a commit to mapr/spark that referenced this pull request on Jan 30, 2024:
…pache#913)

* MapR [SPARK-953] Investigate and add all needed changes for Spark services (apache#905)
* [EZSPA-347] Find a way to pass sensitive configs in secure manner (apache#907)
* MapR [SPARK-961] Spark job can't be properly killed using yarn API or CLI (apache#908)
* MapR [SPARK-962] MSSQL can not handle SQL syntax which is used in Spark (apache#909)
* MapR [SPARK-963] select from hbase table which was created via hive fails (apache#910)

Co-authored-by: Dmitry Popkov <91957973+d-popkov@users.noreply.github.com>
Co-authored-by: Andrew Khalymon <andrew.khalymon@hpe.com>