[SPARK-50137][HIVE] Avoid fallback to Hive-incompatible ways when table creation fails by thrift exception #48668
Conversation
@LuciferYang @wangyum @sunchao could you please review this PR?
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
@LuciferYang @sunchao Could you please review this PR?
Looks reasonable to me
Thanks, merged to master.
test("SPARK-50137: Avoid fallback to Hive-incompatible ways on thrift exception") { | ||
val hadoopConf = new Configuration() | ||
// Use an unavailable uri to mock client connection timeout. | ||
hadoopConf.set("hive.metastore.uris", "thrift://1.1.1.1:1111") |
1.1.1.1 is a valid DNS server run by Cloudflare, like Google's 8.8.8.8. Given that Apache Spark test cases run at massive scale (not only in the Apache Spark repo, but across all PRs and downstream forks), I'd recommend reverting this commit first if we don't have a quick follow-up immediately, @sunchao.
Thanks for checking this @dongjoon-hyun - yes, please revert for now.
Got it. I reverted here.
Thanks for pointing that out. Changed the real IP to the reserved address 192.0.2.1 in #50985.
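For reference, a minimal sketch of what the follow-up fix looks like (assuming the test keeps the same configuration key; the only change is the address, since 192.0.2.1 sits in the RFC 5737 TEST-NET-1 block, which is reserved for documentation and never routable):

```scala
// Point the metastore client at a reserved TEST-NET-1 address (RFC 5737)
// instead of Cloudflare's live 1.1.1.1 resolver; connections to this
// block can never reach a real host, so the timeout is guaranteed.
hadoopConf.set("hive.metastore.uris", "thrift://192.0.2.1:1111")
```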
Thank you for fixing it, @wecharyu.
What changes were proposed in this pull request?
Enhance data source table creation so that it does not fall back to the Hive-incompatible way when the failure was caused by a Thrift exception.
Why are the changes needed?
A Thrift exception is unrelated to the Hive compatibility of a data source table, so the fallback is unnecessary in this case.
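To make the idea concrete, here is a minimal sketch of the check this approach implies (the object and method names are illustrative assumptions, not Spark's actual source): before falling back to the Hive-incompatible format, walk the failure's cause chain, and if any cause is a Thrift TException (e.g. the metastore is unreachable), rethrow instead of falling back, since a retry in a different table format would hit the same transport error.

```scala
import org.apache.thrift.TException

// Illustrative helper (names are assumptions, not Spark's source):
// walk the exception's cause chain and report whether any link is a
// Thrift TException, i.e. a metastore transport/protocol failure
// rather than a Hive-compatibility problem with the table itself.
object ThriftErrorCheck {
  def isCausedByThrift(e: Throwable): Boolean =
    Iterator.iterate(e)(_.getCause)
      .takeWhile(_ != null)
      .exists(_.isInstanceOf[TException])
}
```

A table-creation path can then catch the failure, call `ThriftErrorCheck.isCausedByThrift`, and rethrow when it returns true instead of retrying in the Spark SQL specific format.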
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Added a unit test, which can be run with:
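```bash
mvn test -Dtest=none -Dsuites="org.apache.spark.sql.hive.HiveExternalCatalogSuite SPARK-50137: Avoid fallback to Hive-incompatible ways on thrift exception" -pl :spark-hive_2.13
```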
Was this patch authored or co-authored using generative AI tooling?
No.