Support Spark 1.3 #384
Conversation
You're moving so fast!
Ready to be merged! Note that the implicit conversion from RDD -> DataFrame is not working:

case class Person(name: String)
val person = sc.parallelize(List(Person("hello"), Person("world")))
person.registerTempTable("person") // fails

The same problem exists in spark-shell, too.
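For reference, a minimal sketch of the Spark 1.3 way around this, assuming `sc` and `sqlContext` are already in scope as they are in the Zeppelin spark interpreter and in spark-shell: import the context's implicits and convert the RDD with `toDF()` before registering.

```scala
// Sketch only: `sc` and `sqlContext` are assumed to be provided,
// as they are in the Zeppelin spark interpreter and in spark-shell.
case class Person(name: String)

// Bring the RDD -> DataFrame implicits of this SQLContext into scope.
import sqlContext.implicits._

val person = sc.parallelize(List(Person("hello"), Person("world")))

// In Spark 1.3, registerTempTable is a DataFrame method, so convert explicitly.
val personDF = person.toDF()
personDF.registerTempTable("person")
```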
Great demo! +1 for merge.
+1, will be test driving 1.3 this week.
According to https://spark.apache.org/docs/latest/sql-programming-guide.html#starting-point-sqlcontext, HiveContext is preferable to the plain SQLContext. I pushed one more change: if HiveContext is available (i.e. the Hive-related dependency is loaded), it is used instead of SQLContext.
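A minimal sketch of that fallback, assuming the choice is made by probing the classpath; the exact wiring in this PR may differ.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Sketch only: prefer HiveContext when the Hive classes are on the classpath,
// otherwise fall back to a plain SQLContext. HiveContext extends SQLContext,
// so callers can keep using the SQLContext type.
def createSQLContext(sc: SparkContext): SQLContext =
  try {
    Class.forName("org.apache.spark.sql.hive.HiveContext")
      .getConstructor(classOf[SparkContext])
      .newInstance(sc)
      .asInstanceOf[SQLContext]
  } catch {
    case _: Exception => new SQLContext(sc)
  }
```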
@Leemoonsoo have you tried:
@felixcheung I am with you; it would be better to leave it up to the user to make the choice. I personally use the CassandraSQLContext.
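For context, a hedged sketch of how a CassandraSQLContext from the spark-cassandra-connector is typically used in place of the default context; the package name and methods reflect the connector of that era, and the keyspace and table names are hypothetical.

```scala
// Sketch only: requires the spark-cassandra-connector on the classpath;
// keyspace and table names below are hypothetical.
import org.apache.spark.sql.cassandra.CassandraSQLContext

val csc = new CassandraSQLContext(sc)
csc.setKeyspace("my_keyspace")
val df = csc.sql("SELECT * FROM my_table")
df.registerTempTable("my_table")
```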
@syepes Thanks for letting me know a way to registerTempTable: https://github.com/apache/spark/blob/v1.3.0/repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkILoop.scala#L1022 Then, @felixcheung, @syepes, how about bringing the zeppelin.spark.useHiveContext property back with a default value of 'true'? It was just removed in this pull request; previously, the default value was 'false'.
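A minimal sketch of what bringing the property back might look like; the property name comes from this discussion, but how the interpreter actually reads its configuration is an assumption here.

```scala
// Sketch only: the property name comes from the discussion above; reading it
// from system properties is illustrative, not the interpreter's real config path.
val useHiveContext =
  sys.props.getOrElse("zeppelin.spark.useHiveContext", "true").toBoolean

val sqlc: org.apache.spark.sql.SQLContext =
  if (useHiveContext) new org.apache.spark.sql.hive.HiveContext(sc)
  else new org.apache.spark.sql.SQLContext(sc)
```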
@Leemoonsoo No problem, using useHiveContext is a good alternative. Thanks for the work on 1.3.
@Leemoonsoo sounds good to me too.
@Leemoonsoo I've tried to execute your exact same example; however, it appears the df val is not passed along to the sql editor and I get an error message. I'm running Spark standalone with 2 workers, Spark version 1.3.0, on Ubuntu 14.04 LTS. Any idea what causes the problem?
@geekflyer
+1 for merge
@Leemoonsoo Thanks for your help. Now it works completely fine :-)
Linked **[JIRA]**

[JIRA]: https://issues.apache.org/jira/browse/ZEPPELIN-382?jql=project%20%3D%20ZEPPELIN

Author: DuyHai DOAN <doanduyhai@gmail.com>

Closes ZEPL#384 from doanduyhai/CassandraInterpreterDocumentation and squashes the following commits:

b0bf36a [DuyHai DOAN] [ZEPPELIN-382] Add Documentation for Cassandra interpreter in the doc pages
Spark 1.3 is released.
This PR makes Zeppelin work with Spark 1.3.
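As an illustration of the kind of version guard a change like this typically involves; this is a sketch under assumptions, not the actual diff of this pull request.

```scala
// Sketch only: branch on the running Spark version so both the 1.1/1.2 SchemaRDD
// API and the 1.3 DataFrame API can be handled from the same interpreter code.
val Array(major, minor) = sc.version.split("\\.").take(2).map(_.toInt)
val isSpark13OrLater = major > 1 || (major == 1 && minor >= 3)

if (isSpark13OrLater) {
  // sqlContext.sql(...) returns a DataFrame from 1.3 on
} else {
  // sqlContext.sql(...) returns a SchemaRDD on 1.1/1.2
}
```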