Commit c3bbad7 (1 parent: 0849e3c)

#13 put spark
fixes #13 fixes #3

File tree

2 files changed: +4 -8 lines changed

Dockerfile

3 additions, 0 deletions

@@ -20,6 +20,9 @@ RUN rm -rf /var/lib/apt/lists/* && \
     apt-get install -y sudo && \
     apt-get clean all
 
+RUN curl -s http://d3kbcqa49mib13.cloudfront.net/spark-1.6.2-bin-hadoop2.6.tgz | tar -xz -C /usr/local
+RUN mv /usr/local/spark* /usr/local/spark
+
 RUN rm -f conf/zeppelin-env.sh
 RUN rm -f conf/zeppelin-site.xml
 RUN rm -f conf/interpreter.json
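The two new RUN lines download the versioned Spark tarball and then normalize the extracted directory to a stable path. A minimal local sketch (an emulation in a temp directory, not the real download or Docker build) of why the `mv /usr/local/spark*` glob-rename works:

```shell
set -eu
tmp=$(mktemp -d)
# What `tar -xz -C /usr/local` would leave behind: a versioned directory.
mkdir "$tmp/spark-1.6.2-bin-hadoop2.6"
# The glob matches the single versioned directory, so mv renames it in place.
mv "$tmp"/spark* "$tmp/spark"
ls "$tmp"
```

Note the rename only behaves this way because exactly one `spark*` entry exists at that point; if `/usr/local/spark` already existed, `mv` would move the versioned directory inside it instead.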

zeppelin-env.sh.template

1 addition, 8 deletions

@@ -4,16 +4,9 @@ export ZEPPELIN_NOTEBOOK_DIR="notebooks" # Where notebook at?
 
 #### Spark interpreter configuration ####
 
-## Use provided spark installation ##
-## defining SPARK_HOME makes Zeppelin run spark interpreter process using spark-submit
-##
-# export SPARK_HOME # (required) When it is defined, load it instead of Zeppelin embedded Spark libraries
+export SPARK_HOME=/usr/local/spark
 # export SPARK_SUBMIT_OPTIONS="--master spark://master:7077" # (optional) extra options to pass to spark submit. eg) "--driver-memory 512M --executor-memory 1G".
 
-## Use embedded spark binaries ##
-## without SPARK_HOME defined, Zeppelin still able to run spark interpreter process using embedded spark binaries.
-## however, it is not encouraged when you can define SPARK_HOME
-##
 # Options read in YARN client mode
 # export HADOOP_CONF_DIR # yarn-site.xml is located in configuration directory in HADOOP_CONF_DIR.
 # Pyspark (supported with Spark 1.2.1 and above)
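Per the template's own comments, defining SPARK_HOME makes Zeppelin launch the Spark interpreter via spark-submit from that installation instead of its embedded Spark libraries. A small sketch of how the path resolves with the value this commit sets (the echoed path is illustrative; it assumes the Dockerfile above has put Spark at /usr/local/spark):

```shell
# Export the value the template now hard-codes.
export SPARK_HOME=/usr/local/spark
# Zeppelin would invoke spark-submit from under this directory.
echo "${SPARK_HOME}/bin/spark-submit"
```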
