1) On the quick start page, provide a direct link to the downloads page (suggested by @pbailis).
2) On the index page, don't suggest users always have to build Spark, since many won't.
Author: Patrick Wendell <pwendell@gmail.com>
Closes #662 from pwendell/quick-start and squashes the following commits:
0622f27 [Patrick Wendell] Fix two download suggestions in the docs:
docs/index.md (10 additions, 26 deletions)
```diff
@@ -9,17 +9,18 @@ It also supports a rich set of higher-level tools including [Shark](http://shark
 
 # Downloading
 
-Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
+Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page
+contains Spark packages for many popular HDFS versions. If you'd like to build Spark from
+scratch, visit the [building with Maven](building-with-maven.html) page.
 
-Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
+Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is
+to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable
+pointing to a Java installation.
 
-# Building
-
-Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with it. To compile the code, go into the top-level Spark directory and run
-
-    sbt/sbt assembly
-
-For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
+For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}.
+If you write applications in Scala, you will need to use a compatible Scala version
+(e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the
+right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
 
 # Running the Examples and Shell
 
```
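The rewrapped paragraph above keeps the Java prerequisite intact. For readers who want to verify it before running Spark, a shell check along these lines works (a minimal sketch assuming a POSIX shell; nothing in it is Spark-specific):

```bash
# Minimal sketch: confirm the Java prerequisite described in index.md,
# i.e. `java` on the PATH or JAVA_HOME pointing to a Java installation.
if command -v java >/dev/null 2>&1; then
  java -version                    # found on the PATH
elif [ -n "${JAVA_HOME:-}" ] && [ -x "$JAVA_HOME/bin/java" ]; then
  "$JAVA_HOME/bin/java" -version   # found via JAVA_HOME
else
  echo "Install Java on your PATH or set JAVA_HOME" >&2
  exit 1
fi
```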
```diff
@@ -50,23 +51,6 @@ options for deployment:
 * [Apache Mesos](running-on-mesos.html)
 * [Hadoop YARN](running-on-yarn.html)
 
-# A Note About Hadoop Versions
-
-Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
-storage systems. Because the HDFS protocol has changed in different versions of
-Hadoop, you must build Spark against the same version that your cluster uses.
-By default, Spark links to Hadoop 1.0.4. You can change this by setting the
-`SPARK_HADOOP_VERSION` variable when compiling:
-
-    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
-
-In addition, if you wish to run Spark on [YARN](running-on-yarn.html), set
```
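The section removed above is superseded by the pre-built packages on the downloads page. For anyone still building from source, the workflow it described amounts to something like this (a sketch assuming a source checkout; 2.2.0 is the example version from the removed text, and the path is illustrative):

```bash
# Sketch of the removed "A Note About Hadoop Versions" workflow:
# build Spark against the same Hadoop version your cluster runs.
cd /path/to/spark   # top-level Spark source directory (illustrative path)
SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
```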
docs/quick-start.md (3 additions, 5 deletions)
```diff
@@ -9,11 +9,9 @@ title: Quick Start
 This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's interactive Scala shell (don't worry if you don't know Scala -- you will not need much for this), then show how to write standalone applications in Scala, Java, and Python.
 See the [programming guide](scala-programming-guide.html) for a more complete reference.
 
-To follow along with this guide, you only need to have successfully built Spark on one machine. Simply go into your Spark directory and run:
-
-{% highlight bash %}
-$ sbt/sbt assembly
-{% endhighlight %}
+To follow along with this guide, first download a packaged release of Spark from the
+[Spark website](http://spark.apache.org/downloads.html). Since we won't be using HDFS,
+you can download a package for any version of Hadoop.
```
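To make the new quick-start instruction concrete, unpacking a pre-built release and starting the shell looks roughly like this (a sketch; the package name is a placeholder for whichever file you actually pick on the downloads page):

```bash
# Placeholder package name -- substitute the file actually downloaded
# from http://spark.apache.org/downloads.html.
PACKAGE=spark-x.y.z-bin-hadoopN
tar xzf "$PACKAGE.tgz"        # unpack the pre-built release
cd "$PACKAGE"
./bin/spark-shell             # no sbt/sbt assembly step needed
```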