Commit 0622f27

Fix two download suggestions in the docs:

1) On the quick start page, provide a direct link to the downloads.
2) On the index page, don't suggest users always have to build Spark (many won't).

1 parent a2262cd

2 files changed: 13 additions & 31 deletions

docs/index.md

Lines changed: 10 additions & 26 deletions
@@ -9,17 +9,18 @@ It also supports a rich set of higher-level tools including [Shark](http://shark
 
 # Downloading
 
-Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
+Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page
+contains Spark packages for many popular HDFS versions. If you'd like to build Spark from
+scratch, visit the [building with Maven](building-with-maven.html) page.
 
-Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
+Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is
+to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable
+pointing to a Java installation.
 
-# Building
-
-Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with it. To compile the code, go into the top-level Spark directory and run
-
-    sbt/sbt assembly
-
-For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
+For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}.
+If you write applications in Scala, you will need to use a compatible Scala version
+(e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the
+right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
 
 # Running the Examples and Shell
 
@@ -50,23 +51,6 @@ options for deployment:
 * [Apache Mesos](running-on-mesos.html)
 * [Hadoop YARN](running-on-yarn.html)
 
-# A Note About Hadoop Versions
-
-Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
-storage systems. Because the HDFS protocol has changed in different versions of
-Hadoop, you must build Spark against the same version that your cluster uses.
-By default, Spark links to Hadoop 1.0.4. You can change this by setting the
-`SPARK_HADOOP_VERSION` variable when compiling:
-
-    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
-
-In addition, if you wish to run Spark on [YARN](running-on-yarn.html), set
-`SPARK_YARN` to `true`:
-
-    SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
-
-Note that on Windows, you need to set the environment variables on separate lines, e.g., `set SPARK_HADOOP_VERSION=1.2.1`.
-
 # Where to Go from Here
 
 **Programming guides:**
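
The revised text makes a working `java` the only prerequisite for running a prebuilt Spark. A minimal shell sketch of that check (the lookup order and messages here are assumptions, not part of the docs):

    # Check the prerequisite described above: `java` on the system PATH,
    # or JAVA_HOME pointing to a Java installation.
    if [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ]; then
      "$JAVA_HOME/bin/java" -version    # prefer the explicit JAVA_HOME install
    elif command -v java >/dev/null 2>&1; then
      java -version                     # fall back to `java` found on the PATH
    else
      echo "No Java found: install a JDK or set JAVA_HOME" >&2
      exit 1
    fi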

docs/quick-start.md

Lines changed: 3 additions & 5 deletions
@@ -9,11 +9,9 @@ title: Quick Start
 This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's interactive Scala shell (don't worry if you don't know Scala -- you will not need much for this), then show how to write standalone applications in Scala, Java, and Python.
 See the [programming guide](scala-programming-guide.html) for a more complete reference.
 
-To follow along with this guide, you only need to have successfully built Spark on one machine. Simply go into your Spark directory and run:
-
-{% highlight bash %}
-$ sbt/sbt assembly
-{% endhighlight %}
+To follow along with this guide, first download a packaged release of Spark from the
+[Spark website](http://spark.apache.org/downloads.html). Since we won't be using HDFS,
+you can download a package for any version of Hadoop.
 
 # Interactive Analysis with the Spark Shell
 
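For completeness, a sketch of the download-and-unpack flow the new quick-start text describes. The package name and URL below are placeholders; take the real link for your chosen Hadoop build from the downloads page:

{% highlight bash %}
# Hypothetical names only: substitute the actual link from
# http://spark.apache.org/downloads.html for the Hadoop build you want.
SPARK_PACKAGE=spark-x.y.z-bin-hadoopN               # placeholder package name
wget "http://www.example.org/${SPARK_PACKAGE}.tgz"  # placeholder mirror URL
tar -xzf "${SPARK_PACKAGE}.tgz"                     # unpack the release
cd "${SPARK_PACKAGE}"                               # no build step needed
{% endhighlight %}

From here, the shell steps in the quick start run directly against the packaged release.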