Commit 08bbd5f

Removed reference to incubation in Spark user docs.
1 parent: c852201

9 files changed: +15 -25 lines

docs/README.md (+1 -1)

@@ -1,6 +1,6 @@
 Welcome to the Spark documentation!
 
-This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.incubator.apache.org/documentation.html.
+This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.
 
 Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that corresponds to whichever version of Spark you currently have checked out of revision control.
 

docs/_config.yml (+2 -2)

@@ -3,10 +3,10 @@ markdown: kramdown
 
 # These allow the documentation to be updated with nerw releases
 # of Spark, Scala, and Mesos.
-SPARK_VERSION: 1.0.0-incubating-SNAPSHOT
+SPARK_VERSION: 1.0.0-SNAPSHOT
 SPARK_VERSION_SHORT: 1.0.0
 SCALA_BINARY_VERSION: "2.10"
 SCALA_VERSION: "2.10.3"
 MESOS_VERSION: 0.13.0
 SPARK_ISSUE_TRACKER_URL: https://spark-project.atlassian.net
-SPARK_GITHUB_URL: https://github.com/apache/incubator-spark
+SPARK_GITHUB_URL: https://github.com/apache/spark
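For context: `_config.yml` defines Jekyll site variables, so the docs pages reference these values as Liquid placeholders (for example, `{{site.SPARK_VERSION}}` in the docs/index.md hunk below) and pick up the new `1.0.0-SNAPSHOT` value at build time without any further edits.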

docs/_layouts/global.html (-10)

@@ -159,16 +159,6 @@ <h2>Heading</h2>
 
 <hr>-->
 
-<footer>
-  <hr>
-  <p style="text-align: center; veritcal-align: middle; color: #999;">
-    Apache Spark is an effort undergoing incubation at the Apache Software Foundation.
-    <a href="http://incubator.apache.org">
-      <img style="margin-left: 20px;" src="img/incubator-logo.png" />
-    </a>
-  </p>
-</footer>
-
 </div> <!-- /container -->
 
 <script src="js/vendor/jquery-1.8.0.min.js"></script>

docs/bagel-programming-guide.md (+1 -1)

@@ -108,7 +108,7 @@ _Example_
 
 ## Operations
 
-Here are the actions and types in the Bagel API. See [Bagel.scala](https://github.com/apache/incubator-spark/blob/master/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala) for details.
+Here are the actions and types in the Bagel API. See [Bagel.scala](https://github.com/apache/spark/blob/master/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala) for details.
 
 ### Actions
 

docs/index.md (+6 -6)

@@ -9,7 +9,7 @@ It also supports a rich set of higher-level tools including [Shark](http://shark
 
 # Downloading
 
-Get Spark by visiting the [downloads page](http://spark.incubator.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
+Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
 
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` to installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
 
@@ -96,7 +96,7 @@ For this version of Spark (0.8.1) Hadoop 2.2.x (or newer) users will have to bui
 * [Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
 * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
 * [Mesos](running-on-mesos.html): deploy a private cluster using
-  [Apache Mesos](http://incubator.apache.org/mesos)
+  [Apache Mesos](http://mesos.apache.org)
 * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
 
 **Other documents:**
@@ -110,20 +110,20 @@ For this version of Spark (0.8.1) Hadoop 2.2.x (or newer) users will have to bui
 
 **External resources:**
 
-* [Spark Homepage](http://spark.incubator.apache.org)
+* [Spark Homepage](http://spark.apache.org)
 * [Shark](http://shark.cs.berkeley.edu): Apache Hive over Spark
-* [Mailing Lists](http://spark.incubator.apache.org/mailing-lists.html): ask questions about Spark here
+* [Mailing Lists](http://spark.apache.org/mailing-lists.html): ask questions about Spark here
 * [AMP Camps](http://ampcamp.berkeley.edu/): a series of training camps at UC Berkeley that featured talks and
   exercises about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
   [slides](http://ampcamp.berkeley.edu/agenda-2012) and [exercises](http://ampcamp.berkeley.edu/exercises-2012) are
   available online for free.
-* [Code Examples](http://spark.incubator.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/) of Spark
+* [Code Examples](http://spark.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/spark/tree/master/examples/src/main/scala/) of Spark
 * [Paper Describing Spark](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
 * [Paper Describing Spark Streaming](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf)
 
 # Community
 
-To get help using Spark or keep up with Spark development, sign up for the [user mailing list](http://spark.incubator.apache.org/mailing-lists.html).
+To get help using Spark or keep up with Spark development, sign up for the [user mailing list](http://spark.apache.org/mailing-lists.html).
 
 If you're in the San Francisco Bay Area, there's a regular [Spark meetup](http://www.meetup.com/spark-users/) every few weeks. Come by to meet the developers and other users.
 

docs/java-programming-guide.md (+1 -1)

@@ -189,7 +189,7 @@ We hope to generate documentation with Java-style syntax in the future.
 # Where to Go from Here
 
 Spark includes several sample programs using the Java API in
-[`examples/src/main/java`](https://github.com/apache/incubator-spark/tree/master/examples/src/main/java/org/apache/spark/examples). You can run them by passing the class name to the
+[`examples/src/main/java`](https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples). You can run them by passing the class name to the
 `bin/run-example` script included in Spark; for example:
 
     ./bin/run-example org.apache.spark.examples.JavaWordCount

docs/python-programming-guide.md (+1 -1)

@@ -157,7 +157,7 @@ some example applications.
 
 # Where to Go from Here
 
-PySpark also includes several sample programs in the [`python/examples` folder](https://github.com/apache/incubator-spark/tree/master/python/examples).
+PySpark also includes several sample programs in the [`python/examples` folder](https://github.com/apache/spark/tree/master/python/examples).
 You can run them by passing the files to `pyspark`; e.g.:
 
     ./bin/pyspark python/examples/wordcount.py

docs/scala-programming-guide.md (+1 -1)

@@ -365,7 +365,7 @@ res2: Int = 10
 
 # Where to Go from Here
 
-You can see some [example Spark programs](http://spark.incubator.apache.org/examples.html) on the Spark website.
+You can see some [example Spark programs](http://spark.apache.org/examples.html) on the Spark website.
 In addition, Spark includes several samples in `examples/src/main/scala`. Some of them have both Spark versions and local (non-parallel) versions, allowing you to see what had to be changed to make the program run on a cluster. You can run them using by passing the class name to the `bin/run-example` script included in Spark; for example:
 
     ./bin/run-example org.apache.spark.examples.SparkPi
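
Aside: the `SparkPi` class invoked above is one of the samples shipped under `examples/src/main/scala`. As a rough sketch of what such a sample looks like (illustrative only, assuming the 1.0-era `SparkConf`/`SparkContext` API; not the actual SparkPi source):

    // Illustrative sketch, not the bundled SparkPi source: estimate Pi by
    // sampling random points and counting how many land in the unit circle.
    package org.apache.spark.examples

    import org.apache.spark.{SparkConf, SparkContext}

    object SparkPiSketch {
      def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("SparkPiSketch")
        val sc = new SparkContext(conf)
        val n = 100000
        // Map each sample to 1 if it falls inside the unit circle, else 0,
        // then sum across the cluster.
        val count = sc.parallelize(1 to n).map { _ =>
          val x = math.random * 2 - 1
          val y = math.random * 2 - 1
          if (x * x + y * y < 1) 1 else 0
        }.reduce(_ + _)
        println("Pi is roughly " + 4.0 * count / n)
        sc.stop()
      }
    }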

docs/spark-debugger.md (+2 -2)

@@ -2,7 +2,7 @@
 layout: global
 title: The Spark Debugger
 ---
-**Summary:** The Spark debugger provides replay debugging for deterministic (logic) errors in Spark programs. It's currently in development, but you can try it out in the [arthur branch](https://github.com/apache/incubator-spark/tree/arthur).
+**Summary:** The Spark debugger provides replay debugging for deterministic (logic) errors in Spark programs. It's currently in development, but you can try it out in the [arthur branch](https://github.com/apache/spark/tree/arthur).
 
 ## Introduction
 
@@ -19,7 +19,7 @@ For deterministic errors, debugging a Spark program is now as easy as debugging
 
 ## Approach
 
-As your Spark program runs, the slaves report key events back to the master -- for example, RDD creations, RDD contents, and uncaught exceptions. (A full list of event types is in [EventLogging.scala](https://github.com/apache/incubator-spark/blob/arthur/core/src/main/scala/spark/EventLogging.scala).) The master logs those events, and you can load the event log into the debugger after your program is done running.
+As your Spark program runs, the slaves report key events back to the master -- for example, RDD creations, RDD contents, and uncaught exceptions. (A full list of event types is in [EventLogging.scala](https://github.com/apache/spark/blob/arthur/core/src/main/scala/spark/EventLogging.scala).) The master logs those events, and you can load the event log into the debugger after your program is done running.
 
 _A note on nondeterminism:_ For fault recovery, Spark requires RDD transformations (for example, the function passed to `RDD.map`) to be deterministic. The Spark debugger also relies on this property, and it can also warn you if your transformation is nondeterministic. This works by checksumming the contents of each RDD and comparing the checksums from the original execution to the checksums after recomputing the RDD in the debugger.
 
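
Aside: the checksum comparison in that last paragraph can be pictured with a short sketch. Everything below is illustrative, with made-up helper names and the mainline `org.apache.spark.rdd.RDD` type rather than arthur-branch code; only the technique itself (checksum each RDD's contents and compare the original run against the recomputation) comes from the paragraph above:

    // Illustrative sketch of the checksum idea; not code from the arthur
    // branch. Hash each partition's contents, then compare the per-partition
    // digests from the original run against those from the replayed run.
    import java.security.MessageDigest
    import org.apache.spark.rdd.RDD

    object ChecksumSketch {
      // One hex digest per partition, in partition order.
      def checksums[T](rdd: RDD[T]): Seq[String] =
        rdd.mapPartitions { iter =>
          val md = MessageDigest.getInstance("MD5")
          iter.foreach(x => md.update(x.toString.getBytes("UTF-8")))
          Iterator(md.digest().map("%02x".format(_)).mkString)
        }.collect().toSeq

      // The debugger would warn when a recomputed RDD no longer matches the
      // checksums logged during the original execution.
      def deterministic(original: Seq[String], replayed: Seq[String]): Boolean =
        original == replayed
    }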
