[branch-0.9] Fix github links in docs #1456

Closed · wants to merge 1 commit
docs/bagel-programming-guide.md: 2 changes (1 addition & 1 deletion)

@@ -108,7 +108,7 @@ _Example_

## Operations

-Here are the actions and types in the Bagel API. See [Bagel.scala](https://github.com/apache/spark/blob/master/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala) for details.
+Here are the actions and types in the Bagel API. See [Bagel.scala](https://github.com/apache/spark/tree/branch-0.9/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala) for details.

### Actions

docs/index.md: 2 changes (1 addition & 1 deletion)

@@ -117,7 +117,7 @@ Note that on Windows, you need to set the environment variables on separate line
exercises about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
[slides](http://ampcamp.berkeley.edu/agenda-2012) and [exercises](http://ampcamp.berkeley.edu/exercises-2012) are
available online for free.
-* [Code Examples](http://spark.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/spark/tree/master/examples/src/main/scala/) of Spark
+* [Code Examples](http://spark.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/spark/tree/branch-0.9/examples/src/main/scala/) of Spark
* [Paper Describing Spark](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
* [Paper Describing Spark Streaming](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf)

docs/java-programming-guide.md: 2 changes (1 addition & 1 deletion)

@@ -189,7 +189,7 @@ We hope to generate documentation with Java-style syntax in the future.
# Where to Go from Here

Spark includes several sample programs using the Java API in
-[`examples/src/main/java`](https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples). You can run them by passing the class name to the
+[`examples/src/main/java`](https://github.com/apache/spark/tree/branch-0.9/examples/src/main/java/org/apache/spark/examples). You can run them by passing the class name to the
`bin/run-example` script included in Spark; for example:

./bin/run-example org.apache.spark.examples.JavaWordCount
docs/python-programming-guide.md: 2 changes (1 addition & 1 deletion)

@@ -157,7 +157,7 @@ some example applications.

# Where to Go from Here

-PySpark also includes several sample programs in the [`python/examples` folder](https://github.com/apache/spark/tree/master/python/examples).
+PySpark also includes several sample programs in the [`python/examples` folder](https://github.com/apache/spark/tree/branch-0.9/python/examples).
You can run them by passing the files to `pyspark`; e.g.:

./bin/pyspark python/examples/wordcount.py
docs/streaming-programming-guide.md: 14 changes (7 additions & 7 deletions)

@@ -125,7 +125,7 @@ ssc.awaitTermination() // Wait for the computation to terminate
{% endhighlight %}

The complete code can be found in the Spark Streaming example
-[NetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/streaming/examples/NetworkWordCount.scala).
+[NetworkWordCount]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/scala/org/apache/spark/streaming/examples/NetworkWordCount.scala).
<br>

</div>
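
For readers skimming the diff: the linked NetworkWordCount example is the canonical streaming word count. A minimal sketch of it, assuming the Spark 0.9 Scala streaming API used throughout this guide (the master URL and app name are illustrative):

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._ // implicits for pair DStream operations

// Count words arriving on a TCP socket, in one-second batches.
val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(1))
val lines = ssc.socketTextStream("localhost", 9999)
val wordCounts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
wordCounts.print()

ssc.start()
ssc.awaitTermination() // Wait for the computation to terminate
```
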
@@ -207,7 +207,7 @@ jssc.awaitTermination(); // Wait for the computation to terminate
{% endhighlight %}

The complete code can be found in the Spark Streaming example
-[JavaNetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/streaming/examples/JavaNetworkWordCount.java).
+[JavaNetworkWordCount]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/java/org/apache/spark/streaming/examples/JavaNetworkWordCount.java).
<br>

</div>
@@ -602,7 +602,7 @@ JavaPairDStream<String, Integer> runningCounts = pairs.updateStateByKey(updateFu
The update function will be called for each word, with `newValues` having a sequence of 1's (from
the `(word, 1)` pairs) and the `runningCount` having the previous count. For the complete
Scala code, take a look at the example
-[StatefulNetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/streaming/examples/StatefulNetworkWordCount.scala).
+[StatefulNetworkWordCount]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/scala/org/apache/spark/streaming/examples/StatefulNetworkWordCount.scala).
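
To make the described update function concrete, here is a minimal Scala sketch, under the assumption that `pairs` is a DStream of `(word, 1)` pairs (the `JavaPairDStream` snippet at the top of this hunk shows the equivalent Java call):

```scala
// newValues holds this batch's 1's for a word; runningCount is the previous state.
val updateFunction = (newValues: Seq[Int], runningCount: Option[Int]) => {
  Some(runningCount.getOrElse(0) + newValues.sum) // returning None would drop the key's state
}

val runningCounts = pairs.updateStateByKey[Int](updateFunction)
```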

<h4>Transform Operation</h4>

@@ -1075,7 +1075,7 @@ If the `checkpointDirectory` exists, then the context will be recreated from the
If the directory does not exist (i.e., running for the first time),
then the function `functionToCreateContext` will be called to create a new
context and set up the DStreams. See the Scala example
-[RecoverableNetworkWordCount]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/streaming/examples/RecoverableNetworkWordCount.scala).
+[RecoverableNetworkWordCount]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/scala/org/apache/spark/streaming/examples/RecoverableNetworkWordCount.scala).
This example appends the word counts of network data into a file.
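
A minimal sketch of this recovery pattern, assuming the `StreamingContext.getOrCreate` API that this section of the guide describes (the checkpoint path and DStream setup are placeholders):

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDirectory = "/tmp/checkpoint" // hypothetical path; use a fault-tolerant store in practice

def functionToCreateContext(): StreamingContext = {
  val ssc = new StreamingContext("local[2]", "RecoverableNetworkWordCount", Seconds(1))
  // ... create the DStreams here ...
  ssc.checkpoint(checkpointDirectory) // enable checkpointing
  ssc
}

// Recreated from checkpoint data if checkpointDirectory exists; otherwise created fresh.
val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)
context.start()
context.awaitTermination()
```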

You can also explicitly create a `StreamingContext` from the checkpoint data and start the
@@ -1114,7 +1114,7 @@ If the `checkpointDirectory` exists, then the context will be recreated from the
If the directory does not exist (i.e., running for the first time),
then the function `contextFactory` will be called to create a new
context and set up the DStreams. See the Scala example
-[JavaRecoverableWordCount]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/streaming/examples/JavaRecoverableWordCount.scala)
+[JavaRecoverableWordCount]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/scala/org/apache/spark/streaming/examples/JavaRecoverableWordCount.scala)
(note that this example is missing in the 0.9 release, so you can test it using the master branch).
This example appends the word counts of network data into a file.

@@ -1253,6 +1253,6 @@ and output 30 after recovery.
[ZeroMQ](api/external/zeromq/index.html#org.apache.spark.streaming.zeromq.ZeroMQUtils$), and
[MQTT](api/external/mqtt/index.html#org.apache.spark.streaming.mqtt.MQTTUtils$)

-* More examples in [Scala]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/streaming/examples)
-and [Java]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/streaming/examples)
+* More examples in [Scala]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/scala/org/apache/spark/streaming/examples)
+and [Java]({{site.SPARK_GITHUB_URL}}/tree/branch-0.9/examples/src/main/java/org/apache/spark/streaming/examples)
* [Paper](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf) describing Spark Streaming