README.md — 2 additions, 111 deletions

@@ -8,7 +8,7 @@ This is a collaboratively maintained project working on [SPARK-18278](https://is
 
 ## Getting Started
 
--[Usage guide](docs/running-on-kubernetes.md) shows how to run the code
+-[Usage guide](https://apache-spark-on-k8s.github.io/userdocs/) shows how to run the code
 -[Development docs](resource-managers/kubernetes/README.md) shows how to get set up for development
 - Code is primarily located in the [resource-managers/kubernetes](resource-managers/kubernetes) folder
@@ -30,113 +30,4 @@ This is a collaborative effort by several folks from different companies who are
 - Intel
 - Palantir
 - Pepperdata
-- Red Hat
-
---------------------
-
-(original README below)
-
-# Apache Spark
-
-Spark is a fast and general cluster computing system for Big Data. It provides
-high-level APIs in Scala, Java, Python, and R, and an optimized engine that
-supports general computation graphs for data analysis. It also supports a
-rich set of higher-level tools including Spark SQL for SQL and DataFrames,
-MLlib for machine learning, GraphX for graph processing,
-and Spark Streaming for stream processing.
-
-<http://spark.apache.org/>
-
-
-## Online Documentation
-
-You can find the latest Spark documentation, including a programming
-guide, on the [project web page](http://spark.apache.org/documentation.html).
-This README file only contains basic setup instructions.
-
-## Building Spark
-
-Spark is built using [Apache Maven](http://maven.apache.org/).
-To build Spark and its example programs, run:
-
-    build/mvn -DskipTests clean package
-
-(You do not need to do this if you downloaded a pre-built package.)
-
-You can build Spark using more than one thread by using the -T option with Maven, see ["Parallel builds in Maven 3"](https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3).
-More detailed documentation is available from the project site, at
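The deleted README text above describes the standard Maven build and the `-T` parallel-build option. As a sketch of those instructions (the `-T 4` thread count is an illustrative choice, not a value the README specifies):

```shell
# Build Spark and its example programs, skipping tests.
# Not needed if you downloaded a pre-built package.
build/mvn -DskipTests clean package

# Maven 3 can build with multiple threads via -T;
# "4" here is an illustrative thread count.
build/mvn -T 4 -DskipTests clean package
```

Both commands must be run from the repository root, since `build/mvn` is the wrapper script shipped with the Spark source tree.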
docs/running-on-kubernetes.md — 0 additions, 2 deletions

@@ -149,8 +149,6 @@ environment variable in your Dockerfiles.
 
 ### Accessing Kubernetes Clusters
 
-For details about running on public cloud environments, such as Google Container Engine (GKE), refer to [running Spark in the cloud with Kubernetes](running-on-kubernetes-cloud.md).
-
 Spark-submit also supports submission through the
 [local kubectl proxy](https://kubernetes.io/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy). One can use the
 authenticating proxy to communicate with the api server directly without passing credentials to spark-submit.
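The proxy-based submission kept by this hunk can be sketched as follows. This is an assumption-laden illustration, not part of the diff: the proxy port is kubectl's default, and the class name and jar path are hypothetical placeholders.

```shell
# Start a local authenticating proxy to the Kubernetes API server.
# kubectl proxy listens on 127.0.0.1:8001 by default.
kubectl proxy &

# Point spark-submit at the proxy so credentials never appear on
# the command line. The example class and jar are placeholders.
bin/spark-submit \
  --master k8s://http://127.0.0.1:8001 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar
```

The design point of the deleted-and-retained text is that the proxy, not spark-submit, holds the cluster credentials; spark-submit only ever talks to localhost.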