Commit 4587bd6

Documentation update corresponding to Helm chart v0.1.3
1 parent 2ce8ae0 commit 4587bd6

File tree

2 files changed: +13 −11 lines changed


README.md

+2 −2

@@ -38,8 +38,8 @@ The easiest way to install the Kubernetes Operator for Apache Spark is to use th
 
 ```bash
 $ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
-$ helm install incubator/sparkoperator
+$ helm install incubator/sparkoperator --namespace spark-operator
 ```
 
 ## Get Started
4545

docs/quick-start-guide.md

+11 −9

@@ -17,13 +17,13 @@ To install the operator, use the Helm [chart](https://github.com/helm/charts/tre
 
 ```bash
 $ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
-$ helm install incubator/sparkoperator
+$ helm install incubator/sparkoperator --namespace spark-operator
 ```
 
-Installing the chart will create a namespace `spark-operator`, set up RBAC for the operator to run in the namespace. It will also set up RBAC for driver pods of your Spark applications to be able to manipulate executor pods. In addition, the chart will create a Deployment named `sparkoperator` in namespace `spark-operator`. The chart by default enables a [Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) for Spark pod customization. A webhook service called `spark-webhook` and a secret storing the x509 certificate called `spark-webhook-certs` are created for that purpose. To install the operator **without** the mutating admission webhook on a Kubernetes cluster, install the chart with the flag `enableWebhook=false`:
+Installing the chart will create the namespace `spark-operator` if it doesn't exist and set up RBAC for the operator to run in it. It will also set up RBAC so that the driver pods of your Spark applications can manipulate executor pods. In addition, the chart will create a Deployment in the namespace `spark-operator`. By default, the chart enables a [Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) for Spark pod customization. A webhook service and a secret named `spark-webhook-certs` storing the x509 certificate are created for that purpose. To install the operator **without** the mutating admission webhook on a Kubernetes cluster, install the chart with the flag `enableWebhook=false`:
 
 ```bash
-$ helm install incubator/sparkoperator --set enableWebhook=false
+$ helm install incubator/sparkoperator --namespace spark-operator --set enableWebhook=false
 ```
 
 Due to a [known issue](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#defining_permissions_in_a_role) in GKE, you will need to first grant yourself cluster-admin privileges before you can create custom roles and role bindings on a GKE cluster versioned 1.6 and up. Run the following command before installing the chart on GKE:
@@ -32,18 +32,18 @@ Due to a [known issue](https://cloud.google.com/kubernetes-engine/docs/how-to/ro
 
 $ kubectl create clusterrolebinding <user>-cluster-admin-binding --clusterrole=cluster-admin --user=<user>@<domain>
 ```
 
-Now you should see the operator running in the cluster by checking the status of the Deployment.
+The operator should now be running in the cluster; verify this by checking the status of the Helm release:
 
 ```bash
-$ kubectl describe deployment sparkoperator -n spark-operator
+$ helm status <spark-operator-release-name>
 ```
 
 ### Metrics
 
 The operator exposes a set of metrics via the metric endpoint to be scraped by `Prometheus`. The Helm chart by default installs the operator with the additional flag to enable metrics (`-enable-metrics=true`) as well as other annotations used by Prometheus to scrape the metric endpoint. To install the operator **without** metrics enabled, pass the appropriate flag during `helm install`:
 
 ```bash
-$ helm install incubator/sparkoperator --set enableMetrics=false
+$ helm install incubator/sparkoperator --namespace spark-operator --set enableMetrics=false
 ```
 
 If enabled, the operator generates the following metrics:
@@ -110,7 +110,9 @@ To run the Spark Pi example, run the following command:
 
 $ kubectl apply -f examples/spark-pi.yaml
 ```
 
-This will create a `SparkApplication` object named `spark-pi`. Check the object by running the following command:
+Note that `spark-pi.yaml` configures the driver pod to use the `spark` service account to communicate with the Kubernetes API server. You might need to replace it with the appropriate service account before submitting the job. If you installed the operator using the Helm chart, the Spark job namespace (i.e. `default` by default) already has a service account you can use. Its name starts with the Helm release name and ends with `-spark`.
+
+Running the above command will create a `SparkApplication` object named `spark-pi`. Check the object by running the following command:
 
 ```bash
 $ kubectl get sparkapplications spark-pi -o=yaml
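The service-account naming convention described in the added paragraph can be sketched as follows; the release name below is purely illustrative, not taken from the diff:

```shell
# The Helm chart creates a service account named "<release>-spark" in the
# Spark job namespace; derive its name from a hypothetical release name.
release="sparkoperator"                # example release name, an assumption
service_account="${release}-spark"
echo "${service_account}"              # prints: sparkoperator-spark
```

On a real cluster you would then set that name as the driver's service account in `spark-pi.yaml` before submitting the job.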
@@ -182,14 +184,14 @@ The operator submits the Spark Pi example to run once it receives an event indic
 
 The Kubernetes Operator for Apache Spark comes with an optional mutating admission webhook for customizing Spark driver and executor pods based on the specification in `SparkApplication` objects, e.g., mounting user-specified ConfigMaps and volumes, setting pod affinity/anti-affinity, and adding tolerations.
 
 The webhook requires an X509 certificate for TLS for pod admission requests and responses between the Kubernetes API server and the webhook server running inside the operator. For that, the certificate and key files must be accessible by the webhook server.
-The Spark Operator ships with a tool at `hack/gencerts.sh` for generating the CA and server certificate and putting the certificate and key files into a secret named `spark-webhook-certs` in namespace `sparkoperator`. This secret will be mounted into the Spark Operator pod.
+The Spark Operator ships with a tool at `hack/gencerts.sh` for generating the CA and server certificate and putting the certificate and key files into a secret named `spark-webhook-certs` in the namespace `spark-operator`. This secret will be mounted into the Spark Operator pod.
 
 Run the following command to create the secret with the certificate and key files using a batch Job, and install the Spark Operator Deployment with the mutating admission webhook:
 
 ```bash
 $ kubectl apply -f manifest/spark-operator-with-webhook.yaml
 ```
 
-This will create a Deployment named `sparkoperator` and a Service named `spark-webhook` for the webhook in namespace `sparkoperator`.
+This will create a Deployment named `sparkoperator` and a Service named `spark-webhook` for the webhook in the namespace `spark-operator`.
 
 If the operator is installed via the Helm chart using the default settings (i.e. with webhook enabled), the above steps are all automated for you.
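As a verification sketch for the webhook resources named above: the kubectl queries need a live cluster, so they are shown as comments, and the block itself only lists the expected resource names from the text:

```shell
# On a real cluster, check the resources the webhook manifest creates:
#   kubectl get deployment sparkoperator -n spark-operator
#   kubectl get service spark-webhook -n spark-operator
#   kubectl get secret spark-webhook-certs -n spark-operator
# Locally, just enumerate the expected names, one per line:
printf '%s\n' sparkoperator spark-webhook spark-webhook-certs
```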
