
Commit cc70d3c

Fixed doc to remove createSparkJobNamespace

1 parent: 356388d

1 file changed (+5 −6)


docs/quick-start-guide.md (+5 −6)
@@ -48,13 +48,13 @@ To run the Spark Pi example, run the following command:
 $ kubectl apply -f examples/spark-pi.yaml
 ```
 
-Note that `spark-pi.yaml` configures the driver pod to use the `spark` service account to communicate with the Kubernetes API server. You might need to replace it with the appropriate service account before submitting the job. If you installed the operator using the Helm chart, the Spark job namespace (i.e. `default` by default) already has a service account you can use. Its name ends with `-spark` and starts with the Helm release name. The Helm chart has two configuration options, `createSparkJobNamespace` which defaults to `true` and `sparkJobNamespace` which defaults to `default`. For example, if you would like to run your Spark job in a new namespace called `test-ns`, install the chart with the command:
+Note that `spark-pi.yaml` configures the driver pod to use the `spark` service account to communicate with the Kubernetes API server. You might need to replace it with the appropriate service account before submitting the job. If you installed the operator using the Helm chart, the Spark job namespace (i.e. `default` by default) already has a service account you can use. Its name ends with `-spark` and starts with the Helm release name. The Helm chart has a configuration option called `sparkJobNamespace` which defaults to `default`. For example, if you would like to run your Spark job in another namespace called `test-ns`, first make sure it already exists, then install the chart with the command:
 
 ```bash
-$ helm install incubator/sparkoperator --namespace spark-operator --set createSparkJobNamespace=true --set sparkJobNamespace=test-ns
+$ helm install incubator/sparkoperator --namespace spark-operator --set sparkJobNamespace=test-ns
 ```
 
-Then the chart will create the namespace `test-ns` and set up a service account for your Spark jobs to use in that namespace.
+Then the chart will set up a service account for your Spark jobs to use in that namespace.
 
 Running the above command will create a `SparkApplication` object named `spark-pi`. Check the object by running the following command:

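The docs above state that the per-namespace service account's name starts with the Helm release name and ends with `-spark`. As a quick illustration (the release name `sparkoperator` here is an assumption, not taken from the diff):

```bash
# Hypothetical: with a Helm release named `sparkoperator`, the service account
# the chart creates in the job namespace would be named as follows.
release_name="sparkoperator"
service_account="${release_name}-spark"
echo "$service_account"   # sparkoperator-spark
```

This is the account name you would reference in `spark-pi.yaml` if you replace the default `spark` service account.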
@@ -200,8 +200,7 @@ and deleting the pods outside the operator might lead to incorrect metric values
 ## Driver UI Access and Ingress
 
 The operator, by default, makes the Spark UI accessible by creating a service of type `NodePort` which exposes the UI via the node running the driver.
-The operator also supports creating an Ingress for the UI. This can be turned on by setting the `ingress-url-format` command-line flag. The `ingress-url-format`
-should be a template like `{{$appName}}.ingress.cluster.com` and the operator will replace the `{{$appName}}` with the appropriate appName.
+The operator also supports creating an Ingress for the UI. This can be turned on by setting the `ingress-url-format` command-line flag. The `ingress-url-format` should be a template like `{{$appName}}.ingress.cluster.com` and the operator will replace the `{{$appName}}` with the appropriate appName.
 
 The operator also sets both `WebUIAddress`, which uses the Node's public IP, and `WebUIIngressAddress` as part of the `DriverInfo` field of the `SparkApplication`.
 
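The template substitution described above can be sketched in plain bash. This is illustrative only; the operator performs the substitution internally in Go, and the variable names below are assumptions:

```bash
# Hypothetical sketch of how an ingress-url-format template is resolved.
app_name="spark-pi"
url_format='{{$appName}}.ingress.cluster.com'
token='{{$appName}}'
# Replace the literal {{$appName}} token with the application name.
ingress_url="${url_format//$token/$app_name}"
echo "$ingress_url"   # spark-pi.ingress.cluster.com
```

With `ingress-url-format` set to `{{$appName}}.ingress.cluster.com`, a `SparkApplication` named `spark-pi` would thus be reachable at `spark-pi.ingress.cluster.com`.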
@@ -212,7 +211,7 @@ The Kubernetes Operator for Apache Spark comes with an optional mutating admiss
 The webhook requires an X509 certificate for TLS for pod admission requests and responses between the Kubernetes API server and the webhook server running inside the operator. For that, the certificate and key files must be accessible by the webhook server.
 The Kubernetes Operator for Spark ships with a tool at `hack/gencerts.sh` for generating the CA and server certificate and putting the certificate and key files into a secret named `spark-webhook-certs` in the namespace `spark-operator`. This secret will be mounted into the operator pod.
 
-Run the following command to create the secret with the certificate and key files using a Batch Job, and install the operator Deployment with the mutating admission webhook:
+Run the following command to create the secret with the certificate and key files using a batch Job, and install the operator Deployment with the mutating admission webhook:
 
 ```bash
 $ kubectl apply -f manifest/spark-operator-with-webhook.yaml
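For intuition, generating a CA and a CA-signed server certificate (roughly what `hack/gencerts.sh` automates) can be sketched with plain `openssl`. The subject names and file names below are illustrative assumptions, not the script's actual flags:

```bash
# Illustrative only: create a self-signed CA, then a server certificate
# signed by it, similar in spirit to what hack/gencerts.sh produces.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca-key.pem -out ca-cert.pem -subj "/CN=spark-webhook-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/CN=spark-webhook.spark-operator.svc"
openssl x509 -req -in server.csr -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -days 365
# The resulting key and certificates would then be stored in the
# spark-webhook-certs secret, e.g. via `kubectl create secret generic`.
openssl verify -CAfile ca-cert.pem server-cert.pem
```

In the shipped setup you do not run this by hand; the batch Job created by `spark-operator-with-webhook.yaml` handles it.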
