@@ -26,38 +26,40 @@ The main useful options are outlined below.

## Using a different backend

- The integration test backend i.e. the K8S cluster used for testing is controlled by the `--deploy-mode` option. By default this
- is set to `minikube`, the available backends are their perquisites are as follows.
+ The integration test backend, i.e. the K8S cluster used for testing, is controlled by the `--deploy-mode` option. By
+ default this is set to `minikube`; the available backends and their prerequisites are as follows.

### `minikube`

- Uses the local `minikube` cluster, this requires that `minikube` 0.23.0 or greater be installed and that it be allocated at least
- 4 CPUs and 6GB memory (some users have reported success with as few as 3 CPUs and 4GB memory). The tests will check if `minikube`
- is started and abort early if it isn't currently running.
+ Uses the local `minikube` cluster; this requires that `minikube` 0.23.0 or greater be installed and that it be allocated
+ at least 4 CPUs and 6GB memory (some users have reported success with as few as 3 CPUs and 4GB memory). The tests will
+ check if `minikube` is started and abort early if it isn't currently running.
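+ 
+ For example, a minimal invocation against this backend might look like the following (a sketch, assuming `minikube` is
+ already running):
+ 
+     # Run the integration tests against the local minikube cluster (the default backend)
+     dev/dev-run-integration-tests.sh --deploy-mode minikube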

### `docker-for-desktop`

Since July 2018 Docker for Desktop provides an optional Kubernetes cluster that can be enabled as described in this
- [blog post](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/). Assuming this is enabled
- using this backend will auto-configure itself from the `docker-for-desktop` context that Docker creates in your `~/.kube/config` file.
- If your config file is in a different location you should set the `KUBECONFIG` environment variable appropriately.
+ [blog post](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/). Assuming
+ this is enabled, this backend will auto-configure itself from the `docker-for-desktop` context that Docker creates
+ in your `~/.kube/config` file. If your config file is in a different location you should set the `KUBECONFIG`
+ environment variable appropriately.
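+ 
+ As a sketch, with the cluster enabled the tests can be pointed at this backend with:
+ 
+     # Use the Kubernetes cluster provided by Docker for Desktop
+     dev/dev-run-integration-tests.sh --deploy-mode docker-for-desktop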

- ### `cloud` and `cloud-url`
+ ### `cloud`

- These closely related backends configure the tests to use an arbitrary Kubernetes cluster running in the cloud or otherwise.
+ The `cloud` backend configures the tests to use an arbitrary Kubernetes cluster running in the cloud or otherwise.

- The `cloud` backend auto-configures the cluster to use from your K8S config file, this is assumed to be `~/.kube/config` unless the
- `KUBECONFIG` environment variable is set to override this location. By default this will use whatever your current context is in the
- config file, to use an alternative context from your config file you can specify the `--context <context>` flag with the desired context.
+ The `cloud` backend auto-configures the cluster to use from your K8S config file; this is assumed to be `~/.kube/config`
+ unless the `KUBECONFIG` environment variable is set to override this location. By default this will use whatever your
+ current context in the config file is; to use an alternative context from your config file, specify the
+ `--context <context>` flag with the desired context.

- The `cloud-url` backend configures the cluster to simply use a K8S master URL, this should be supplied via the
- `--spark-master <master-url>` flag.
+ You can optionally use a different K8S master URL than the one specified in your K8S config file; this should be supplied
+ via the `--spark-master <master-url>` flag.
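+ 
+ For example, a sketch combining these options (the context name and master URL are placeholders to fill in):
+ 
+     # Test against an arbitrary cluster, using a non-default context and an explicit master URL
+     dev/dev-run-integration-tests.sh --deploy-mode cloud --context <context> --spark-master <master-url>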

## Re-using Docker Images

By default, the test framework will build new Docker images on every test execution. A unique image tag is generated,
- and it is written to file at `target/imageTag.txt`. To reuse the images built in a previous run, or to use a Docker image tag
- that you have built by other means already, pass the tag to the test script:
+ and it is written to a file at `target/imageTag.txt`. To reuse the images built in a previous run, or to use a Docker
+ image tag that you have built by other means already, pass the tag to the test script:

dev/dev-run-integration-tests.sh --image-tag <tag>
@@ -67,34 +69,40 @@ where if you still want to use images that were built before by the test framewo

## Spark Distribution Under Test

- The Spark code to test is handed to the integration test system via a tarball. Here is the option that is used to specify the tarball:
+ The Spark code to test is handed to the integration test system via a tarball. Here is the option that is used to
+ specify the tarball:

* `--spark-tgz <path-to-tgz>` - set `<path-to-tgz>` to point to a tarball containing the Spark distribution to test.

- This Tarball should be created by first running `dev/make-distribution.sh` passing the `--tgz` flag and `-Pkubernetes` as one of the
- options to ensure that Kubernetes support is included in the distribution. For more details on building a runnable distribution please
- see the [Building Spark](https://spark.apache.org/docs/latest/building-spark.html#building-a-runnable-distribution) documentation.
+ This tarball should be created by first running `dev/make-distribution.sh`, passing the `--tgz` flag and `-Pkubernetes`
+ as one of the options to ensure that Kubernetes support is included in the distribution. For more details on building a
+ runnable distribution please see the
+ [Building Spark](https://spark.apache.org/docs/latest/building-spark.html#building-a-runnable-distribution)
+ documentation.
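+ 
+ As a sketch, building the distribution and handing it to the tests might look like the following (the exact tarball name
+ depends on your build, so the path below is a placeholder):
+ 
+     # Build a runnable distribution with Kubernetes support and package it as a tarball
+     dev/make-distribution.sh --tgz -Pkubernetes
+     # Hand the resulting tarball to the integration tests
+     dev/dev-run-integration-tests.sh --spark-tgz <path-to-tgz>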

- **TODO:** Don't require the packaging of the built Spark artifacts into this tarball, just read them out of the current tree.
+ **TODO:** Don't require the packaging of the built Spark artifacts into this tarball, just read them out of the current
+ tree.

## Customizing the Namespace and Service Account

- If no namespace is specified then a temporary namespace will be created and deleted during the test run. Similarly if no service
- account is specified then the `default` service account for the namespace will be used.
+ If no namespace is specified then a temporary namespace will be created and deleted during the test run. Similarly, if
+ no service account is specified then the `default` service account for the namespace will be used.

- Using the `--namespace <namespace>` flag sets `<namespace>` to the namespace in which the tests should be run. If this is supplied
- then the tests assume this namespace exists in the K8S cluster and will not attempt to create it. Additionally this namespace must
- have an appropriately authorized service account which can be customised via the `--service-account` flag.
+ Using the `--namespace <namespace>` flag sets `<namespace>` to the namespace in which the tests should be run. If this
+ is supplied then the tests assume this namespace exists in the K8S cluster and will not attempt to create it.
+ Additionally, this namespace must have an appropriately authorized service account, which can be customised via the
+ `--service-account` flag.

- The `--service-account <service account name>` flag sets `<service account name>` to the name of the Kubernetes service account to
- use in the namespace specified by the `--namespace` flag. The service account is expected to have permissions to get, list, watch,
- and create pods. For clusters with RBAC turned on, it's important that the right permissions are granted to the service account
- in the namespace through an appropriate role and role binding. A reference RBAC configuration is provided in `dev/spark-rbac.yaml`.
+ The `--service-account <service account name>` flag sets `<service account name>` to the name of the Kubernetes service
+ account to use in the namespace specified by the `--namespace` flag. The service account is expected to have permissions
+ to get, list, watch, and create pods. For clusters with RBAC turned on, it's important that the right permissions are
+ granted to the service account in the namespace through an appropriate role and role binding. A reference RBAC
+ configuration is provided in `dev/spark-rbac.yaml`.
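+ 
+ Putting these together, a sketch of a run against an existing namespace might be (the namespace and service account
+ names are placeholders; apply the reference RBAC configuration only if it suits your cluster):
+ 
+     # Optionally apply the reference RBAC configuration to the cluster
+     kubectl apply -f dev/spark-rbac.yaml
+     # Run the tests in an existing namespace with a pre-authorized service account
+     dev/dev-run-integration-tests.sh --namespace <namespace> --service-account <service account name>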

# Running the Test Directly

- If you prefer to run just the integration tests directly then you can customise the behaviour via properties passed to Maven using the
- `-Dproperty=value` option e.g.
+ If you prefer to run just the integration tests directly, then you can customise the behaviour by passing system
+ properties to Maven. For example:

mvn integration-test -am -pl :spark-kubernetes-integration-tests_2.11 \
-Pkubernetes -Phadoop-2.7 -Dhadoop.version=2.7.3 \
@@ -108,8 +116,8 @@ If you prefer to run just the integration tests directly then you can customise

## Available Maven Properties

- The following are the available Maven properties that can be passed. For the most part these correspond to flags passed to the
- wrapper scripts and using the wrapper scripts will simply set these appropriately behind the scenes.
+ The following are the available Maven properties that can be passed. For the most part these correspond to flags passed
+ to the wrapper scripts; using the wrapper scripts will simply set these appropriately behind the scenes.
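+ 
+ For instance, a sketch of setting a couple of the properties from the table below directly on the Maven command line
+ (the values are placeholders):
+ 
+     mvn integration-test -am -pl :spark-kubernetes-integration-tests_2.11 -Pkubernetes \
+       -Dspark.kubernetes.test.deployMode=cloud \
+       -Dspark.kubernetes.test.namespace=<namespace>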

<table>
<tr>
@@ -134,63 +142,65 @@ wrapper scripts and using the wrapper scripts will simply set these appropriatel
<tr>
<td><code>spark.kubernetes.test.deployMode</code></td>
<td>
- The integration test backend to use. Acceptable values are <code>minikube</code>, <code>docker-for-desktop</code>,
- <code>cloud</code> and <code>cloud-url</code>.
+ The integration test backend to use. Acceptable values are <code>minikube</code>,
+ <code>docker-for-desktop</code> and <code>cloud</code>.
+ </td>
<td><code>minikube</code></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.kubeConfigContext</code></td>
<td>
- When using the <code>cloud</code> backend specifies the context from the users K8S config file that should be used as the
- target cluster for integration testing. If not set and using the <code>cloud</code> backend then your current context
- will be used.
+ When using the <code>cloud</code> backend, specifies the context from the user's K8S config file that should be used
+ as the target cluster for integration testing. If not set and using the <code>cloud</code> backend then your
+ current context will be used.
</td>
<td></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.master</code></td>
<td>
- When using the <code>cloud-url</code> backend must be specified to indicate the K8S master URL to communicate with.
+ When using the <code>cloud</code> backend, can be set to indicate a K8S master URL to communicate with other than
+ the one specified in your K8S config file.
</td>
<td></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.imageTag</code></td>
<td>
- A specific image tag to use, when set assumes images with those tags are already built and available in the specified image
- repository. When set to <code>N/A</code> (the default) fresh images will be built.
+ A specific image tag to use; when set, images with that tag are assumed to be already built and available in the
+ specified image repository. When set to <code>N/A</code> (the default), fresh images will be built.
</td>
<td><code>N/A</code></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.imageTagFile</code></td>
<td>
- A file containing the image tag to use, if no specific image tag is set then fresh images will be built with a generated
- tag and that tag written to this file.
+ A file containing the image tag to use; if no specific image tag is set then fresh images will be built with a
+ generated tag and that tag written to this file.
</td>
<td><code>${project.build.directory}/imageTag.txt</code></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.imageRepo</code></td>
<td>
- The Docker image repository that contains the images to be used if a specific image tag is set or to which the images will
- be pushed to if fresh images are being built.
+ The Docker image repository that contains the images to be used if a specific image tag is set, or to which the
+ images will be pushed if fresh images are being built.
</td>
<td><code>docker.io/kubespark</code></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.namespace</code></td>
<td>
- A specific Kubernetes namespace to run the tests in. If specified then the tests assume that this namespace already exists.
- When not specified a temporary namespace for the tests will be created and deleted as part of the test run.
+ A specific Kubernetes namespace to run the tests in. If specified then the tests assume that this namespace
+ already exists. When not specified, a temporary namespace for the tests will be created and deleted as part of the
+ test run.
</td>
<td></td>
</tr>
<tr>
<td><code>spark.kubernetes.test.serviceAccountName</code></td>
<td>
- A specific Kubernetes service account to use for running the tests. If not specified then the namespaces default service
- account will be used and that must have sufficient permissions or the tests will fail.
+ A specific Kubernetes service account to use for running the tests. If not specified then the namespace's default
+ service account will be used, and that must have sufficient permissions or the tests will fail.
</td>
<td></td>
</tr>