Generate CRD specs, bump to v1beta2 #578

Merged (9 commits) on Sep 13, 2019

2 changes: 1 addition & 1 deletion README.md
@@ -14,7 +14,7 @@

The Kubernetes Operator for Apache Spark is under active development, but backward compatibility of the APIs is guaranteed for beta releases.

-**If you are currently using the `v1alpha1` version of the APIs in your manifests, please update them to use the `v1beta1` version by changing `apiVersion: "sparkoperator.k8s.io/v1alpha1"` to `apiVersion: "sparkoperator.k8s.io/v1beta1"`. You will also need to delete the `v1alpha1` version of the CustomResourceDefinitions named `sparkapplications.sparkoperator.k8s.io` and `scheduledsparkapplications.sparkoperator.k8s.io`, and replace them with the `v1beta1` version either by installing the latest version of the operator or by running `kubectl create -f manifest/spark-operator-crds.yaml`.**
+**If you are currently using the `v1alpha1` or `v1beta1` version of the APIs in your manifests, please update them to use the `v1beta2` version by changing `apiVersion: "sparkoperator.k8s.io/<version>"` to `apiVersion: "sparkoperator.k8s.io/v1beta2"`. You will also need to delete the `previous` version of the CustomResourceDefinitions named `sparkapplications.sparkoperator.k8s.io` and `scheduledsparkapplications.sparkoperator.k8s.io`, and replace them with the `v1beta2` version either by installing the latest version of the operator or by running `kubectl create -f manifest/crds`.**
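
For reference, the migration described above amounts to a handful of commands. The sketch below is illustrative only: `my-spark-app.yaml` is a stand-in for your own manifests, and it assumes the old CRDs are currently installed.

```bash
# Point existing manifests at the new API version (illustrative sed; adjust the file paths).
sed -i 's|sparkoperator.k8s.io/v1alpha1|sparkoperator.k8s.io/v1beta2|g; s|sparkoperator.k8s.io/v1beta1|sparkoperator.k8s.io/v1beta2|g' my-spark-app.yaml

# Delete the old CustomResourceDefinitions...
kubectl delete crd sparkapplications.sparkoperator.k8s.io scheduledsparkapplications.sparkoperator.k8s.io

# ...and install the v1beta2 versions from the new manifest directory.
kubectl create -f manifest/crds

# Verify the new CRDs are registered.
kubectl get crd | grep sparkoperator.k8s.io
```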

Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is currently experimental and implemented using a Kubernetes
[Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), which became beta in Kubernetes 1.9.
2 changes: 1 addition & 1 deletion docs/api.md
@@ -1,7 +1,7 @@
# SparkApplication API

The Kubernetes Operator for Apache Spark uses [CustomResourceDefinitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) named `SparkApplication` and `ScheduledSparkApplication` for specifying one-time Spark applications and Spark applications
-that are supposed to run on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule. Similarly to other kinds of Kubernetes resources, they consist of a specification in a `Spec` field and a `Status` field. The definitions are organized in the following structure. The v1beta1 version of the API definition is implemented [here](../pkg/apis/sparkoperator.k8s.io/v1beta1/types.go).
+that are supposed to run on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule. Similarly to other kinds of Kubernetes resources, they consist of a specification in a `Spec` field and a `Status` field. The definitions are organized in the following structure. The v1beta2 version of the API definition is implemented [here](../pkg/apis/sparkoperator.k8s.io/v1beta2/types.go).

```
ScheduledSparkApplication
7 changes: 7 additions & 0 deletions docs/developer-guide.md
@@ -51,6 +51,7 @@ Before building the operator the first time, run the following commands to get t
$ go get -u k8s.io/code-generator/cmd/client-gen
$ go get -u k8s.io/code-generator/cmd/deepcopy-gen
$ go get -u k8s.io/code-generator/cmd/defaulter-gen
+$ go get -u sigs.k8s.io/controller-tools/cmd/controller-gen
```

To update the auto-generated code, run the following command. (This step is only required if the CRD types have been changed):
@@ -59,6 +60,12 @@
$ go generate
```

+To update the auto-generated CRD definitions, run the following command:
+
+```bash
+$ controller-gen crd:trivialVersions=true,maxDescLen=0 paths="./pkg/apis/sparkoperator.k8s.io/v1beta2" output:crd:artifacts:config=./manifest/crds/
+```
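
A quick way to sanity-check the regenerated manifests is a client-side dry-run apply. This is a sketch rather than part of the official workflow, and it assumes a reachable cluster; newer kubectl releases spell the flag `--dry-run=client`.

```bash
# Validate the generated CRD manifests without persisting them to the cluster.
kubectl apply --dry-run -f ./manifest/crds/
```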

You can verify the current auto-generated code is up to date with:

```bash
4 changes: 2 additions & 2 deletions docs/gcp.md
@@ -43,7 +43,7 @@ The ones set in `core-site.xml` apply to all applications using the image. Also
variable `GCS_PROJECT_ID` must be set when using the image at `gcr.io/ynli-k8s/spark:v2.3.0-gcs`.

```yaml
-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: foo-gcs-bg
@@ -58,7 +58,7 @@ spec:
"google.cloud.auth.service.account.enable": "true"
"google.cloud.auth.service.account.json.keyfile": "/mnt/secrets/key.json"
driver:
-cores: 0.1
+cores: 1
secrets:
- name: "gcs-bq"
path: "/mnt/secrets"
6 changes: 3 additions & 3 deletions docs/quick-start-guide.md
@@ -69,15 +69,15 @@ $ kubectl get sparkapplications spark-pi -o=yaml
This will show something similar to the following:

```yaml
-apiVersion: sparkoperator.k8s.io/v1beta1
+apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
...
spec:
deps: {}
driver:
-coreLimit: 200m
-cores: 0.1
+coreLimit: 1200m
+cores: 1
labels:
version: 2.3.0
memory: 512m
8 changes: 4 additions & 4 deletions docs/user-guide.md
@@ -55,7 +55,7 @@ It also has fields for specifying the unified container image (to use for both t
Below is an example showing part of a `SparkApplication` specification:

```yaml
-apiVersion: sparkoperator.k8s.io/v1beta1
+apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
name: spark-pi
@@ -125,7 +125,7 @@ The following is an example driver specification:
```yaml
spec:
driver:
-cores: 0.1
+cores: 1
coreLimit: 200m
memory: 512m
labels:
@@ -514,7 +514,7 @@ client so effectively the driver gets restarted.
The operator supports running a Spark application on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule using objects of the `ScheduledSparkApplication` custom resource type. A `ScheduledSparkApplication` object specifies a cron schedule on which the application should run and a `SparkApplication` template from which a `SparkApplication` object for each run of the application is created. The following is an example `ScheduledSparkApplication`:

```yaml
-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
name: spark-pi-scheduled
@@ -531,7 +531,7 @@ spec:
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
driver:
-cores: 0.5
+cores: 1
memory: 512m
executor:
cores: 1
6 changes: 3 additions & 3 deletions docs/volcano-integration.md
@@ -23,7 +23,7 @@ $ helm install incubator/sparkoperator --namespace spark-operator --set enableBa

Now, we can run an updated version of a Spark application (with `batchScheduler` configured), for instance:
```yaml
-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -45,8 +45,8 @@ spec:
path: "/tmp"
type: Directory
driver:
-cores: 0.1
-coreLimit: "200m"
+cores: 1
+coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
6 changes: 3 additions & 3 deletions examples/spark-pi-configmap.yaml
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -33,8 +33,8 @@ spec:
configMap:
name: dummy-cm
driver:
-cores: 0.1
-coreLimit: "200m"
+cores: 1
+coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
6 changes: 3 additions & 3 deletions examples/spark-pi-prometheus.yaml
@@ -14,7 +14,7 @@
# limitations under the License.
#

-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -32,8 +32,8 @@ spec:
restartPolicy:
type: Never
driver:
-cores: 0.1
-coreLimit: "200m"
+cores: 1
+coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
6 changes: 3 additions & 3 deletions examples/spark-pi-schedule.yaml
@@ -14,7 +14,7 @@
# limitations under the License.
#

-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
name: spark-pi-scheduled
@@ -32,8 +32,8 @@ spec:
restartPolicy:
type: Never
driver:
-cores: 0.1
-coreLimit: "200m"
+cores: 1
+coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
6 changes: 3 additions & 3 deletions examples/spark-pi.yaml
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
@@ -34,8 +34,8 @@ spec:
path: "/tmp"
type: Directory
driver:
-cores: 0.1
-coreLimit: "200m"
+cores: 1
+coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
6 changes: 3 additions & 3 deletions examples/spark-py-pi.yaml
@@ -16,7 +16,7 @@
# Support for Python is experimental, and requires building SNAPSHOT image of Apache Spark,
# with `imagePullPolicy` set to Always

-apiVersion: "sparkoperator.k8s.io/v1beta1"
+apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-pi
@@ -36,8 +36,8 @@ spec:
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
-cores: 0.1
-coreLimit: "200m"
+cores: 1
+coreLimit: "1200m"
memory: "512m"
labels:
version: 2.4.0
2 changes: 1 addition & 1 deletion hack/update-codegen.sh
@@ -27,7 +27,7 @@ CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${SCRIPT_ROOT}; ls -d -1 ./vendor/k8s.io/code-ge
# instead of the $GOPATH directly. For normal projects this can be dropped.
${CODEGEN_PKG}/generate-groups.sh "all" \
github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/client github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis \
-sparkoperator.k8s.io:v1alpha1,v1beta1 \
+sparkoperator.k8s.io:v1alpha1,v1beta1,v1beta2 \
--go-header-file "$(dirname ${BASH_SOURCE})/custom-boilerplate.go.txt" \
--output-base "$(dirname ${BASH_SOURCE})/../../../.."
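
With `v1beta2` added to the group list, re-running the script regenerates the clientset, informers, and listers for the new version. A minimal sketch, assuming the repository root as the working directory and a vendored `k8s.io/code-generator` as the script expects:

```bash
# Regenerate typed clients for all listed API versions, including v1beta2.
bash hack/update-codegen.sh
```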

8 changes: 0 additions & 8 deletions main.go
@@ -48,7 +48,6 @@ import (
operatorConfig "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/config"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/controller/scheduledsparkapplication"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/controller/sparkapplication"
-"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/crd"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/util"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/webhook"
)
@@ -155,13 +154,6 @@ func main() {
batchSchedulerMgr = batchscheduler.NewSchedulerManager(config)
}

-if *installCRDs {
-err = crd.CreateOrUpdateCRDs(apiExtensionsClient)
-if err != nil {
-glog.Fatal(err)
-}
-}

crInformerFactory := buildCustomResourceInformerFactory(crClient)
podInformerFactory := buildPodInformerFactory(kubeClient)
