Enable Knative Kafka w/o upstream e2e tests and w/o smoke tests #587

Merged
Changes from 1 commit
Enable KnativeKafka
aliok committed Oct 21, 2020
commit 54d28ef91e8cab8006d2aec63b30365a269b5bee
6 changes: 3 additions & 3 deletions Makefile
@@ -32,13 +32,13 @@ test-unit:
go test ./knative-operator/...
go test ./serving/ingress/...

# Run only E2E tests from the current repo.
# Run only SERVING/EVENTING E2E tests from the current repo.
test-e2e:
./test/e2e-tests.sh

# TODO: that will - soon... run the e2e tests with Kafka
# Run E2E tests from the current repo for serving+eventing+knativeKafka
test-e2e-with-kafka:
./test/e2e-tests.sh
	INSTALL_KAFKA=true TEST_KNATIVE_KAFKA=true ./test/e2e-tests.sh

# Run both unit and E2E tests from the current repo.
test-operator: test-unit test-e2e
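The Kafka-enabled target passes its flags as environment assignments prefixed to the command. A minimal sketch of how such prefix assignments behave (the variable names come from this Makefile; the `sh -c` echo is only for illustration):

```shell
# Prefix assignments are scoped to the single command they precede,
# which is how test-e2e-with-kafka is meant to hand its flags
# to test/e2e-tests.sh:
out=$(INSTALL_KAFKA=true TEST_KNATIVE_KAFKA=true sh -c 'echo "$INSTALL_KAFKA $TEST_KNATIVE_KAFKA"')
echo "$out"   # true true

# The assignments do not leak into the calling shell:
echo "${INSTALL_KAFKA:-unset}"   # unset
```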
5 changes: 3 additions & 2 deletions README.md
@@ -47,7 +47,7 @@ pushed to your docker repository.

Use the appropriate make targets or scripts in `hack`:

- `make dev`: Deploys the serverless-operator without deploying Knative Serving and Eventing.
- `make dev`: Deploys the serverless-operator without deploying Knative Serving, Eventing and Kafka components.
- `make install`: Scales the cluster appropriately, deploys serverless-operator, Knative Serving and Eventing.
- `make install-serving`: Scales the cluster appropriately, deploys serverless-operator and Knative Serving.
- `make install-eventing`: Scales the cluster appropriately, deploys serverless-operator and Knative Eventing.
@@ -74,7 +74,8 @@ make release-files
#### serverless-operator tests

- `make test-unit`: Runs unit tests.
- `make test-e2e`: Scales, installs and runs E2E tests.
- `make test-e2e`: Scales, installs and runs E2E tests (except for Knative Kafka components).
- `make test-e2e-with-kafka`: Scales, installs and runs E2E tests (also tests Knative Kafka components).
- `make install-mesh test-e2e`: Scales, installs and runs E2E tests, including ServiceMesh integration tests.
- `make test-operator`: Runs unit and E2E tests.

1 change: 1 addition & 0 deletions hack/generate/dockerfile.sh
@@ -15,6 +15,7 @@ values[DEFAULT_CHANNEL]="$(metadata.get olm.channels.default)"
values[VERSION]="$(metadata.get project.version)"
values[SERVING_VERSION]="$(metadata.get dependencies.serving)"
values[EVENTING_VERSION]="$(metadata.get dependencies.eventing)"
values[EVENTING_CONTRIB_VERSION]="$(metadata.get dependencies.eventing_contrib)"
values[GOLANG_VERSION]="$(metadata.get requirements.golang)"
values[OCP_TARGET_VLIST]="$(metadata.get 'requirements.ocp.*' | sed 's/^/v/' | paste -sd ',' -)"

2 changes: 1 addition & 1 deletion hack/lib/__sources__.bash
@@ -1,6 +1,6 @@
#!/usr/bin/env bash

declare -a __sources=(metadata vars common ui scaleup namespaces catalogsource serverless tracing mesh)
declare -a __sources=(metadata vars common ui scaleup namespaces catalogsource serverless tracing mesh strimzi)

for source in "${__sources[@]}"; do
# shellcheck disable=SC1091,SC1090
40 changes: 37 additions & 3 deletions hack/lib/serverless.bash
@@ -4,7 +4,8 @@ function ensure_serverless_installed {
logger.info 'Check if Serverless is installed'
local prev=${1:-false}
if oc get knativeserving.operator.knative.dev knative-serving -n "${SERVING_NAMESPACE}" >/dev/null 2>&1 && \
oc get knativeeventing.operator.knative.dev knative-eventing -n "${EVENTING_NAMESPACE}" >/dev/null 2>&1
oc get knativeeventing.operator.knative.dev knative-eventing -n "${EVENTING_NAMESPACE}" >/dev/null 2>&1 && \
oc get knativekafka.operator.serverless.openshift.io knative-kafka -n "${EVENTING_NAMESPACE}" >/dev/null 2>&1
then
logger.success 'Serverless is already installed.'
return 0
@@ -50,6 +51,9 @@ function install_serverless_latest {
if [[ $INSTALL_EVENTING == "true" ]]; then
deploy_knativeeventing_cr || return $?
fi
if [[ $INSTALL_KAFKA == "true" ]]; then
deploy_knativekafka_cr || return $?
fi
}

function deploy_serverless_operator_latest {
@@ -156,6 +160,32 @@ EOF
logger.success 'Knative Eventing has been installed successfully.'
}

function deploy_knativekafka_cr {
logger.info 'Deploy Knative Kafka'

# Wait for the CRD to appear
timeout 900 "[[ \$(oc get crd | grep -c knativekafkas.operator.serverless.openshift.io) -eq 0 ]]" || return 6

# Install Knative Kafka
cat <<EOF | oc apply -f - || return $?
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
name: knative-kafka
namespace: ${EVENTING_NAMESPACE}
spec:
source:
enabled: true
channel:
enabled: true
bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092
EOF

timeout 900 '[[ $(oc get knativekafkas.operator.serverless.openshift.io knative-kafka -n $EVENTING_NAMESPACE -o=jsonpath="{.status.conditions[?(@.type==\"Ready\")].status}") != True ]]' || return 7

logger.success 'Knative Kafka has been installed successfully.'
}
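The `timeout 900 "…"` calls above are not coreutils timeout(1): they use a repo-local helper that keeps polling a quoted condition until it turns false or the deadline passes. A minimal sketch of such a helper (an assumption for illustration; the real implementation lives elsewhere under `hack/lib`):

```shell
# Poll the quoted condition once per second; return 0 as soon as it
# evaluates false, or 1 once the deadline (in seconds) is exceeded.
function timeout {
  local seconds="$1" condition="$2" waited=0
  while eval "${condition}"; do
    sleep 1
    waited=$((waited + 1))
    if (( waited >= seconds )); then
      return 1
    fi
  done
  return 0
}

# Example: wait at most 5s until a marker file disappears.
# timeout 5 "[[ -f /tmp/marker ]]" || echo "gave up"
```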

function teardown_serverless {
logger.warn '😭 Teardown Serverless...'

@@ -170,8 +200,12 @@ function teardown_serverless {
logger.info 'Removing KnativeEventing CR'
oc delete knativeeventing.operator.knative.dev knative-eventing -n "${EVENTING_NAMESPACE}" || return $?
fi
logger.info 'Ensure no knative eventing pods running'
timeout 600 "[[ \$(oc get pods -n ${EVENTING_NAMESPACE} --field-selector=status.phase!=Succeeded -o jsonpath='{.items}') != '[]' ]]" || return 9
if oc get knativekafkas.operator.serverless.openshift.io knative-kafka -n "${EVENTING_NAMESPACE}" >/dev/null 2>&1; then
logger.info 'Removing KnativeKafka CR'
oc delete knativekafka.operator.serverless.openshift.io knative-kafka -n "${EVENTING_NAMESPACE}" || return $?
fi
logger.info 'Ensure no knative eventing or knative kafka pods running'
timeout 600 "[[ \$(oc get pods -n ${EVENTING_NAMESPACE} --field-selector=status.phase!=Succeeded -o jsonpath='{.items}') != '[]' ]]" || return 10

oc delete subscriptions.operators.coreos.com -n "${OPERATORS_NAMESPACE}" "${OPERATOR}" 2>/dev/null
for ip in $(oc get installplan -n "${OPERATORS_NAMESPACE}" | grep serverless-operator | cut -f1 -d' '); do
36 changes: 36 additions & 0 deletions hack/lib/strimzi.bash
@@ -0,0 +1,36 @@
#!/usr/bin/env bash

function install_strimzi {
strimzi_version=$(curl -s https://github.com/strimzi/strimzi-kafka-operator/releases/latest | awk -F 'tag/' '{print $2}' | awk -F '"' '{print $1}')
header "Strimzi install"
oc create namespace kafka
oc -n kafka apply --selector strimzi.io/crd-install=true -f "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${strimzi_version}/strimzi-cluster-operator-${strimzi_version}.yaml"
curl -L "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${strimzi_version}/strimzi-cluster-operator-${strimzi_version}.yaml" \
| sed 's/namespace: .*/namespace: kafka/' \
| oc -n kafka apply -f -

header "Applying Strimzi Cluster file"
oc -n kafka apply -f "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/${strimzi_version}/examples/kafka/kafka-persistent.yaml"

header "Waiting for Strimzi to become ready"
oc wait deployment --all --timeout=-1s --for=condition=Available -n kafka
}

function uninstall_strimzi {
strimzi_version=$(curl -s https://github.com/strimzi/strimzi-kafka-operator/releases/latest | awk -F 'tag/' '{print $2}' | awk -F '"' '{print $1}')

header "Deleting Kafka instance"
oc -n kafka delete -f "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/${strimzi_version}/examples/kafka/kafka-persistent.yaml"

header "Waiting for Kafka to get deleted"
timeout 600 "[[ \$(oc get kafkas -n kafka -o jsonpath='{.items}') != '[]' ]]" || return 2

header "Deleting Strimzi Cluster file"
curl -L "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${strimzi_version}/strimzi-cluster-operator-${strimzi_version}.yaml" \
| sed 's/namespace: .*/namespace: kafka/' \
| oc -n kafka delete -f -

oc -n kafka delete --selector strimzi.io/crd-install=true -f "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${strimzi_version}/strimzi-cluster-operator-${strimzi_version}.yaml"

oc delete namespace kafka
}
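The version lookup at the top of both functions parses the small redirect page GitHub serves for `/releases/latest`. The same two-stage awk extraction, run on a captured snippet of such a page (the HTML literal below is illustrative, not a recorded response):

```shell
# GitHub answers /releases/latest with a redirect page containing the
# tag URL; splitting on 'tag/' and then on '"' isolates the version.
html='<html><body>You are being <a href="https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.20.0">redirected</a>.</body></html>'
version=$(printf '%s' "$html" | awk -F 'tag/' '{print $2}' | awk -F '"' '{print $1}')
echo "$version"   # 0.20.0
```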
4 changes: 2 additions & 2 deletions hack/lib/tracing.bash
@@ -54,11 +54,11 @@ spec:
memory: 1000Mi
requests:
memory: 256Mi
---
---
EOF

logger.info "Waiting until Zipkin is available"
timeout 600 "[[ \$(oc get pods -n ${ZIPKIN_NAMESPACE} --field-selector=status.phase!=Succeeded -o jsonpath='{.items}') != '[]' ]]" || return 1
kubectl wait deployment --all --timeout=600s --for=condition=Available -n ${ZIPKIN_NAMESPACE} || return 1
}

function enable_eventing_tracing {
9 changes: 7 additions & 2 deletions hack/lib/vars.bash
@@ -14,6 +14,7 @@ source "$(dirname "${BASH_SOURCE[0]}")/../../test/vendor/knative.dev/test-infra/
# Adjust these when upgrading the knative versions.
export KNATIVE_SERVING_VERSION="${KNATIVE_SERVING_VERSION:-v$(metadata.get dependencies.serving)}"
export KNATIVE_EVENTING_VERSION="${KNATIVE_EVENTING_VERSION:-v$(metadata.get dependencies.eventing)}"
export KNATIVE_EVENTING_CONTRIB_VERSION="${KNATIVE_EVENTING_CONTRIB_VERSION:-v$(metadata.get dependencies.eventing_contrib)}"

CURRENT_CSV="$(metadata.get project.name).v$(metadata.get project.version)"
PREVIOUS_CSV="$(metadata.get project.name).v$(metadata.get olm.replaces)"
@@ -22,6 +23,7 @@ export CURRENT_CSV PREVIOUS_CSV
# Directories below are filled with source code by ci-operator
export KNATIVE_SERVING_HOME="${GOPATH}/src/knative.dev/serving"
export KNATIVE_EVENTING_HOME="${GOPATH}/src/knative.dev/eventing"
export KNATIVE_EVENTING_CONTRIB_HOME="${GOPATH}/src/knative.dev/eventing-contrib"

export CATALOG_SOURCE_FILENAME="${CATALOG_SOURCE_FILENAME:-catalogsource-ci.yaml}"
export DOCKER_REPO_OVERRIDE="${DOCKER_REPO_OVERRIDE:-}"
@@ -36,6 +38,7 @@ export OPERATORS_NAMESPACE="${OPERATORS_NAMESPACE:-openshift-serverless}"
export SERVERLESS_NAMESPACE="${SERVERLESS_NAMESPACE:-serverless}"
export SERVING_NAMESPACE="${SERVING_NAMESPACE:-knative-serving}"
export EVENTING_NAMESPACE="${EVENTING_NAMESPACE:-knative-eventing}"
export EVENTING_SOURCES_NAMESPACE="${EVENTING_SOURCES_NAMESPACE:-knative-sources}"
# eventing e2e and conformance tests use a container for tracing tests that has hardcoded `istio-system` in it
export ZIPKIN_NAMESPACE="${ZIPKIN_NAMESPACE:-istio-system}"

@@ -57,7 +60,9 @@ export OLM_UPGRADE_CHANNEL="${OLM_UPGRADE_CHANNEL:-"$OLM_CHANNEL"}"
export OLM_SOURCE="${OLM_SOURCE:-"$OPERATOR"}"
export TEST_KNATIVE_UPGRADE="${TEST_KNATIVE_UPGRADE:-true}"
export TEST_KNATIVE_E2E="${TEST_KNATIVE_E2E:-true}"
export TEST_KNATIVE_KAFKA="${TEST_KNATIVE_KAFKA:-false}"

# Makefile triggers for modular install
export INSTALL_SERVING="${INSTALL_SERVING:-"true"}"
export INSTALL_EVENTING="${INSTALL_EVENTING:-"true"}"
export INSTALL_SERVING="${INSTALL_SERVING:-true}"
export INSTALL_EVENTING="${INSTALL_EVENTING:-true}"
export INSTALL_KAFKA="${INSTALL_KAFKA:-false}"
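The `${VAR:-default}` expansions used for all of these exports let callers override any flag from the environment while keeping a safe fallback. A quick sketch of the semantics (the `DEMO_`-prefixed variable is hypothetical, used only to avoid touching the real flags):

```shell
# Unset variable: the default after ':-' applies.
unset DEMO_INSTALL_KAFKA
flag="${DEMO_INSTALL_KAFKA:-false}"
echo "$flag"   # false

# Pre-set variable: the caller's value wins over the default.
DEMO_INSTALL_KAFKA=true
flag="${DEMO_INSTALL_KAFKA:-false}"
echo "$flag"   # true
```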
16 changes: 16 additions & 0 deletions hack/strimzi.sh
@@ -0,0 +1,16 @@
#!/usr/bin/env bash

# This script can be used to install Strimzi and create a Kafka instance on cluster.
#
# shellcheck disable=SC1091,SC1090
source "$(dirname "${BASH_SOURCE[0]}")/lib/__sources__.bash"

set -Eeuo pipefail

if [[ "${UNINSTALL_STRIMZI:-false}" == "true" ]]; then
uninstall_strimzi || exit 1
else
install_strimzi || exit 2
fi

exit 0
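The script's single decision point is the `UNINSTALL_STRIMZI` flag. Its branch logic can be exercised in isolation; the function below is a stand-in written for illustration, not part of the repo:

```shell
# Mirror of the install/uninstall gate in hack/strimzi.sh: default to
# installing, and tear down only when the flag is explicitly "true".
strimzi_action() {
  if [[ "${UNINSTALL_STRIMZI:-false}" == "true" ]]; then
    echo "uninstall"
  else
    echo "install"
  fi
}

unset UNINSTALL_STRIMZI
strimzi_action   # install

UNINSTALL_STRIMZI=true
strimzi_action   # uninstall
```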
@@ -107,9 +107,6 @@ spec:
was last processed by the controller.
format: int64
type: integer
version:
description: The version of the installed release
type: string
type: object
version: v1alpha1
versions:
9 changes: 4 additions & 5 deletions knative-operator/pkg/controller/add_knativekafka.go
@@ -1,11 +1,10 @@
package controller

//import (
// "github.com/openshift-knative/serverless-operator/knative-operator/pkg/controller/knativekafka"
//)
import (
"github.com/openshift-knative/serverless-operator/knative-operator/pkg/controller/knativekafka"
)

func init() {
// AddToManagerFuncs is a list of functions to create controllers and add them to a manager.
// TODO: temp change to disable KnativeKafka reconciliation
// AddToManagerFuncs = append(AddToManagerFuncs, knativekafka.Add)
AddToManagerFuncs = append(AddToManagerFuncs, knativekafka.Add)
}
15 changes: 7 additions & 8 deletions knative-operator/pkg/webhook/add_knativekafka.go
@@ -1,10 +1,9 @@
package webhook

//
//import (
// kk "github.com/openshift-knative/serverless-operator/knative-operator/pkg/webhook/knativekafka"
//)
//
//func init() {
// AddToManagerFuncs = append(AddToManagerFuncs, kk.ValidatingWebhook)
//}
import (
kk "github.com/openshift-knative/serverless-operator/knative-operator/pkg/webhook/knativekafka"
)

func init() {
AddToManagerFuncs = append(AddToManagerFuncs, kk.ValidatingWebhook)
}
@@ -0,0 +1,115 @@
apiVersion: apiextensions.k8s.io/v1beta1
Review thread on this line:

Contributor: Should this be the v1-based CRD?

Contributor (author): +1 on v1 CRD.

Member: I have moved my "old" PR for CRD_v1 to Ali's branch: aliok#5. See the old reference for the size of the diff; I've now closed my old CRD_v1 PR.

Contributor (author): Gonna merge that after CI reports success/fail here. Don't want to interrupt it; I want to see the results.

Contributor (author): Let's do this separately. I had trouble doing this and received strange OLM errors, perhaps unrelated. Doing it in a separate PR makes more sense anyway, since we already have the CRD with v1beta1 in master and that change is not in this PR's scope.

Member: Did my PR for your branch not work @aliok? But I can do it later as a separate PR 🤷🏻‍♂️

Contributor (author): Yeah, let's do that separately once we merge this PR.
kind: CustomResourceDefinition
metadata:
name: knativekafkas.operator.serverless.openshift.io
spec:
group: operator.serverless.openshift.io
names:
kind: KnativeKafka
listKind: KnativeKafkaList
plural: knativekafkas
singular: knativekafka
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: KnativeKafka is the Schema for the knativekafkas API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
channel:
description: Allows configuration for KafkaChannel installation
properties:
bootstrapServers:
description: BootstrapServers is comma separated string of bootstrapservers
that the KafkaChannels will use
type: string
enabled:
description: Enabled defines if the KafkaChannel installation is
enabled
type: boolean
required:
- enabled
type: object
source:
description: Allows configuration for KafkaSource installation
properties:
enabled:
description: Enabled defines if the KafkaSource installation is
enabled
type: boolean
required:
- enabled
type: object
required:
- channel
- source
type: object
status:
properties:
annotations:
additionalProperties:
type: string
description: Annotations is additional Status fields for the Resource
to save some additional State as well as convey more information to
the user. This is roughly akin to Annotations on any k8s resource,
just the reconciler conveying richer information outwards.
type: object
conditions:
description: Conditions the latest available observations of a resource's
current state. +patchMergeKey=type +patchStrategy=merge
items:
properties:
lastTransitionTime:
description: LastTransitionTime is the last time the condition
transitioned from one status to another. We use VolatileTime
in place of metav1.Time to exclude this from creating equality.Semantic
differences (all other things held constant).
type: string
message:
description: A human readable message indicating details about
the transition.
type: string
reason:
description: The reason for the condition's last transition.
type: string
severity:
description: Severity with which to treat failures of this type
of condition. When this is not specified, it defaults to Error.
type: string
status:
description: Status of the condition, one of True, False, Unknown.
+required
type: string
type:
description: Type of condition. +required
type: string
required:
- type
- status
type: object
type: array
observedGeneration:
description: ObservedGeneration is the 'Generation' of the Service that
was last processed by the controller.
format: int64
type: integer
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true