PODAUTO-86: Pull test container dockerfile, e2e targets out of CI into keda repo, reenable CPU test #19
@@ -0,0 +1,14 @@
# This dockerfile is intended to build something approximating the upstream test
# image, but for use in OpenShift CI. Upstream keda bumps the keda-tools image version
# when they feel like it, so we might just need to pay attention when we do releases.
FROM ghcr.io/kedacore/keda-tools:1.20.5
COPY . /src
RUN chmod 777 -R /src
WORKDIR /src
RUN mkdir -p /go && chmod 777 -R /go
RUN mkdir -p /bin && chmod 777 -R /bin
ENV GOCACHE=/src/.cache
ENV USE_SUDO=false
ENV PATH=/src/helm:/src/:$PATH
ENV HELM_INSTALL_DIR=/src
RUN curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && chmod 700 get_helm.sh && ./get_helm.sh
@@ -119,6 +119,28 @@ e2e-test-clean-crds: ## Delete all scaled objects and jobs across all namespaces
e2e-test-clean: get-cluster-context ## Delete all namespaces labeled with type=e2e
	kubectl delete ns -l type=e2e

# The OpenShift tests are split into 3 targets because when we test the CMA operator,
# we want to do the setup and cleanup with the operator, but still run the test suite
# from the test image, so we need that granularity.
.PHONY: e2e-test-openshift-setup
e2e-test-openshift-setup: ## Setup the tests for OpenShift
	@echo "--- Performing Setup ---"
	cd tests; go test -v -timeout 15m -tags e2e ./utils/setup_test.go

.PHONY: e2e-test-openshift
e2e-test-openshift: ## Run tests for OpenShift
	@echo "--- Running Internal Tests ---"
	cd tests; go test -p 1 -v -timeout 60m -tags e2e $(shell cd tests; go list -tags e2e ./internals/... | grep -v internals/global_custom_ca)
	@echo "--- Running Scaler Tests ---"
	cd tests; go test -p 1 -v -timeout 60m -tags e2e ./scalers/cpu/... ./scalers/kafka/... ./scalers/memory/... ./scalers/prometheus/...
	@echo "--- Running Sequential Tests ---"
	cd tests; go test -p 1 -v -timeout 60m -tags e2e ./sequential/...

Review thread on e2e-test-openshift:

Reviewer: Does the

Author: It does, but not necessarily with the makefile -- there is a comment right above that tries to explain it, but we use the same test image for operator and operand CI. In operator CI, we want to do the setup and teardown with operator-sdk using the bundle, and in operand CI we want to run the setup and cleanup targets because there is no bundle. So if I add the dependency as you suggest, it would try to run setup every time we did a test run, which we don't want when we test the operator, which is why they aren't marked dependent. If you have a better idea, I'm open to suggestions 😄

Reviewer: The only option I can think of is to make the code in

.PHONY: e2e-test-openshift-clean
e2e-test-openshift-clean: ## Cleanup the test environment for OpenShift
	@echo "--- Cleaning Up ---"
	cd tests; go test -v -timeout 60m -tags e2e ./utils/cleanup_test.go

.PHONY: smoke-test
smoke-test: ## Run e2e tests against Kubernetes cluster configured in ~/.kube/config.
	./tests/run-smoke-tests.sh
@@ -10,6 +10,7 @@ import (
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/json"
	"encoding/pem"
	"fmt"
	"io"
@@ -458,6 +459,33 @@ func WaitForAllPodRunningInNamespace(t *testing.T, kc *kubernetes.Clientset, nam
	return false
}

// Waits until the Horizontal Pod Autoscaler for the scaledObject reports that it has metrics available
// to calculate, or until the number of iterations are done, whichever happens first.
func WaitForHPAMetricsToPopulate(t *testing.T, kc *kubernetes.Clientset, name, namespace string,
	iterations, intervalSeconds int) bool {
	totalWaitDuration := time.Duration(iterations) * time.Duration(intervalSeconds) * time.Second
	startedWaiting := time.Now()
	for i := 0; i < iterations; i++ {
		t.Logf("Waiting up to %s for HPA to populate metrics - %s so far", totalWaitDuration, time.Since(startedWaiting).Round(time.Second))

		hpa, _ := kc.AutoscalingV2().HorizontalPodAutoscalers(namespace).Get(context.Background(), name, metav1.GetOptions{})
		if hpa.Status.CurrentMetrics != nil {
			for _, currentMetric := range hpa.Status.CurrentMetrics {
				// When testing on a kind cluster at least, an empty metricStatus object with a blank type shows up first,
				// so we need to make sure we have *actual* resource metrics before we return
				if currentMetric.Type != "" {
					j, _ := json.MarshalIndent(hpa.Status.CurrentMetrics, " ", " ")
					t.Logf("HPA has metrics after %s: %s", time.Since(startedWaiting), j)
					return true
				}
			}
		}

		time.Sleep(time.Duration(intervalSeconds) * time.Second)
	}
	return false
}

Review thread on WaitForHPAMetricsToPopulate:

Reviewer: Seems like you can use something like wait.PollUntilContextTimeout.

Author: Oh, I absolutely can, but the entire rest of their test suite is set up like this, so it was more of a "when in Rome" type of thing doing it this way (at least until we can propose/do some type of cleanup/refactor vs. a hodgepodge of approaches). Per Zbynek's feedback I was going to PR the CPU test part of this (which includes this) upstream. If you think I should just put it up using wait.PollUntilContextTimeout and let them tell me no, I will 😄

Reviewer: Then it looks like a larger refactor is needed if we want to go the

// Waits until deployment ready replica count hits target or number of iterations are done.
func WaitForDeploymentReplicaReadyCount(t *testing.T, kc *kubernetes.Clientset, name, namespace string,
	target, iterations, intervalSeconds int) bool {
Review thread:

Reviewer: Is that not possible? I am sure I am missing some requirement of go test.

Author: It is possible, but their entire test doc runs everything out of that tests directory, so I thought there was a % chance the test suite might care or make assumptions about how it was being done, so I left it alone: https://github.com/kedacore/keda/tree/main/tests#running-tests

Reviewer: OK, best to leave it be then.