PODAUTO-86: Pull test container dockerfile, e2e targets out of CI into keda repo, reenable CPU test #19
Conversation
@jkyros: This pull request references PODAUTO-86 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.16.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
tests/scalers/cpu/cpu_test.go (outdated):

// window straight from the adapter-config configmap in openshift-monitoring, but then the test would be
// tied to a specific metrics server implementation, so we just wait up to 10 minutes for the metrics before
// we proceed with the test.
require.True(t, WaitForHPAMetricsToPopulate(t, kc, deploymentName, testNamespace, 120, 5),
This would be a great addition upstream, IMHO. (Just remove the OpenShift-specific comments.)
Thanks @zroubalik! I made it generic and put it up: kedacore#5294. I can adjust it however we need to.
Force-pushed from 7c32988 to 9d1adf5.
cd tests; go test -v -timeout 15m -tags e2e ./utils/setup_test.go

.PHONY: e2e-test-openshift
e2e-test-openshift: ## Run tests for OpenShift
Does the e2e-test-openshift target require that setup be done? If yes, then you should add that dependency here.
It does, but not necessarily via the Makefile — there is a comment right above that tries to explain it. We use the same test image for operator and operand CI: in operator CI we want to do the setup and teardown with operator-sdk using the bundle, and in operand CI we want to run the setup and cleanup targets because there is no bundle.
If I added the dependency as you suggest, it would run setup every time we did a test run, which we don't want when we test the operator — which is why they aren't marked as dependent.
If you have a better idea, I'm open to suggestions 😄
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The only option I can think of is to make the code in setup_test.go check that what is expected is already present, and do whatever is needed if it is not. But that might be a bit of work. PS: this is without me looking deeply at the code 😉
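Another hedged option, staying in the Makefile: make the setup prerequisite opt-in via a variable, so operand CI can request it while operator CI (which does setup with operator-sdk and the bundle) leaves it off. The variable name E2E_SETUP_DEP and the echo recipe line are hypothetical; the target names come from this PR:

```makefile
# Hypothetical sketch: operand CI runs `make e2e-test-openshift E2E_SETUP_DEP=e2e-test-openshift-setup`,
# operator CI runs plain `make e2e-test-openshift` and skips setup.
E2E_SETUP_DEP ?=

.PHONY: e2e-test-openshift
e2e-test-openshift: $(E2E_SETUP_DEP) ## Run tests for OpenShift
	@echo "--- Running OpenShift e2e tests ---"  # existing test recipe goes here
```

This keeps a single target for both CI flows at the cost of one extra variable in the CI invocation.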
.PHONY: e2e-test-openshift-setup
e2e-test-openshift-setup: ## Setup the tests for OpenShift
	@echo "--- Performing Setup ---"
	cd tests; go test -v -timeout 15m -tags e2e ./utils/setup_test.go
Suggested change:
- cd tests; go test -v -timeout 15m -tags e2e ./utils/setup_test.go
+ go test -v -timeout 15m -tags e2e ./tests/utils/setup_test.go

Is that not possible? I am sure I am missing some requirement of go test.
It is possible, but their entire test doc runs everything out of that tests directory, so I thought there was a chance the test suite might care or make assumptions about how it was being invoked, so I left it alone: https://github.com/kedacore/keda/tree/main/tests#running-tests
OK, best to leave it be then
@@ -458,6 +459,27 @@ func WaitForAllPodRunningInNamespace(t *testing.T, kc *kubernetes.Clientset, nam
	return false
}

// Waits until the Horizontal Pod Autoscaler for the scaledObject reports that it has metrics available
// to calculate or until the number of iterations are done.
func WaitForHPAMetricsToPopulate(t *testing.T, kc *kubernetes.Clientset, name, namespace string,
Seems like you can use something like wait.PollUntilContextTimeout.
Oh, I absolutely can, but the entire rest of their test suite is set up like this, so doing it this way was more of a "when in Rome" thing (at least until we can propose some kind of cleanup/refactor, versus a hodgepodge of approaches).
Per Zbynek's feedback I was going to PR the CPU test part of this (which includes this helper) upstream.
If you think I should just put it up using wait.PollUntilContextTimeout and let them tell me no, I will 😄
Then it looks like a larger refactor is needed if we want to go the wait route.
* Pull test container dockerfile out of CI and into keda repo

Previously we were building the test container in CI from a dockerfile_literal, which was kind of hacky and more difficult to manage than having it here in the keda repo. This pulls that dockerfile out of CI and into a Dockerfile.tests, which we now just reference from CI.

* Add Makefile targets for OpenShift tests

We stuffed those tests into CI quickly so we had something, and when we did we didn't heavily consider ergonomics. Now that we find ourselves having to enable additional tests for fixes and new features, it will be much easier in the long run if we can manage the test targets here in the repo, so we don't have to put in a separate PR to the release repo to see if our changes work. This adds some e2e-test-openshift* Makefile targets that we can point at whatever we need; once CI is updated, it can just call those targets, whatever they happen to entail.

* Reenable CPU scaler test

Now that we have figured out how the CPU test was broken, we can add it back into the testing since it's supported. This adds the CPU test to the e2e-test-openshift Makefile target, so when CI calls it, it will run with the rest of the scaler tests.
The CPU scaler test assumes a default metrics window of 30s, so those testing on platforms where it is set to a larger value will potentially fail the CPU scaler test because the metrics won't be ready by the time the test starts. This:

- Adds a helper that waits for either the metrics to show up in the HPA, or for some amount of time to pass, whichever happens first
- Uses said helper to ensure that the metrics are ready before the CPU test starts testing scaling

Signed-off-by: John Kyros <jkyros@redhat.com>
Force-pushed from 9d1adf5 to 2aa3230.
PR'd the CPU test fix upstream and split that commit out to reference the upstream PR as a "carry until". If I need to break the CPU part out into a separate PR, I will 😄
This sat for a while, and it shouldn't matter because we didn't change anything else, but just in case:
@jkyros: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jkyros, joelsmith. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Originally I was going to just fix the CPU test here and then go turn it back on in CI, but we needed to do something like this anyway at some point so I figured I'd just do it now and save us some time.
This:

- Pulls the test container dockerfile out of CI and into Dockerfile.tests here in the repo
- Adds e2e-test-openshift* Makefile targets for CI to call
- Reenables the CPU scaler test
Intent is that this will supersede #18, which I will close in favor of this.
I did test this against the branch directly, and the rehearsals were clean: openshift/release#46762