Declarative integration test framework #40

Conversation
test/declarative_test_fwk/model.py (Outdated)

    def description(self) -> str:
        return self.config.get("description", "")

    def run(self) -> None:
Run logic can be moved to separate file (example: runner.py) later.
Added runner.py
/test elasticache-kind-e2e
Super interesting ideas explored here. Love the idea of moving a lot of the testing to declarative, as most tests run in a fairly consistent CRUD manner. Just have some questions about implementation choices:
Thanks for your inputs @RedbackThomson
In the initial revision of this PR, I considered each step to have custom response details specified in it. This was for flexibility, so that scenarios that require multiple CRs, for example pre-req dependency CRs (example: User/UserGroup CR for a ReplicationGroup CR), can have steps to create them. This information is mixed into each step such that details specified in the step take precedence.
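The precedence described above (step-level details win over suite-level defaults) can be sketched as a recursive dict merge. This is a hypothetical illustration, not the PR's actual code; all names here are made up:

```python
def merge_step_details(defaults: dict, step_overrides: dict) -> dict:
    """Return a new dict where values in step_overrides win over defaults.

    Nested dicts are merged recursively; scalar values are replaced.
    """
    merged = dict(defaults)
    for key, value in step_overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_step_details(merged[key], value)
        else:
            merged[key] = value
    return merged


# Suite-level defaults plus a step-level addition: both conditions survive.
suite_defaults = {"expect": {"status": {"conditions": {"ACK.ResourceSynced": "True"}}}}
step = {"expect": {"status": {"conditions": {"ACK.Terminal": "False"}}}}
merged = merge_step_details(suite_defaults, step)
```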
Yes, the
I considered it to allowlist specific custom resource types in a test suite run. This is similar to the service marker we have currently, but its scope is more fine-grained, down to the custom resource type. As a single Python file is used for all test scenarios, I expect it to provide input configurability to run a subset of scenarios from the entire test suite.
The following helps analyze test step failures:
@kumargauravsharma once again, I'm diggin' the move to declarative testing. ++ on that!
I have some suggestions inline that I think might make the test scenarios easier to write and read.
test/declarative_test_fwk/model.py (Outdated)

    """
    Represents a declarative test step
    """
    indent = "\t\t"
unused?
Removed the unused variable.
    @@ -0,0 +1,40 @@
    id: "RG_CMD_basic_create_update"
recommend naming the file test/e2e/scenarios/replication_group/cmd_basic_create_update.yaml or something like that instead of scenario1.yaml :)
Updated file name.
test/e2e/scenarios/scenario1.yaml (Outdated)

    usecases:
      - replication_group
      - cmd
    #marks:
    #  - slow
    #  - blocked
What's the difference between usecases and marks?
I added some details related to it at: #40 (comment)
I think marks are currently used to include tests that are otherwise excluded by default.
The intent of usecases was to have 'selector'/'filter' behavior: when a usecase is supplied to the pytest run command, only tests that match it are included in the test run.
Note: the behavior is not yet implemented.
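The proposed (and, per the note above, not yet implemented) selector behavior could look something like the following sketch. All names are illustrative assumptions, not code from this PR:

```python
def select_scenarios(scenarios, requested_usecases):
    """Return only the scenarios tagged with at least one requested usecase.

    With no usecases requested, the whole suite runs (filter disabled).
    """
    if not requested_usecases:
        return list(scenarios)
    return [
        s for s in scenarios
        if set(requested_usecases) & set(s.get("usecases", []))
    ]


# Example: `--usecases cmd` would select only the CMD scenario.
scenarios = [
    {"id": "RG_CMD_basic_create_update", "usecases": ["replication_group", "cmd"]},
    {"id": "user_create", "usecases": ["user"]},
]
selected = select_scenarios(scenarios, ["cmd"])
```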
    spec:
      engine: redis
      replicationGroupID: test$RANDOM_SUFFIX
      replicationGroupDescription: Basic create and update of CMD RG
      cacheNodeType: cache.t3.micro
      numNodeGroups: 1
It's cool to allow spec values directly in the scenario YAML definition. I'd recommend also allowing them to be loaded from a filepath as well.
Updated code. create, patch, delete now support reading the custom resource from a filepath.
The filepath is resolved by taking the resource_directory parameter into account:

    loader.load_scenario(scenario_file_path, resource_directory, input_replacements)

Example:

    steps:
      - id: "create_CMD_replication_group"
        description: "Initial config"
        create: "replicationgroup_cmd.yaml"
    apiVersion: $CRD_GROUP/$CRD_VERSION
    kind: ReplicationGroup
    metadata:
      name: test$RANDOM_SUFFIX
where does $RANDOM_SUFFIX come from?
The method:

    load_scenario(scenario_file: Path, resource_directory: Path = None, replacements: dict = {}) -> model.Scenario

sets RANDOM_SUFFIX in the replacements dictionary before loading the scenario file.
Each scenario gets a different random suffix.
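A minimal sketch of that mechanism, assuming a naive `$NAME` token substitution over the raw scenario text (helper names are illustrative, not the PR's code):

```python
import random
import string


def build_replacements(base: dict) -> dict:
    """Copy the caller's replacements and ensure a fresh RANDOM_SUFFIX.

    Called once per scenario, so each scenario gets a different suffix.
    """
    replacements = dict(base)
    replacements.setdefault(
        "RANDOM_SUFFIX",
        "".join(random.choices(string.ascii_lowercase + string.digits, k=6)),
    )
    return replacements


def apply_replacements(text: str, replacements: dict) -> str:
    # Naive substitution: assumes no replacement key is a prefix of another.
    for key, value in replacements.items():
        text = text.replace(f"${key}", str(value))
    return text


r = build_replacements({"CRD_GROUP": "elasticache.services.k8s.aws"})
rendered = apply_replacements("name: test$RANDOM_SUFFIX", r)
```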
test/declarative_test_fwk/helper.py (Outdated)

    # conditions expectations met, now check current resource against expectations
    resource = k8s.get_resource(reference)
    self.assert_items(expectations.get("status"), resource.get("status"))

    # self._assert_state(expectations.get("spec"), resource)  # uncomment to support spec assertions
This is kinda funky because you are reading the CR from the Kubernetes API server multiple times (hidden from the user), but the test scenario acts like what is being asserted is simply the returned CR from the Create or Patch operation.
I feel like a more robust solution, if slightly more verbose, would be to have an explicit wait part of the Step structure that occurs before the expect stanza.
So, instead of this:

    steps:
      - id: "create_CMD_replication_group"
        description: "Initial config"
        create:
          spec:
            engine: redis
            replicationGroupID: test$RANDOM_SUFFIX
            replicationGroupDescription: Basic create and update of CMD RG
            cacheNodeType: cache.t3.micro
            numNodeGroups: 1
        expect:
          status:
            conditions:
              ACK.ResourceSynced: "True"
            status: "available"

You would do something like this:

    steps:
      - id: "create_CMD_replication_group"
        description: "Initial config"
        create:
          spec:
            engine: redis
            replicationGroupID: test$RANDOM_SUFFIX
            replicationGroupDescription: Basic create and update of CMD RG
            cacheNodeType: cache.t3.micro
            numNodeGroups: 1
        wait:
          status:
            conditions:
              ACK.ResourceSynced:
                value: "True"
                timeout: 30
        # These assertions are made on the resulting CR from a Kubernetes Describe operation
        # after all waits have successfully completed
        expect:
          status: "available"

That way, it's more explicit that there are actually distinct reads to the Kubernetes API server, with one set of reads having a retry/wait semantic and the other not.
+1. If we add a wait field, then assert_expectations just needs one describe call and can perform all the assertions using the results of that.
The way the wait is specified can also potentially be simplified:

    wait:
      status:
        conditions:
          ACK.ResourceSynced:
            value: "True"
            timeout: 30

If we decide that all we will ever wait on is the resource synced condition, then it could be as simple as:

    waitResourceSynced: 30

or something in between.
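The retry/wait semantic being discussed here boils down to a polling loop with a deadline. A minimal sketch, assuming a Kubernetes-style condition list under `status.conditions` (the `get_resource` callable stands in for a describe call; none of this is the PR's actual code):

```python
import time


def wait_for_condition(get_resource, cond_type, expected_status,
                       timeout=30.0, interval=1.0):
    """Poll get_resource() until the named condition has the expected
    status, or the timeout (in seconds) elapses. Returns True/False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        conditions = get_resource().get("status", {}).get("conditions", [])
        for cond in conditions:
            if (cond.get("type") == cond_type
                    and cond.get("status") == expected_status):
                return True
        time.sleep(interval)
    return False
```

The separate `expect` stanza would then assert once, on a single final read, with no retry.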
Updated code.
wait can be specified as:

1. Wait for 5 minutes:

    wait: 5

2. Wait on a condition's expected value - waits for the default timeout (30 minutes):

    wait:
      status:
        conditions:
          ACK.Terminal: "True"

I kept the wait details open ended, so that support for waiting on other properties of the custom resource can be added later.

3. Wait on specific condition details, with optional timeout:

    wait:
      status:
        conditions:
          ACK.ResourceSynced:
            status: "True"
            timeout: 30  # timeout in minutes, if not specified defaults to 30 minutes

Assert on condition message:
Conditions can be specified with an expected message (both in wait as well as expect):

    status:
      conditions:
        ACK.Terminal: "True"
        ACK.ResourceSynced:
          status: "False"
          message: "Expected message ..."
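The three accepted `wait` forms above could be normalized into a single internal shape, roughly like this sketch (key names and the default follow the comment above, but the function itself is a hypothetical illustration; timeout units are left as given, uninterpreted):

```python
DEFAULT_WAIT_TIMEOUT = 30  # default from the comment above

def normalize_wait(wait_spec):
    """Normalize a wait spec into (condition_assertions, timeout).

    Returns (None, n) for a plain delay `wait: n`, otherwise a mapping
    of condition name -> expected status plus the effective timeout.
    """
    if isinstance(wait_spec, int):
        return None, wait_spec  # form 1: plain delay
    conditions = wait_spec.get("status", {}).get("conditions", {})
    normalized, timeout = {}, DEFAULT_WAIT_TIMEOUT
    for name, details in conditions.items():
        if isinstance(details, dict):
            # form 3: explicit status plus optional timeout
            normalized[name] = details.get("status")
            timeout = details.get("timeout", timeout)
        else:
            # form 2: shorthand, name -> expected value
            normalized[name] = details
    return normalized, timeout
```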
1. Wait for 5 minutes
Seconds, not minutes :)
/test elasticache-kind-e2e

2 similar comments

/test elasticache-kind-e2e

/test elasticache-kind-e2e
/retest

/test elasticache-kind-e2e
/test elasticache-kind-e2e
/retest
Looks really good @kumargauravsharma ! I left one small question
    reference = self.custom_resource_reference(input_data, input_replacements)
    _ = k8s.create_custom_resource(reference, input_data)
    resource = k8s.wait_resource_consumed_by_controller(reference, wait_periods=10)
    assert resource is not None
Is it OK to use assert statements in non unit test files? I'm not very experienced with Python, but I always thought it's discouraged to use assert in libraries/functions that may be imported by other projects.
Since the file is used in a test context, I used assert so that it helps with debugging if it fails.
As we use this framework in e2e tests, we can evolve the code to keep or remove it later.
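For illustration of the trade-off discussed above: an assert with a message gives useful failure context in test code, but asserts are stripped entirely under `python -O`, which is why they are discouraged in importable library code. A hypothetical helper (not from this PR):

```python
def assert_resource_consumed(resource, reference_name):
    """Fail the test step with context if the controller never picked
    up the custom resource. Only safe in test-only modules: under
    `python -O` this check disappears.
    """
    assert resource is not None, (
        f"resource {reference_name} was not consumed by the controller"
    )
```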
Yeah this looks like it's solid. I still stand by my comment for the namePrefix part, but I'm happy with everything else.
    apiVersion: $CRD_GROUP/$CRD_VERSION
    kind: ReplicationGroup
    metadata:
      name: test$RANDOM_SUFFIX
Any comments on this @kumargauravsharma? I still feel it leaves room for mistakes as it currently exists. If everyone is just going to have to append $RANDOM_SUFFIX anyway, then let's just do it for them?
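The suggestion above (appending the random suffix for the author instead of requiring `name: test$RANDOM_SUFFIX` in every scenario) could be as small as this sketch; the `namePrefix` idea and the helper name are the reviewer's proposal and my assumption, respectively, not merged code:

```python
import random
import string


def resource_name_from_prefix(name_prefix: str, suffix_len: int = 6) -> str:
    """Build a unique resource name from an author-supplied prefix,
    e.g. namePrefix: test -> test<random>, so scenario authors cannot
    forget the uniqueness suffix.
    """
    suffix = "".join(
        random.choices(string.ascii_lowercase + string.digits, k=suffix_len)
    )
    return f"{name_prefix}{suffix}"
```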
@RedbackThomson For the current PR, I think we can keep the current logic.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: kumargauravsharma, nmvk, RedbackThomson. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing

@kumargauravsharma Alright. I'll take your word for that. Let's ship it
Issue #, if available:

Create, Read, Update, Delete integration tests across custom resource scenarios contain test code and fixtures that are repetitive across test files.

Description of changes:

A declarative integration test framework to write integration tests as YAML, plus reusable common logic for their execution:
- variables (example: `$SOME_VARIABLE`) which are replaced during the test run
- `ResourceHelper` auto discovery using the `@resource_helper` decorator
- `session` scoped fixture for service bootstrap resources and input replacements, removing the need for calls to `service_bootstrap.py` and `service_cleanup.py` from `test-infra` made here and here.

The declarative integration test framework (`test/declarative_test_fwk/`) is reusable; once this PR is reviewed and merged, it can be moved to `test-infra` later. It can be enhanced further as needed.

Refer to the `test/e2e/declarative_tests.py` file in this PR for using the declarative integration test framework. `test/e2e/scenarios/scenario1.yaml` represents an integration test scenario as declarative YAML.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
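The `@resource_helper` auto discovery mentioned in the description is, in essence, a decorator-populated registry keyed by custom resource kind. A hedged sketch of the pattern (class and function names here are illustrative, not necessarily the PR's API):

```python
# Registry mapping a custom resource kind to its helper class.
_RESOURCE_HELPERS = {}


def resource_helper(kind: str):
    """Class decorator: register a helper for the given CR kind so the
    framework can discover it without explicit wiring.
    """
    def register(cls):
        _RESOURCE_HELPERS[kind] = cls
        return cls
    return register


def get_resource_helper(kind: str):
    """Look up the helper registered for a CR kind, or None."""
    return _RESOURCE_HELPERS.get(kind)


@resource_helper("ReplicationGroup")
class ReplicationGroupHelper:
    """Hypothetical helper with kind-specific assert/wait logic."""
```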