
Continue to run test suite even if one test fails #375

Open
mike-nguyen opened this issue Apr 11, 2018 · 1 comment

Comments

@mike-nguyen
Collaborator

mike-nguyen commented Apr 11, 2018

Problem

By default, Ansible will stop a playbook run as soon as any task fails. This is great when using Ansible as a configuration tool, but not so great when using it as a test framework. Ansible 2.0 introduced the block/rescue/always concept, which lets us catch errors and continue doing other things. So why not do that with a set of tests?
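A minimal sketch of the block/rescue pattern (the task names and the `failed_tests` variable below are hypothetical, not from the actual test suite):

```yaml
# The rescue section runs only when a task in the block fails, so the
# play can record the failure and keep going instead of aborting.
- name: run a single test case
  block:
    - name: check that the service responds      # hypothetical test task
      command: curl -sf http://localhost:8080/
  rescue:
    - name: record the failure instead of failing the play
      set_fact:
        failed_tests: "{{ (failed_tests | default([])) + ['service-response'] }}"
```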

Benefits Over Current Approach

  • By catching errors and continuing to run tests, users get more test results and could find more bugs sooner. This is especially true when a bug is being fixed but has not yet made it into a compose.
  • All done natively in Ansible

Proposal

  1. Separate tests into individual yml files
  2. Create a role that imports the above yml files in a block/rescue and saves each test's status in a dictionary (see the sketch after the linked example below).
  3. Create an entry point playbook that calls the above role for each of the tests and passes or fails based on what's in the dictionary.

See simple example here: master...mike-nguyen:tester_poc
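A rough sketch of what the wrapper role's tasks could look like under this proposal (file names, the `test_file` variable, and the `test_results` dictionary are assumptions; the linked branch is the authoritative proof of concept):

```yaml
# roles/run_test/tasks/main.yml (hypothetical layout)
- name: run one test file and record its result
  block:
    - name: import the individual test
      include_tasks: "{{ test_file }}"
    - name: mark the test as passed
      set_fact:
        test_results: "{{ test_results | default({}) | combine({test_file: 'passed'}) }}"
  rescue:
    - name: mark the test as failed but keep running the suite
      set_fact:
        test_results: "{{ test_results | default({}) | combine({test_file: 'failed'}) }}"
```

The entry point playbook would then call this role once per test file and, at the end, fail the run if any value in `test_results` is 'failed'.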

Potential Issues

@miabbott
Collaborator

Could you also add a section on the benefits of this approach?

Being more resilient to failures would greatly improve our ability to run tests, so I'm in favor of that portion.

To the point of the improved-sanity-test, we can always change how that test is laid out. Maybe have a single sanity.yml that does multiple `import_playbook` calls? Alternatively, just make the sanity test one giant playbook instead of splitting it up into three playbooks in the file itself.
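For example, a single sanity.yml along those lines might just chain the existing playbooks (the file names here are assumptions, not the actual playbooks in the repo):

```yaml
# Hypothetical sanity.yml that stitches separate playbooks together
- import_playbook: subscribe.yml
- import_playbook: admin-unlock.yml
- import_playbook: upgrade-rollback.yml
```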

mike-nguyen added a commit to mike-nguyen/atomic-host-tests that referenced this issue May 24, 2018
Convert the k8-cluster test to the `continue on failure` model.  Addresses
issue projectatomic#375.
miabbott pushed a commit that referenced this issue May 29, 2018
Convert the k8-cluster test to the `continue on failure` model.  Addresses
issue #375.
miabbott added a commit to miabbott/atomic-host-tests that referenced this issue May 29, 2018
This changes the `rpm-ostree` test suite to use the 'continue on
failure' model.

Partially addresses projectatomic#375
mike-nguyen added a commit to mike-nguyen/atomic-host-tests that referenced this issue May 29, 2018
Convert docker-build-httpd to use the continue on failure model
proposed by Issue projectatomic#375.
miabbott pushed a commit that referenced this issue May 31, 2018
* docker-build-httpd: continue on failure

Convert docker-build-httpd to use the continue on failure model
proposed by Issue #375.
mike-nguyen pushed a commit that referenced this issue Jun 1, 2018
* gitignore: ignore .log files

* rpm-ostree: use 'continue on fail' model

This changes the `rpm-ostree` test suite to use the 'continue on
failure' model.

Partially addresses #375

* rpm-ostree/livefs: use global variable name

* rpm-ostree/compose: make process more resilient

This worked well on a pristine system, but I was getting tripped up
when re-using a test system.  This cleans up some of the transient
artifacts that can be generated by the compose.

Additionally, I added a `wait_for` to make sure the HTTP server
starts.
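Something along these lines (the port and timeout values are assumptions, not the ones actually used):

```yaml
# Block until the compose's HTTP server is accepting connections
- name: wait for the HTTP server to start
  wait_for:
    port: 8000
    timeout: 60
```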

* rpm-ostree/livefs: use include_role in a few places

We were seeing instances where variable values passed to `import_role`
were not being respected each time it was called.  I thought that
each time an 'import' was done the changed variables would be
respected, but that doesn't appear to be the case.

Using `include_role` will allow us to dynamically load in variables
each time the role is used.  This is kind of troubling as we made a
large switch to using `import_role` in most places, but we've not run
into any problems like this yet.
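A sketch of the switch being described (the role and variable names are hypothetical); `include_role` evaluates its vars each time the task runs, whereas the values passed to `import_role` were not being picked up on repeated calls:

```yaml
# Load the role dynamically so the current variable values are used
- name: check the deployment with the current variables
  include_role:
    name: rpm_ostree_status
  vars:
    deployment_id: "{{ current_deployment }}"
```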

* rpm-ostree: explicitly check for failure

This is a forward-port of functionality from #405 which corrects an
assumption about how Ansible was handling failures.

* move 'ostree admin status' to sanity test

In #381, a sanity check for `ostree admin status` was added to the
`rpm-ostree` test suite.  It is probably better suited to the 'sanity'
test suite so we are able to catch any regressions earlier in the
testing cycle.

I created a dumb role for the `ostree admin status` command since that
was easier than trying to wedge it into an existing role.