ci: periodically run the unit tests of all GitHub-hosted published charms #1365

Merged
tonyandrewmeyer merged 24 commits into canonical:main on Sep 17, 2024

Conversation

@tonyandrewmeyer (Contributor)

Once a month, before the typical release time, run a workflow that tests all charms published on CharmHub where the source location is in the charm metadata and that location is in a `canonical` GitHub repository.

This is much the same as the existing data, hello, and observability tests, but at a larger scale and without the expectation that all of the tests would pass in every PR. The intention is that this gives us an insight into when ops changes break the most important charms - not that we would necessarily ensure that the tests were always passing (but we would at least look into it).

This also tests the `charmcraft init` profiles (other than the framework profiles, which don't start out with usable tests) to detect breakage there.
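
For a sense of what each profile job involves, here is a minimal sketch (the profile names tested, the placeholder charm name, and `tox -e unit` as the test entry point are assumptions; the real steps live in the workflow file):

```python
#!/usr/bin/env python
"""Sketch only: generate a charm from a charmcraft profile and run its tests.

The profile list, charm name, and `tox -e unit` entry point are assumptions
for illustration, not the actual workflow steps.
"""
import subprocess
import tempfile


def run_profile_unit_tests(profile: str) -> None:
    """Initialise a charm from `profile` and run the generated unit tests."""
    with tempfile.TemporaryDirectory() as project_dir:
        subprocess.run(
            ["charmcraft", "init", "--profile", profile, "--name", "test-charm"],
            cwd=project_dir,
            check=True,
        )
        subprocess.run(["tox", "-e", "unit"], cwd=project_dir, check=True)


for profile in ("machine", "kubernetes"):  # framework profiles are skipped
    run_profile_unit_tests(profile)
```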

To avoid manually maintaining the list of charms, a script is added that will update the workflow file with the latest list, pulled from CharmHub. This is not currently automatically run.
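
As a rough sketch of the kind of query involved (the Charmhub endpoint and the `result.links` field layout are assumptions for illustration, not the actual script):

```python
#!/usr/bin/env python
"""Sketch only: list charms whose declared source is a canonical GitHub repo.

The find endpoint and the `result.links` layout are assumptions about the
Charmhub API, not a confirmed schema.
"""
import json
import urllib.request

FIND_URL = "https://api.charmhub.io/v2/charms/find?fields=result.links"


def canonical_github_charms():
    """Yield (charm name, repo URL) pairs for charms hosted under canonical."""
    with urllib.request.urlopen(FIND_URL) as response:
        results = json.load(response)["results"]
    for package in results:
        # Keep only charms whose listed source is under github.com/canonical.
        links = package.get("result", {}).get("links", {})
        for url in links.get("source", []):
            if url.startswith("https://github.com/canonical/"):
                yield package["name"], url


if __name__ == "__main__":
    for name, url in sorted(canonical_github_charms()):
        print(f"{name}: {url}")
```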

@benhoyt (Collaborator) commented Sep 9, 2024

Nice! Do you have an example run?

@tonyandrewmeyer (Contributor, Author) commented Sep 9, 2024

> Nice! Do you have an example run?

Yes, on my fork.

There are a few failures:

  • temporal-k8s (needs updating for the Pebble plan coming back with checks)
  • temporal (looks like basically the same as the -k8s one)
  • data-platform-tests (needs this PR, which I guess I should try pushing someone to merge)
  • mysql-router-k8s (96 failures of one test - haven't dug into this)

@tonyandrewmeyer tonyandrewmeyer marked this pull request as ready for review September 10, 2024 02:06
@benhoyt (Collaborator) left a comment

Thanks! Left a few comments. Have you looked into the failures? It'd be nice to start with a green slate if we can.

Review threads (all resolved):
  • .github/workflows/published-charms-tests.yaml
  • .github/update-published-charms-tests-workflow.py (outdated)
  • .github/update-published-charms-tests-workflow.py (outdated)
  • .github/workflows/published-charms-tests.yaml (outdated)
@tonyandrewmeyer (Contributor, Author)

> Have you looked into the failures? It'd be nice to start with a green slate if we can.

  • data-platform-libs: I need to convince someone to merge my ~9-month-old PR.
  • temporal[-k8s]: I can open a PR to get them updated to more recent ops - it should just be expecting the checks in the Pebble plan
  • mysql-router-k8s: I'll look into it - they have thousands of tests that default to running 120 at a time, which does not work well on my system, but here I can just run the failing one to investigate.

@tonyandrewmeyer (Contributor, Author)

> • data-platform-libs: I need to convince someone to merge my ~9-month-old PR.

Rebased the PR and slightly improved it, and posted in Matrix to see if someone can work with me to get it merged.

> • temporal[-k8s]: I can open a PR to get them updated to more recent ops - it should just be expecting the checks in the Pebble plan
> • mysql-router-k8s: I'll look into it - they have thousands of tests that default to running 120 at a time, which does not work well on my system, but here I can just run the failing one to investigate.

Ah, this is the one about the change where relation-broken no longer includes the broken relation in the relations list. That's the actual Juju behaviour now, so the charm really ought to be updated. This is non-trivial, but I'll take a look tomorrow and see if I can figure it out in an hour or so.
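
A minimal sketch of the behaviour change in question, with a made-up charm and relation name (not the mysql-router-k8s code):

```python
import ops


class DBConsumerCharm(ops.CharmBase):
    """Hypothetical charm illustrating the relation-broken change."""

    def __init__(self, framework: ops.Framework):
        super().__init__(framework)
        framework.observe(self.on["db"].relation_broken, self._on_db_broken)

    def _on_db_broken(self, event: ops.RelationBrokenEvent) -> None:
        # During relation-broken, the departing relation is no longer
        # included in model.relations, so tests that expected to find it
        # there fail until they are updated.
        assert event.relation not in self.model.relations["db"]
```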

@@ -0,0 +1,132 @@
#! /usr/bin/env python
A contributor commented

It wasn't obvious to me that this script needed to be run manually 🙈

@tonyandrewmeyer (Contributor, Author) replied

What would you recommend here? I could add a section to CONTRIBUTING.md that explains that this exists and how to update it, or I could just add that in the docstring for the script itself. Or I guess it could be run automatically on a schedule, which is probably cleanest, but I wasn't sure if we were sold enough on this idea to do that straight off.

@benhoyt (Collaborator) replied

Yeah, I think manual updates are a good start. We can always run this automatically later if desired.

```diff
@@ -3,6 +3,7 @@ name: TIOBE Quality Checks
 on:
   schedule:
     - cron: '0 7 1 * *'
+  workflow_dispatch:
```
A contributor commented

unrelated change

@tonyandrewmeyer (Contributor, Author) replied

Yeah, I'm not sure what the story is with this, because main@HEAD has:

```yaml
on:
  workflow_dispatch:
  schedule:
    - cron: '0 7 1 * *'
```

So I don't understand why there's no complaint about a conflict, or why the workflow_dispatch line ends up both above and below (I could just remove the lower one, which I must have had in an earlier version in my fork). I'll try to figure out what's happening here.

@benhoyt (Collaborator) left a comment

Nice, thanks. Let's see how this goes!


@tonyandrewmeyer tonyandrewmeyer merged commit fdb2338 into canonical:main Sep 17, 2024
32 checks passed
@tonyandrewmeyer tonyandrewmeyer deleted the ci-all-listed-packages branch September 17, 2024 22:03
tonyandrewmeyer added a commit to tonyandrewmeyer/operator that referenced this pull request Oct 4, 2024
ci: periodically run the unit tests of all GitHub-hosted published charms (canonical#1365)

Once a month, before the typical release time, run a workflow that tests
all charms published on CharmHub where the source location is in the
charm metadata and that location is in a `canonical` GitHub repository.

This is much the same as the existing data, hello, and observability
tests, but at a larger scale and without the expectation that all of the
tests would pass in every PR. The intention is that this gives us an
insight into when ops changes break the most important charms - not that
we would necessarily ensure that the tests were always passing (but we
would at least look into it).

This also tests the `charmcraft init` profiles (other than the framework
profiles, which don't start out with usable tests) to detect breakage
there.

To avoid manually maintaining the list of charms, a script is added that
will update the workflow file with the latest list, pulled from
CharmHub. This is not currently automatically run.

---------

Co-authored-by: Ben Hoyt <benhoyt@gmail.com>