CI: Add GHA setup #702
Conversation
Codecov Report. Base: 100.00% // Head: 99.46% // Decreases project coverage by -0.54%.

@@ Coverage Diff @@
## master #702 +/- ##
===========================================
- Coverage 100.00% 99.46% -0.54%
===========================================
Files 113 113
Lines 14532 14532
Branches 2959 2787 -172
===========================================
- Hits 14532 14454 -78
- Misses 0 68 +68
- Partials 0 10 +10
Flags with carried forward coverage won't be shown. ☔ View full report at Codecov.
Force-pushed from d843b13 to d2c868e.
Unfortunately, the build lacks a few bits of coverage, see [1] https://app.codecov.io/gh/caronc/apprise/commit/d2c868ee5141ac88216b8a70235353b9939dbe34
It's a really amazing idea. I love the idea of dropping 3rd party dependencies. But would it still support coverage, or would we drop that too and just make sure tests fail if coverage isn't met?
Just another thought (thinking out loud here). The build status, the coverage status, and the ability to just kick a runner to try again if it fails are positives of the current setup. I'd be curious how that changes under the new one?
- name: Run tests
  run: |
    coverage run -m pytest

- name: Process coverage data
  run: |
    coverage combine
    coverage xml
    coverage report

- name: Upload coverage data
  uses: codecov/codecov-action@v3
  with:
    files: ./coverage.xml
    flags: unittests
    env_vars: OS,PYTHON,BARE
    name: codecov-umbrella
    fail_ci_if_error: false
Would [the new setup] still support coverage, or would we drop that too and just make sure tests fail if coverage isn't met?
Code coverage tracking and reporting to Codecov is fully supported and already works on the GHA builds on your repository. The semantics of all program invocations did not change at all, and the GHA recipe is merely a translation of the Travis CI + Tox recipe.
Codecov already sent a report back on this PR at #702 (comment), while Travis CI was out of service.
flags: unittests
env_vars: OS,PYTHON,BARE
name: codecov-umbrella
fail_ci_if_error: false
We will probably want to set fail_ci_if_error: true here.
There is a minor flaw with Codecov which we also experienced recently; we reported it at crate/crate-python#451. The verified solution is to send the right CODECOV_TOKEN along, even if the repository is public and would actually not need it. Unless the repository is configured that way, I will keep fail_ci_if_error: false.
[2022-10-18T08:51:11.864Z] ['error'] There was an error running the uploader:
Error uploading to https://codecov.io:
Error: There was an error fetching the storage URL during POST:
404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API.
Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
-- https://github.com/caronc/apprise/actions/runs/3271932392/jobs/5382320635#step:12:43
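For reference, a minimal sketch of how the token could be passed to the upload step, assuming a repository secret named CODECOV_TOKEN has been configured; the secret wiring is an assumption and not part of this patch:

- name: Upload coverage data
  uses: codecov/codecov-action@v3
  with:
    files: ./coverage.xml
    # Hypothetical: pass the repository upload token even though the
    # repository is public, to avoid the 404 "Unable to locate build" error.
    token: ${{ secrets.CODECOV_TOKEN }}
    fail_ci_if_error: true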
Are you referring to the badges? GHA's will be 1, and Codecov stays the same.
On failed builds, you have the possibility to re-run all jobs of your workflow, or only the ones that failed. Other than this, if you add an on: workflow_dispatch: trigger, you can also start the workflow manually from the GitHub UI. There is also a feature to run nightly or weekly checks, or any other cron-like schedule. But I think you know this already; it is also used within on: schedule: with a cron expression like '0 2 * * *'.
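For illustration, a minimal sketch of how both triggers could sit together in the workflow's on: block; the workflow_dispatch part is an assumption about how one might wire up manual runs, while the cron expression is the one quoted above:

on:
  # Allow starting the workflow manually from the Actions tab.
  workflow_dispatch:
  # Run nightly at 02:00 UTC.
  schedule:
    - cron: '0 2 * * *'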
.github/workflows/tests.yml
# Run all jobs to completion (false), or cancel
# all jobs once the first one fails (true).
fail-fast: false
I think a good default is to use fail-fast: true. I usually only disable failing fast if I want to get the whole picture at once about all failing tests on all environments. That is very useful if the test suite works in general but croaks on specific environments, and you want to find out about all of them in one run.
In daily business, failing fast will save a bunch of resources, because the jobs on the other environments get canceled as soon as one environment fails.
So, I am proposing to fail-fast by default. Do you agree?
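For clarity, a minimal sketch of the proposed change, mirroring the snippet above (only the fail-fast value changes):

# Cancel all remaining jobs once the first one fails (true),
# instead of running every matrix entry to completion (false).
fail-fast: true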
I don't see harm in the idea
Thanks, I've updated the patch accordingly.
matrix:
  os: [
    "ubuntu-latest",
    # "macos-latest",
After bringing in #700, I think it will probably be safe to enable testing on macOS on CI right away, if desired.
I think it is equally important to add Windows to CI, to protect against basic errors like 601f9af. Further efforts are being made in this area with #707.
On the other hand, I believe it would be excessive to run the macOS and Windows tests across the whole test matrix. I think it will be good enough to restrict them to Python 3.10 only, for example; one way to do that is sketched below.
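A minimal sketch of that approach, using explicit include entries so that macOS and Windows only run against a single Python version; the Python version list here is illustrative and not taken from this patch:

matrix:
  os: ["ubuntu-latest"]
  python-version: ["3.7", "3.8", "3.9", "3.10"]
  include:
    # Add single extra jobs instead of multiplying the whole matrix.
    - os: "macos-latest"
      python-version: "3.10"
    - os: "windows-latest"
      python-version: "3.10"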
Force-pushed from eb34249 to 5e3da19.
It may save some credits on Travis CI.
I think this patch is ready to go, if you are happy with it.
Done! 🚀👍
Dear Chris,
after learning that CI builds regularly exhaust the credits on Travis CI (I think that just happened again at #700), this patch adds a corresponding configuration for GHA. It may save some credits, and aims to mitigate the hassle of having to renew them regularly, and manually.
The builds just succeeded for the first time at [1-4] and are looking good so far. I think GHA performs well, with a total runtime of ~30 minutes, compared to ~40 minutes on Travis CI.
With kind regards,
Andreas.
[1] https://github.com/panodata/apprise/actions/runs/3257301254/usage (33 minutes runtime)
[2] https://github.com/caronc/apprise/actions/runs/3257334731/usage (34 minutes runtime)
[3] https://github.com/caronc/apprise/actions/runs/3257397087/usage (29 minutes runtime)
[4] https://github.com/caronc/apprise/actions/runs/3257501205/usage (30 minutes runtime)