maint(test): add optional dependency tests to actions #20532
Conversation
✅ Hi, I am the SymPy bot (v161). I'm here to help you write a release notes entry. Please read the guide on how to write release notes.
FWIW I think even with GHA it would be simpler to keep the bulk of the CI logic in a separate script like we have with test_travis.sh. It's much easier to deal with bash logic in a proper bash script than in a YAML file.
I guess doing it this way splits things up in the logs. I wonder if there's a way to get the best of both worlds here.
I guess for one-liners like this we can put them in the YAML file, but for longer stuff, like most of what is still in test_travis.sh, we should keep a bash script.
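To make that split concrete, here is a rough sketch (the step contents and script path are made up for illustration, not taken from this PR):

```yaml
steps:
  # One-liners are fine to keep inline in the workflow file:
  - run: pip install mpmath
  - run: bin/test sympy/core

  # Anything with real logic is easier to maintain as a script:
  - run: bin/test_optional_deps.sh
```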
Also worth noting (from https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#jobsjob_idsteps):

"Each step runs in its own process in the runner environment and has access to the workspace and filesystem. Because steps run in their own process, changes to environment variables are not preserved between steps."

So things should only be split out like this if they really are separate.
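A small illustration of that constraint (the variable name is arbitrary):

```yaml
steps:
  - run: export FOO=bar        # FOO only exists in this step's shell
  - run: echo "FOO=$FOO"       # prints "FOO=": steps don't share state
  # Passing state between steps has to be explicit:
  - run: echo "FOO=bar" >> $GITHUB_ENV
  - run: echo "FOO=$FOO"       # now prints "FOO=bar"
```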
I could make a .py script instead of a .sh one.
If .py seems better than .sh that's fine. The main thing I take issue with is having complex code in a YAML file. Actually test_travis.sh already contains a lot of Python in strings, which has the same problems, so moving it to a Python script is probably better anyway.
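A sketch of what that migration could look like (the script name is hypothetical; the "before" shape is what the test_travis.sh style of Python-in-strings tends to look like):

```yaml
# Before: Python logic embedded in shell strings
- run: |
    python -c "
    import sympy
    if not sympy.test('sympy/external'):
        raise SystemExit(1)
    "
# After: the same logic in a real, locally runnable Python file
- run: python bin/test_optional_deps.py
```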
Codecov Report
@@ Coverage Diff @@
## master #20532 +/- ##
=============================================
+ Coverage 75.741% 75.762% +0.020%
=============================================
Files 673 673
Lines 174437 174499 +62
Branches 41200 41205 +5
=============================================
+ Hits 132122 132205 +83
+ Misses 36597 36575 -22
- Partials 5718 5719 +1
🟠 Hi, I am the SymPy bot (v161). I've noticed that some of your commits add or delete files. Since this is sometimes done unintentionally, I wanted to alert you about it. This is an experimental feature of SymPy Bot. If you have any feedback on it, please comment at sympy/sympy-bot#75. The following commits add new files: If these files were added/deleted on purpose, you can ignore this message.
@asmeurer does this look okay to you? There is clearly room for improvement here, but just shifting everything over to Actions is the top priority right now, and this is only one part of that.
It looks good. I think Travis still has the dev docs deployment, but is there anything else left? For the time being, I would just disable every build on Travis except for the dev docs deployment build (and make it only run on master and not be required to merge).
I haven't included the docs deployment yet. The other things not included are:
- testing the Python versions between the minimum and maximum
- coverage measurement
- the Python 2 import test
- the benchmarks
- the Tensorflow 1.x tests
- the PyPy tests

I wonder how many of these we need now. We should certainly test at least the minimum and maximum supported Python versions, but we don't necessarily need to test all versions in between.

I think that coverage measurement is useful but not critical if we're in a rush. I'm not that impressed with codecov, so I wonder if there's a better way. We can have artifacts in Actions, so maybe we should just use those to show diff coverage somehow...

I'm not sure how long we need to keep the Py2 test.

The benchmarks are useful sometimes. I'd like to find a way of making the output from the benchmarks more noticeable. Currently on Travis the benchmarks job always passes. I would like to get to a point where CI picks up on slowdowns, but we don't have that with Travis, so it can't be critical in the transition to Actions.

I think we can just ditch the Tensorflow 1.x tests.

I can't remember the last time I saw anything meaningful from the pypy tests. Maybe they were more useful before, but they don't seem that worthwhile to me.
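For the min/max point, the workflow matrix makes this cheap; a sketch, with version numbers assumed for the time of this PR:

```yaml
jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # oldest and newest supported, skipping the versions in between
        python-version: ['3.6', '3.9']
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install mpmath
      - run: bin/test
```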
PyPy used to fail a lot. I guess that doesn't happen any more? I agree I personally don't find codecov useful (partly because of #16196), but perhaps others do.

Testing the benchmarks in a way that actually gives useful benchmarks is a harder problem. It might be easier with Actions since we can use self-hosted runners. I would worry about this later, though. I thought the point of the benchmarks run was just to make sure the benchmarks don't break.

I have no idea what the situation is with tensorflow. The py2 test is very simple, and I think it's worth keeping for the time being.

I'm not sure. But we should also consider changing our Python version policy to match something like NEP 29, where we drop Python versions earlier. The issue is that Python has increased the cadence of releases, so our current policy of supporting Python versions that are still supported by Python itself means we now support 5 Python versions at once (see https://www.python.org/dev/peps/pep-0602/).

Regarding docs deployment, I'm working on drdoctr/doctr#372.
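For a rough sense of the numbers behind that: with annual releases (PEP 602) and each CPython release supported for about five years, roughly 60/12 = 5 versions are in support at any one time, whereas NEP 29's 42-month window would mean supporting only about 42/12 ≈ 3 to 4.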
So the main things needed are:
- testing the minimum and maximum supported Python versions
- the Python 2 import test
- the docs deployment

That should be enough to switch off Travis. I agree we should follow NEP 29. Deploying the docs might be easier with Actions.
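A hedged sketch of what a docs-deployment job could look like on Actions (the peaceiris/actions-gh-pages action and the doc paths are assumptions, not what was actually adopted):

```yaml
deploy-docs:
  if: github.ref == 'refs/heads/master'
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v2
      with:
        python-version: '3.8'
    # Assumes the Sphinx docs build with `make html` under doc/
    - run: pip install sphinx mpmath
    - run: cd doc && make html
    - uses: peaceiris/actions-gh-pages@v3
      with:
        github_token: ${{ secrets.GITHUB_TOKEN }}
        publish_dir: doc/_build/html
```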
References to other Issues or PRs
See #20374
Brief description of what is fixed or changed
Adds optional dependency tests to GitHub Actions.
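Roughly the shape of such a job (the package list and test selection here are illustrative, not the actual workflow added by this PR):

```yaml
optional-dependencies:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v2
      with:
        python-version: '3.8'
    - run: pip install mpmath numpy scipy matplotlib
    # Run the test files that exercise the optional dependencies
    - run: bin/test sympy/external sympy/plotting
```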
Other comments
Release Notes
NO ENTRY