Reuse npse_trained_model fixture for test_npse_map #1461
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1461       +/-   ##
===========================================
- Coverage   89.60%   78.70%    -10.90%
===========================================
  Files         121      127         +6
  Lines        9357    10025       +668
===========================================
- Hits         8384     7890       -494
- Misses        973     2135      +1162

Flags with carried forward coverage won't be shown.
Looks great, thanks!
See question in the comments.
What speed-up do you notice from this change?
CI is failing due to …
Basically, we save the time of training a model specifically for `test_npse_map`. But now it would be much cheaper to add more tests that can reuse specific trained models.
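For context, this is the fixture-reuse pattern in a minimal sketch. The toy prior, stand-in simulator, and training settings below are invented for illustration, not the PR's actual code; the sbi calls (`NPSE`, `append_simulations`, `build_posterior`, `map`) follow the library's usual interface:

```python
import pytest
import torch

from sbi.inference import NPSE
from sbi.utils import BoxUniform


@pytest.fixture(scope="session")
def npse_trained_model():
    # Trained once per test session; every dependent test reuses this instance.
    prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))
    theta = prior.sample((500,))
    x = theta + 0.1 * torch.randn_like(theta)  # toy stand-in simulator

    inference = NPSE(prior=prior)
    inference.append_simulations(theta, x).train(max_num_epochs=5)
    return inference


def test_npse_map(npse_trained_model):
    # No training happens in the test anymore; the fixture provides the model.
    posterior = npse_trained_model.build_posterior()
    posterior.set_default_x(torch.zeros(2))
    map_estimate = posterior.map(num_iter=10)
    assert map_estimate.shape[-1] == 2
```

With `scope="session"`, pytest caches the fixture's return value, so the training cost is paid once no matter how many tests request the trained model.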
One concern I have, @vagechirkov, is that now all the tests here are marked as slow, right? Is there a way to have one of the …
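One arrangement that is sometimes used for this (a sketch, not a claim about this test suite): apply the `slow` marker per test rather than via a module-level `pytestmark`, so a single cheap smoke test still runs under the default `pytest -m "not slow"` selection:

```python
import pytest


@pytest.mark.slow
def test_npse_map(npse_trained_model):
    ...  # expensive: requesting the fixture triggers the one-time training


def test_npse_imports_smoke():
    # Deliberately avoids the trained-model fixture so it stays fast.
    from sbi.inference import NPSE  # noqa: F401
```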
Hey @janfb, I think this is more or less how the final test structure should look, so it's ready for review 🙃
Cool, I will review it now. |
Thanks a lot @schroedk and @vagechirkov! 👏
It all looks very clean and efficient now. I added a couple of questions for my understanding and some suggestions.
An issue with snapshot tests: syrupy-project/syrupy#438
Handle combinations
…ation" This reverts commit df39484.
@vagechirkov @janfb my suggestion for this PR would be:
This would not show the refactoring of the tests (which we aimed for at the beginning), but at least it shows the usage of pytest-regressions. What do you think?
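For readers who haven't used pytest-regressions: a minimal, self-contained sketch of its `num_regression` fixture (the random data here is a stand-in for posterior samples, not the PR's actual test):

```python
import numpy as np


def test_posterior_summary(num_regression):
    # pytest-regressions stores the expected values in a CSV next to the test
    # (generated with --force-regen on the first run) and compares against it
    # on every later run, within the given tolerance.
    rng = np.random.default_rng(seed=0)
    samples = rng.normal(size=(100, 2))  # stand-in for posterior samples
    num_regression.check(
        {"mean": samples.mean(axis=0), "std": samples.std(axis=0)},
        default_tolerance=dict(atol=1e-12),
    )
```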
Alright, I think I've finally tracked down the issue 🎉🫠 (I was seriously starting to question my sanity). It turns out to be all about the combination of parameters, nothing to do with how we wrote the tests 🤪 I used the …
This means, in turn, that this seems to be a separate issue. What do you think, @janfb? P.S. …
…ile structure with subdirectories for each test function (the actual regression files are per case)
* make test cases more readable
* test more consistently by calling same subroutines
…tion for pytest fixtures
# TODO: check if there is a bug for prior_type='uniform' and num_trial > 1
# TODO: validate bug for combination sde_type='ode' and num_trial > 1
# TODO: investigate non-determinism for prior_type=None, dim >= 2, ve, auto/jac_gauss
I was wondering if we should mark these tests (the non-deterministic ones and those that take too long) as `xfail` and set the timeout to, say, 20 seconds. This way, we would cover all possible combinations with the regression tests, and the failing tests could be resolved later in separate PRs.
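A sketch of what that could look like, assuming the pytest-timeout plugin for the timeout part; the parameter values and reasons are taken from the TODO comments above, and the test body is elided:

```python
import pytest


@pytest.mark.timeout(20)  # fail instead of hanging (needs pytest-timeout)
@pytest.mark.parametrize("sde_type", ["ve", "ode"])
@pytest.mark.parametrize("num_trial", [1, 3])
def test_npse_regression(sde_type, num_trial, request):
    if sde_type == "ode" and num_trial > 1:
        # Known-suspect combination: expect failure instead of skipping, so
        # the full grid stays covered and a fix flips the test back to green.
        request.applymarker(
            pytest.mark.xfail(reason="suspected bug for sde_type='ode', num_trial>1")
        )
    ...  # actual regression check would go here
```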
* extract map test
#1428