Release 2.0.5 #526
Conversation
* add log_gamma diagnostic
* add missing export for log_gamma
* add missing export for gamma_null_distribution, gamma_discrepancy
* fix broken unit tests
* rename log_gamma module to sbc
* add test_log_gamma unit test
* add return information to log_gamma doc string
* fix typo in docstring, use fixed-length np array to collect log_gammas instead of appending to an empty list (see the sketch below)
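A minimal sketch of the pre-allocation pattern mentioned in the last point; the helper name and sizes are illustrative placeholders, not the library's actual API:

```python
import numpy as np

def compute_log_gamma(rng):
    # Placeholder for the per-dataset SBC diagnostic value; the real
    # computation lives in the library's sbc module.
    return float(rng.standard_normal())

rng = np.random.default_rng(0)
num_datasets = 100

# Pre-allocate a fixed-length array instead of appending to an empty list.
log_gammas = np.empty(num_datasets)
for i in range(num_datasets):
    log_gammas[i] = compute_log_gamma(rng)
```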
…525)
* standardization: add test for multi-input values (failing)
  This test reveals two bugs in the standardization layer:
  - count is updated multiple times
  - batch_count is too small, as the sizes from reduce_axes have to be multiplied
* breaking: fix bugs regarding count in standardization layer
  Fixes #524. This fixes the two bugs described in c4cc133:
  - count was accidentally updated, leading to wrong values
  - count was calculated wrongly, as only the batch size was used; correct is the product of all reduce dimensions. This led to wrong standard deviations.
  While the batch dimension is the same for all inputs, the size of the second dimension might vary. For this reason, we need to introduce an input-specific `count` variable. This breaks serialization. (A sketch of the counting issue follows below.)
* fix assert statement in test
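A minimal, self-contained sketch of the counting issue described above, assuming a batch of shape (batch_size, set_size, num_features) standardized over the first two axes; this is illustrative only and not the library's actual `Standardization` layer code:

```python
import numpy as np

# Illustrative batch: 32 datasets, 10 set members each, 4 features.
batch = np.random.default_rng(0).normal(size=(32, 10, 4))
reduce_axes = (0, 1)

# Buggy behavior: only the batch size was counted per update.
wrong_count = batch.shape[0]  # 32

# Fixed behavior: count the product of all reduced dimension sizes,
# i.e. the number of values that actually enter the per-feature
# mean/std estimate in this update.
correct_count = int(np.prod([batch.shape[axis] for axis in reduce_axes]))  # 320

# Because the set size (second dimension) can differ between inputs,
# each input needs its own running count, which is why the fix
# introduces an input-specific `count` variable and breaks serialization.
```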
LGTM!
@paul-buerkner Done. @LarsKue Thanks for the review! I think we can merge when all tests have passed.
The failing test (model comparison notebook) worked with 7c094c5 and nothing related seems to have changed, so it seems to be flaky for some reason. I also cannot reproduce the error locally, can any of you? The command would be:
I have restarted the two remaining tests, we'll see if they pass now... @LarsKue Do you think it might make sense to split up the slow tests a bit more (e.g., into tests and examples), so we get a shorter total runtime by running them in parallel?
Summary
This bugfix release, v2.0.5, contains fixes and minor improvements. Deprecations planned for v2.0.5 have been postponed to v2.0.6.
Important
This release contains breaking changes. You will not be able to load models trained with v2.0.4 or earlier.
Breaking Changes
* `Standardization` layer, used in the approximators: the fix to its internal count handling breaks serialization, so models trained with earlier versions cannot be loaded (fixes #524).

Minor Changes
* `MultivariateNormalScore`
Bug Fixes