v2.0.2 #447
Conversation
[no ci] notebook tests: increase timeout, fix platform/backend dependent code. Torch is very slow, so I had to increase the timeout accordingly.
* summary networks: add tests for using functional API
* fix build functions for use with functional API
* fix docs of coupling flow
* add additional tests
In addition, this PR limits the slow tests to Windows and Python 3.10. The choices are somewhat arbitrary; my thinking was to test a setup that is not covered as much through everyday use by the devs.
remove multiple batch sizes, remove multiple python version tests, remove update-workflows branch from workflow style tests, add __init__ and conftest to test_point_approximators (#443)
* implement compile_from_config and get_compile_config
* add optimizer build to compile_from_config
* remove the is_symbolic_tensor check because this would otherwise skip the whole function for compiled contexts
* skip pyabc test
* fix sinkhorn and log_sinkhorn message formatting for jax by making the warning message worse
Pull Request Overview
This PR releases version v2.0.2 with several improvements and fixes across the BayesFlow codebase, including new tests, better handling of input shapes via decorators, refined network layers, improved error handling in distributions, updated workflow configurations, and new issue templates.
- Added sanitize_input_shape decorators to several network functions (a sketch of the general decorator pattern follows this list)
- Refactored distributions to use trainable_parameters instead of use_learnable_parameters and adjusted related operations
- Enhanced configuration methods for approximators and updated docs/workflows
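For readers unfamiliar with the first bullet: a decorator of this kind typically normalizes whatever `build()` receives into a concrete shape tuple before the wrapped method runs. The snippet below is a hypothetical, minimal sketch of that pattern; the decorator body and `ExampleNetwork` are illustrative assumptions, not BayesFlow's actual implementation.

```python
import functools


def sanitize_input_shape(fn):
    """Illustrative only: coerce `input_shape` into a concrete tuple of ints
    (replacing a leading None batch dimension with 1) before calling `fn`."""

    @functools.wraps(fn)
    def wrapper(self, input_shape, *args, **kwargs):
        input_shape = tuple(1 if dim is None else int(dim) for dim in input_shape)
        return fn(self, input_shape, *args, **kwargs)

    return wrapper


class ExampleNetwork:
    @sanitize_input_shape
    def build(self, input_shape):
        print("building with", input_shape)


ExampleNetwork().build((None, 128))  # prints: building with (1, 128)
```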
Reviewed Changes
Copilot reviewed 51 out of 51 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| bayesflow/networks/transformers/*.py | Added sanitize_input_shape decorators and adjusted imports |
| bayesflow/metrics/*.py | Added serializable decorator and minor import adjustments |
| bayesflow/distributions/*.py | Refactored parameter naming and tensor operations |
| bayesflow/approximators/*.py | Added compile_from_config and get_compile_config methods |
| README.md, .github/* | Updated docs, affiliation badge, workflows, and issue templates |
Comments suppressed due to low confidence (3)
bayesflow/distributions/mixture.py:56
- The identifier 'ops' is used here without an import. Please add an import statement (e.g., 'from keras import ops') at the top of the file.
self.mixture_logits = ops.ones(shape=len(distributions))
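The fix suggested by this comment is a module-level import of `ops`. Below is a minimal, illustrative sketch of the corrected pattern; the `MixtureDistribution` stub is an abbreviation for this example, not the real class in `bayesflow/distributions/mixture.py`.

```python
from keras import ops  # module-level import makes `ops` available throughout the file


class MixtureDistribution:
    """Abbreviated stand-in used only to illustrate the import fix."""

    def __init__(self, distributions):
        self.distributions = distributions
        self.mixture_logits = None

    def build(self, input_shape):
        # the reviewed line: `ops` now resolves thanks to the import above
        self.mixture_logits = ops.ones(shape=len(self.distributions))


mixture = MixtureDistribution(distributions=[object(), object()])
mixture.build(input_shape=(None, 2))
print(ops.shape(mixture.mixture_logits))  # -> (2,)
```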
bayesflow/approximators/model_comparison_approximator.py:121
- The methods compile_from_config and get_compile_config use 'deserialize' and 'serialize' but there is no visible import for these functions. Ensure they are imported (e.g., from bayesflow.utils.serialization) to avoid runtime errors.
def compile_from_config(self, config):
bayesflow/approximators/continuous_approximator.py:107
- Similar to model_comparison_approximator.py, this file uses 'deserialize' and 'serialize' without an explicit import. Please verify that these functions are imported properly.
def compile_from_config(self, config):
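For context on the two comments above: Keras 3 models persist their compile state through the `get_compile_config()` / `compile_from_config()` pair, and both overrides need `serialize` / `deserialize` helpers in scope. The sketch below uses `keras.saving.serialize_keras_object` / `deserialize_keras_object`; whether BayesFlow imports these directly or via `bayesflow.utils.serialization` (as the review suggests) is not asserted here, and `ExampleApproximator` is illustrative rather than one of BayesFlow's classes.

```python
import keras
from keras.saving import deserialize_keras_object as deserialize
from keras.saving import serialize_keras_object as serialize


class ExampleApproximator(keras.Model):
    """Illustrative sketch of the compile-config round trip."""

    def get_compile_config(self):
        # store compile-time objects in a JSON-serializable form
        return {"optimizer": serialize(self.optimizer)}

    def compile_from_config(self, config):
        # rebuild the objects and re-compile the loaded model
        config = {key: deserialize(value) for key, value in config.items()}
        self.compile(**config)
        # eagerly building the optimizer mirrors the
        # "add optimizer build to compile_from_config" commit above
        if self.built:
            self.optimizer.build(self.trainable_variables)
```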
improve One-Sample T-Test Notebook:
- use torch as default backend (see the sketch below)
- reduce range of N so users of jax won't be stuck with a slow notebook
- use BayesFlow built-in MLP instead of keras.Sequential solution
- general code cleanup
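As a concrete illustration of the first bullet: the backend used by Keras (and therefore by BayesFlow) is selected through the `KERAS_BACKEND` environment variable before the first Keras import. This is a generic sketch of that step, not the notebook's exact code.

```python
import os

# must run before keras / bayesflow are imported;
# once a backend has been loaded it cannot be switched in the same process
os.environ["KERAS_BACKEND"] = "torch"

import keras  # noqa: E402

print(keras.backend.backend())  # -> "torch"
```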
🚀
* v2.0.2 (#447)
* [no ci] notebook tests: increase timeout, fix platform/backend dependent code. Torch is very slow, so I had to increase the timeout accordingly.
* Enable use of summary networks with functional API again (#434)
* summary networks: add tests for using functional API
* fix build functions for use with functional API
* [no ci] docs: add GitHub and Discourse links, reorder navbar
* [no ci] docs: acknowledge scikit-learn website
* [no ci] docs: capitalize navigation headings
* More tests (#437)
* fix docs of coupling flow
* add additional tests
* Automatically run slow tests when main is involved. (#438) In addition, this PR limits the slow tests to Windows and Python 3.10. The choices are somewhat arbitrary; my thinking was to test a setup that is not covered as much through everyday use by the devs.
* Update dispatch
* Update dispatching distributions
* Improve workflow tests with multiple summary nets / approximators
* Fix zombie find_distribution import
* Add readme entry [no ci]
* Update README: NumFOCUS affiliation, awesome-abi list (#445)
* fix is_symbolic_tensor
* remove multiple batch sizes, remove multiple python version tests, remove update-workflows branch from workflow style tests, add __init__ and conftest to test_point_approximators (#443)
* implement compile_from_config and get_compile_config (#442)
* implement compile_from_config and get_compile_config
* add optimizer build to compile_from_config
* Fix Optimal Transport for Compiled Contexts (#446)
* remove the is_symbolic_tensor check because this would otherwise skip the whole function for compiled contexts
* skip pyabc test
* fix sinkhorn and log_sinkhorn message formatting for jax by making the warning message worse
* update dispatch tests for more coverage
* Update issue templates (#448)
* Hotfix Version 2.0.1 (#431)
* fix optimal transport config (#429)
* run linter
* [skip-ci] bump version to 2.0.1
* Update issue templates
* Robustify kwargs passing inference networks, add class variables
* fix convergence method to debug for non-log sinkhorn
* Bump optimal transport default to False
* use logging.info for backend selection instead of logging.debug
* fix model comparison approximator
* improve docs and type hints
* improve One-Sample T-Test Notebook:
  - use torch as default backend
  - reduce range of N so users of jax won't be stuck with a slow notebook
  - use BayesFlow built-in MLP instead of keras.Sequential solution
  - general code cleanup
* remove backend print
* [skip ci] turn all single-quoted strings into double-quoted strings
* turn all single-quoted strings into double-quoted strings (amend to trigger workflow)

---------

Co-authored-by: Valentin Pratz <git@valentinpratz.de>
Co-authored-by: Valentin Pratz <112951103+vpratz@users.noreply.github.com>
Co-authored-by: stefanradev93 <stefan.radev93@gmail.com>
Co-authored-by: Marvin Schmitt <35921281+marvinschmitt@users.noreply.github.com>

* drafting feature
* Initialize projectors for invariant and equivariant DeepSet layers
* implement requested changes and improve activation

---------

Co-authored-by: Lars <lars@kuehmichel.de>
Co-authored-by: Valentin Pratz <git@valentinpratz.de>
Co-authored-by: Valentin Pratz <112951103+vpratz@users.noreply.github.com>
Co-authored-by: stefanradev93 <stefan.radev93@gmail.com>
Co-authored-by: Marvin Schmitt <35921281+marvinschmitt@users.noreply.github.com>
compile_from_config when loading approximators