Stateless adapters #536


Open · wants to merge 18 commits into base: dev

Conversation

stefanradev93 (Contributor)

This PR implements the transition towards stateless adapters, as announced in v2.0.4. In particular, it:

  • Removes the stage argument from data sets and all data set builders.
  • Removes the stage argument from the Adapter and all callers.
  • Removes adaptive mean computation from the standardize transform, as the functionality is now superseded by the Standardize layers in approximators. The remaining standardize transform should now be used only with fixed means / stds.
  • Removes the stage argument from the NNPE transform. @elseml / @vpratz / @paul-buerkner This is actually a problem for the 1% of use cases, since we don't want the noise during inference. Thinking it through, NNPE fits the philosophy of an augmentation (introduced in 2.0.3), i.e., something applied strictly during training, better than that of a transform. Thus, my proposed solution is to introduce an augmentations module and start collecting such transforms there (more to come). Let me know what you think.
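For illustration, a rough sketch of what NNPE could look like as a training-only augmentation. The class name, parameter names, and the spike-and-slab parameterization here are assumptions for the sketch, not the actual bayesflow API; the key point is that an augmentation is just a callable applied to batches during training, so no stage argument is needed:

```python
import numpy as np


class NNPEAugmentation:
    """Hypothetical sketch: NNPE as a training-time augmentation.

    Adds spike-and-slab noise to the data. Because augmentations are
    applied only during training, no ``stage`` argument is required.
    """

    def __init__(self, spike_scale=0.01, slab_scale=0.25, spike_prob=0.5, seed=None):
        self.spike_scale = spike_scale
        self.slab_scale = slab_scale
        self.spike_prob = spike_prob
        self.rng = np.random.default_rng(seed)

    def __call__(self, data: np.ndarray) -> np.ndarray:
        # Per element, draw from a narrow Gaussian "spike" or a
        # heavy-tailed Cauchy "slab", then add the noise to the data.
        is_spike = self.rng.random(data.shape) < self.spike_prob
        spike = self.rng.normal(0.0, self.spike_scale, size=data.shape)
        slab = self.rng.standard_cauchy(size=data.shape) * self.slab_scale
        return data + np.where(is_spike, spike, slab)
```

At training time a dataset would simply call the augmentation on each batch; at inference time it is never invoked, which resolves the stage problem by construction.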

codecov bot commented Jul 11, 2025

Codecov Report

Attention: Patch coverage is 92.85714% with 2 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| bayesflow/augmentations/nnpe.py | 91.66% | 1 Missing ⚠️ |
| bayesflow/datasets/disk_dataset.py | 0.00% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| bayesflow/__init__.py | 100.00% <ø> (ø) |
| bayesflow/adapters/adapter.py | 85.89% <100.00%> (-0.35%) ⬇️ |
| bayesflow/adapters/transforms/__init__.py | 100.00% <ø> (ø) |
| bayesflow/adapters/transforms/standardize.py | 100.00% <100.00%> (+6.52%) ⬆️ |
| bayesflow/approximators/continuous_approximator.py | 90.40% <100.00%> (ø) |
| ...low/approximators/model_comparison_approximator.py | 84.39% <100.00%> (ø) |
| bayesflow/augmentations/__init__.py | 100.00% <100.00%> (ø) |
| bayesflow/datasets/offline_dataset.py | 82.14% <100.00%> (-0.32%) ⬇️ |
| bayesflow/datasets/online_dataset.py | 78.12% <100.00%> (-0.67%) ⬇️ |
| bayesflow/augmentations/nnpe.py | 87.50% <91.66%> (ø) |

... and 1 more

vpratz (Collaborator) commented Jul 11, 2025

I agree that NNPE is a better fit for an augmentation than for a transform. Moving it as part of this PR (with a deprecation for the transform) sounds sensible to me.

elseml (Member) commented Jul 11, 2025

Agree, that should fit the use case perfectly well. Could you provide a first outline of the augmentation module structure for NNPE that you have in mind (that we can then iterate on if needed)?

elseml (Member) left a review comment

Everything besides the open NNPE question looks unproblematic to me. Leaving a reminder for us to move the NNPE tests once the augmentation is implemented.

@stefanradev93 stefanradev93 requested a review from elseml July 12, 2025 15:32
vpratz (Collaborator) commented Jul 13, 2025

I like the changes overall. I'm currently removing the .standardize() calls from the experimental notebooks and will push the changes when I'm done.

The signature of Adapter.standardize() is now a bit awkward in my opinion: mean and std are required, but they appear in neither the signature nor the docstring and are only passed via kwargs. Do we want to change this, @stefanradev93?
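As a point of reference for that discussion, here is a minimal stand-in sketch (not the real bayesflow.Adapter, and the method body is invented for illustration) of making mean and std explicit keyword-only parameters, so they show up in the signature and docstring instead of being swallowed by **kwargs:

```python
import numpy as np


class Adapter:
    """Toy stand-in for illustration only; not the bayesflow Adapter."""

    def __init__(self):
        self.transforms = []

    def standardize(self, keys, *, mean, std):
        """Standardize `keys` with *fixed* statistics.

        Keyword-only `mean` and `std` make the required arguments
        visible in the signature, rather than hiding them in kwargs.
        """
        key_list = [keys] if isinstance(keys, str) else list(keys)

        def transform(data):
            out = dict(data)
            for key in key_list:
                out[key] = (np.asarray(out[key]) - mean) / std
            return out

        self.transforms.append(transform)
        return self  # allow chaining, as adapter builders typically do

    def __call__(self, data):
        for transform in self.transforms:
            data = transform(data)
        return data
```

With this shape, calling `standardize("x")` without statistics fails immediately with a clear TypeError instead of silently missing the kwargs, which matches the "fixed means/stds only" intent of the remaining transform.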

stefanradev93 (Contributor, Author) commented Jul 13, 2025 via email

3 participants