409 changes: 409 additions & 0 deletions bird/calibration/param_nn.py

Large diffs are not rendered by default.

22 changes: 22 additions & 0 deletions bird/calibration/scaling.py
@@ -0,0 +1,22 @@
def scale_x(inp, min_, max_):
    # Min-max scale the input variable x to [0, 1]
    return (inp - min_) / (max_ - min_)


def scale_par(inp, min_, max_):
    # Parameters are min-max scaled the same way as x
    return scale_x(inp, min_, max_)


def scale_y(inp, mean_, scale_):
    # Standardize the output y with its mean and scale
    return (inp - mean_) / scale_


def unscale_x(inp, min_, max_):
    # Invert the min-max scaling of x
    return inp * (max_ - min_) + min_


def unscale_par(inp, min_, max_):
    return unscale_x(inp, min_, max_)


def unscale_y(inp, mean_, scale_):
    # Invert the standardization of y
    return inp * scale_ + mean_
1 change: 0 additions & 1 deletion bird/preprocess/dynamic_mixer/mixing_fvModels.py
@@ -10,7 +10,6 @@ def check_input(input_dict):
assert isinstance(input_dict, dict)
mix_type = []
for mix in input_dict["mixers"]:

if "x" in mix:
mix_type.append("expl")
else:
80 changes: 80 additions & 0 deletions papers/tutorial/calibration/README.md
@@ -0,0 +1,80 @@
# Calibration tutorial

This tutorial demonstrates how to conduct a calibration with and without a surrogate forward model.

## Running the tutorial
We assume that bird is installed within an environment called `bird` (see the installation instructions at the root of the repo).
You will also need to install additional dependencies:
```
conda activate bird
cd ${BIRD_DIR}/../papers/tutorial/calibration
pip install -r requirements.txt
```
Note that `${BIRD_DIR}` is not an environment variable, and should be replaced by the path to the root of the repo.

## Calibration Objective

The PDF of a Beta(alpha, beta) distribution is observed.
Within the context of bioreactors, one can consider this PDF to be a measured bubble size distribution (BSD).

A separate dummy numerical model is available that predicts the BSD as a normal distribution N(mu, sigma). The input parameters of the numerical model are mu and sigma. The objective of the modeler is to find mu and sigma such that the predicted BSD matches the observed BSD.

We assume that the numerical model is expensive. Instead of using it directly as the forward model in the calibration, one would like to use a cheaper surrogate model. Here, the surrogate model is a neural network.
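
For concreteness, a minimal sketch of this setup is given below, assuming `scipy` is available. The function and variable names are illustrative only and are not the tutorial's actual code:

```python
import numpy as np
from scipy.stats import beta, norm

# x grid on which the bubble size distribution (BSD) is evaluated
x = np.linspace(0.01, 0.99, 100)

# "Observed" BSD: the PDF of a Beta(alpha, beta) distribution
alpha_obs, beta_obs = 5.0, 5.0
observed_bsd = beta.pdf(x, alpha_obs, beta_obs)


# Dummy numerical model: predicts the BSD as a normal PDF N(mu, sigma)
def forward_model(mu, sigma):
    return norm.pdf(x, loc=mu, scale=sigma)


# Calibration goal: find (mu, sigma) such that forward_model(mu, sigma)
# matches observed_bsd as closely as possible
```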


## Building the surrogate

The class `Param_NN` available in BiRD allows creating a parametric surrogate. Here, the inputs of the surrogate are the parameters `mu` and `sigma`, and the variable `x` that parameterizes the PDF.

Constructing such a surrogate is not computationally useful here since the forward simulation is cheap. This is only for illustrative purposes!
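
For intuition, here is a minimal, hypothetical sketch of how such a parametric training dataset could be assembled and scaled. The scaling helpers are those added in `bird/calibration/scaling.py`; the import path, parameter ranges, and variable names below are assumptions for illustration, not the tutorial's actual code:

```python
import numpy as np
from scipy.stats import norm

from bird.calibration.scaling import scale_par, scale_x, scale_y  # path assumed from repo layout

rng = np.random.default_rng(0)
n_sim = 200  # number of forward simulations used to build the surrogate
mu_samples = rng.uniform(0.1, 0.9, n_sim)      # illustrative parameter ranges
sigma_samples = rng.uniform(0.05, 0.5, n_sim)
x = np.linspace(0.01, 0.99, 64)

# Each training point is (mu, sigma, x) -> BSD(x); one forward simulation
# contributes len(x) training points
inputs, outputs = [], []
for mu, sigma in zip(mu_samples, sigma_samples):
    y = norm.pdf(x, loc=mu, scale=sigma)
    for xi, yi in zip(x, y):
        inputs.append([mu, sigma, xi])
        outputs.append(yi)
inputs = np.array(inputs)
outputs = np.array(outputs)

# Scale the parameters, the variable x, and the outputs before training
inputs[:, 0] = scale_par(inputs[:, 0], mu_samples.min(), mu_samples.max())
inputs[:, 1] = scale_par(inputs[:, 1], sigma_samples.min(), sigma_samples.max())
inputs[:, 2] = scale_x(inputs[:, 2], x.min(), x.max())
outputs = scale_y(outputs, outputs.mean(), outputs.std())
```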

Generate the dataset and train the neural net surrogate with `python tut_surrogate.py`.
The following plots should be generated. The first one shows the training and testing loss convergence, and the second one shows examples of the surrogate accuracy on unseen data.

<p align="center">
<img src="/papers/tutorial/calibration/assets/Loss_surr.png" width="300" height="187.5"/>
<img src="/papers/tutorial/calibration/assets/test_surr.png" width="937.5" height="187.5"/>
</p>


## Calibration

For the calibration, we can use either the true function or the surrogate model as the forward model. The observed PDF is assumed to be noiseless. Even though the observation is noiseless, an uncertainty is computed to account for the missing physics (a normal PDF cannot exactly reproduce a Beta PDF, so there is some irreducible model-form error).
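
As an illustration of how such an uncertainty can enter the calibration, here is a hedged sketch of a Gaussian log-likelihood with a missing-physics standard deviation `sigma`. The signature and the `forward_model` callable are assumptions for illustration; the actual implementation lives in `tut_calibration_all.py` and may differ:

```python
import numpy as np


def log_likelihood(theta, data_x, data_y, forward_model, sigma):
    # theta = (mu, sigma_model): parameters of the dummy numerical model
    mu, sig = theta
    y_pred = forward_model(data_x, mu, sig)  # predicted BSD on the data grid
    residual = data_y - y_pred               # model/observation mismatch
    # sigma is the missing-physics standard deviation
    return -0.5 * np.sum((residual / sigma) ** 2 + np.log(2.0 * np.pi * sigma**2))
```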

Calibrate without a surrogate for `alpha=5, beta=5`: `python tut_calibration_all.py --alpha 5 --beta 5`

<p align="center">
<img src="/papers/tutorial/calibration/assets/True_a_5_b_5_prop.png" width="225" height="187.5"/>
<img src="/papers/tutorial/calibration/assets/True_a_5_b_5_corner.png" width="225" height="187.5"/>
</p>


Calibrate without a surrogate for `alpha=2, beta=5`: `python tut_calibration_all.py --alpha 2 --beta 5`


<p align="center">
<img src="/papers/tutorial/calibration/assets/True_a_2_b_5_prop.png" width="225" height="187.5"/>
<img src="/papers/tutorial/calibration/assets/True_a_2_b_5_corner.png" width="225" height="187.5"/>
</p>

Clearly, the amount of missing physics varies depending on the observations.

The same exercise can be done using a neural net surrogate. Note that this step will not run if the `Building the surrogate` step was not completed first.

Calibrate with a surrogate for `alpha=5, beta=5`: `python tut_calibration_all.py --useNN --alpha 5 --beta 5`

<p align="center">
<img src="/papers/tutorial/calibration/assets/Surr_a_5_b_5_prop.png" width="225" height="187.5"/>
<img src="/papers/tutorial/calibration/assets/Surr_a_5_b_5_corner.png" width="225" height="187.5"/>
</p>


Calibrate with a surrogate for `alpha=2, beta=5`: `python tut_calibration_all.py --useNN --alpha 2 --beta 5`


<p align="center">
<img src="/papers/tutorial/calibration/assets/Surr_a_2_b_5_prop.png" width="225" height="187.5"/>
<img src="/papers/tutorial/calibration/assets/Surr_a_2_b_5_corner.png" width="225" height="187.5"/>
</p>

Using the surrogate gives predictions similar to those obtained without a surrogate, yet the surrogate only required 200 forward simulations to construct, which is typically far fewer than the number of forward evaluations needed by the calibration itself.
Binary file added papers/tutorial/calibration/assets/Loss_surr.png
Binary file added papers/tutorial/calibration/assets/Surr_a_2_b_5_corner.png
Binary file added papers/tutorial/calibration/assets/Surr_a_2_b_5_prop.png
Binary file added papers/tutorial/calibration/assets/Surr_a_5_b_5_corner.png
Binary file added papers/tutorial/calibration/assets/Surr_a_5_b_5_prop.png
Binary file added papers/tutorial/calibration/assets/True_a_2_b_5_corner.png
Binary file added papers/tutorial/calibration/assets/True_a_2_b_5_prop.png
Binary file added papers/tutorial/calibration/assets/True_a_5_b_5_corner.png
Binary file added papers/tutorial/calibration/assets/True_a_5_b_5_prop.png
Binary file added papers/tutorial/calibration/assets/test_surr.png
102 changes: 102 additions & 0 deletions papers/tutorial/calibration/post_cal.py
@@ -0,0 +1,102 @@
import corner
import numpy as np
from prettyPlot.plotting import *  # provides plt and pretty_labels used below


def post_process_cal(
labels_np,
labels,
np_mcmc_samples,
rangex,
forward_range,
data_x,
data_y,
sigma,
):
# Post process
ranges = []
ranges.append((0.01, 0.9))
ranges.append((0.01, 0.9))
truths = None

labels_np_disp = labels_np.copy()
ind_mu = labels_np_disp.index("mu_d")
labels_np_disp[ind_mu] = r"$\mu$"
ind_sigma = labels_np_disp.index("sigma_d")
labels_np_disp[ind_sigma] = r"$\sigma$"
fig = corner.corner(
np_mcmc_samples,
truths=truths,
labels=labels_np_disp,
truth_color="k",
bins=50,
range=ranges,
)
for ax in fig.get_axes():
        ax.tick_params(
            axis="both", labelsize=20
        )  # Set the tick label font size
        ax.xaxis.label.set_fontweight("bold")  # Set the X-axis label to bold
        ax.yaxis.label.set_fontweight("bold")  # Set the Y-axis label to bold
        ax.xaxis.label.set_fontname(
            "Times New Roman"
        )  # Set the X-axis label font to Times New Roman
        ax.yaxis.label.set_fontname(
            "Times New Roman"
        )  # Set the Y-axis label font to Times New Roman
        ax.xaxis.label.set_fontsize(20)  # Set the X-axis label font size
        ax.yaxis.label.set_fontsize(20)  # Set the Y-axis label font size
for tick in ax.get_xticklabels() + ax.get_yticklabels():
tick.set_fontname("Times New Roman")
tick.set_fontweight("bold")
for ax in fig.get_axes():
ax.set_xlabel(ax.get_xlabel(), fontweight="bold")
ax.set_ylabel(ax.get_ylabel(), fontweight="bold")

# Convergence
fig, axes = plt.subplots(2, sharex=True)
for i in range(2):
ax = axes[i]
ax.plot(np_mcmc_samples[:, i], "k", alpha=0.3, rasterized=True)
ax.set_ylabel(labels[i])

nsamples = np_mcmc_samples.shape[0]
print("Num samples = ", nsamples)
realization = []
for i in range(nsamples):
y = forward_range(np_mcmc_samples[i, :].astype("float32"))
realization.append(y)
realization = np.array(realization)

mean_real = np.mean(realization, axis=0)
min_real = np.min(realization, axis=0)
max_real = np.max(realization, axis=0)
std97_5_real = np.percentile(realization, 97.5, axis=0)
std2_5_real = np.percentile(realization, 2.5, axis=0)

fig = plt.figure()
plt.plot(data_x, data_y, "o", color="r", markersize=7, label="Data")
plt.plot(data_x, data_y - 2 * sigma, "--", color="r")
plt.plot(
data_x,
data_y + 2 * sigma,
"--",
color="r",
label="95% Exp. confidence interval",
)

plt.plot(
rangex, mean_real, color="k", linewidth=3, label="mean degradation"
)
plt.plot(
rangex,
std97_5_real,
"--",
color="k",
linewidth=3,
label="95% Model confidence interval",
)
plt.plot(rangex, std2_5_real, "--", color="k", linewidth=3)
pretty_labels("", "", 20, title=f"Missing phys. unc. = {sigma:.2g}")

plt.show()
4 changes: 4 additions & 0 deletions papers/tutorial/calibration/requirements.txt
@@ -0,0 +1,4 @@
joblib
tensorflow
scikit-learn
tf2jax