Add module for calibrating impact functions #692
Merged
Changes from all commits
Commits
87 commits
5d8278e  Initial draft for calibration from scipy.optimize (peanutfun)
443545e  Draft for impact function calibration (peanutfun)
819aab5  Add first unit tests of calibration module (peanutfun)
96e3cb3  ci: Add bayesian-optimization during Jenkins build (peanutfun)
123c632  Add __init__.py for util/calibarte/test module (peanutfun)
107a836  Add climada.util.calibrate.test module to test discovery (peanutfun)
2af6f09  Add unit and integration tests, update code base (peanutfun)
0d6e80b  Start documenting new calibrate module (peanutfun)
23cae6c  Actually add the intregration test (peanutfun)
50f3fd9  Add some documentation (peanutfun)
d321832  commit PLEASE CLEAN UP (peanutfun)
24c0fbc  Add more docstrings and simplify imports through __init__ (peanutfun)
096a8d4  Add separate Output classes for each optimizer (peanutfun)
37c65d9  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
e8abb1a  Restructure calibration module (peanutfun)
3d94151  Add tutorial on impact function calibration (peanutfun)
ea0eb47  Update tutorial (peanutfun)
0e5a557  Remove hazard event selection from calibrate.Input (peanutfun)
e1fe68a  Update calibration tutorial (peanutfun)
68c421b  Merge branch 'develop' into calibrate-impact-functions (emanuel-schmid)
df03b0d  Update climada/util/calibrate/bayesian_optimizer.py (peanutfun)
5ef4a01  Separate computing cost from transforming impact objects (peanutfun)
91cfd83  Merge branch 'calibrate-impact-functions' of https://github.com/CLIMA… (peanutfun)
4e1f104  Add evaluator for calibration output (peanutfun)
43f40b3  Add TestBayesianOptimizer test to test loader (peanutfun)
97d763a  Update code, docs, and tutorial (peanutfun)
d43eb8a  Update tutorial (peanutfun)
dda079d  Add option to adjust data frame alignment (peanutfun)
185866f  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
5fdbf4e  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
c2ede47  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
c40b85a  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
645862a  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
357541a  Merge branch 'develop' into calibrate-impact-functions (emanuel-schmid)
2e536ef  add seaborn (emanuel-schmid)
2ace55f  Add function to plot Impf variability of calibration (#791) (timschmi95)
423b5b8  Improve alignment and handling of NaNs (peanutfun)
8349016  Add seaborn to dependencies (peanutfun)
67ef797  Merge branch 'calibrate-impact-functions' of https://github.com/CLIMA… (peanutfun)
24c1fc3  Split tests into multiple files, finish up (peanutfun)
1538e78  Move impact transform and align to Input (peanutfun)
832da6a  Use MultiIndex in parameter space dataframe (peanutfun)
af959a7  Update tutorial (peanutfun)
042e6c9  Fix requirements for calibration module (peanutfun)
677fba9  Remove plot_impf_set function and improve exception type (peanutfun)
3a046c7  Add tests for OutputEvaluator (peanutfun)
066afe5  Fix name of bayes_opt package on PyPI (peanutfun)
fbc1701  Make sure latest seaborn is installed on Jenkins (peanutfun)
2986b51  Remove unused function definition (peanutfun)
8ab8600  Fix linter issues and remove unused code (peanutfun)
85a5826  Add BayesianOptimizerOutputEvaluator (peanutfun)
472d0c5  Fix typo in tutorial (peanutfun)
6ef09e5  Add GNU license header to new files (peanutfun)
661f991  Update CHANGELOG.md (peanutfun)
90f2749  edit authors.md
53feef6  Fix a bug in parameter space plot (peanutfun)
6fa9315  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
d4d6777  Add suggestions from code review (peanutfun)
a0bada5  Add iteration controller for BayesianOptimizer (peanutfun)
cab68c4  Fix calibrate module init and add first controller tests (peanutfun)
a9c45d7  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
9bf050c  Fix handling of instance maximum for constrained optimization (peanutfun)
17046db  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
72f6263  Add tests for BayesianOptimizerController and fix verbosity (peanutfun)
5a8ac9f  Add test for plotting parameter space (peanutfun)
01f0f1c  Update integration tests for BayesianOptimizer (peanutfun)
b8f1cac  Add explanation of BayesianOptimizerController to tutorial (peanutfun)
10e9679  Merge branch 'develop' into calibrate-impact-functions (emanuel-schmid)
2326db2  Add option to store and load calibration results (peanutfun)
e270e1a  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
5e2dd75  Update plots and tutorial (peanutfun)
10e0014  Add JOSS paper and associated GitHub workflow (#876) (peanutfun)
1c412da  JOSS Paper: Fix typo (peanutfun)
2b1ecda  JOSS Paper: Fix DOIs (peanutfun)
e7e441a  JOSS Paper: Do not call it 'natural disaster'. (peanutfun)
60b3a6e  JOSS: Add missing URL for Rougier et al. (peanutfun)
ba9bb7f  JOSS: Remove unused reference (peanutfun)
1d3d014  Add overview section to tutorial and include review suggestions (peanutfun)
9949f3e  Add quickstart section to tutorial (peanutfun)
a446edd  Shorten quickstart (peanutfun)
07adc6f  Guide readers through quickstart section (peanutfun)
1f14859  Replace Riedel et al. 2024 preprint with publication (peanutfun)
25fb7c3  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
0183307  ci: Remove JOSS paper build job (peanutfun)
1511cc1  Fix CHANGELOG.md and add entry in Citation guide (peanutfun)
102a335  Merge branch 'develop' into calibrate-impact-functions (peanutfun)
2232c8c  Revert changes to Jenkinsfile (peanutfun)
@@ -0,0 +1,261 @@
"""
This file is part of CLIMADA.

Copyright (C) 2017 ETH Zurich, CLIMADA contributors listed in AUTHORS.

CLIMADA is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free
Software Foundation, version 3.

CLIMADA is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with CLIMADA. If not, see <https://www.gnu.org/licenses/>.

---
Integration tests for calibration module
"""

import unittest

import pandas as pd
import numpy as np
import numpy.testing as npt
from scipy.optimize import NonlinearConstraint
from sklearn.metrics import mean_squared_error
from matplotlib.axes import Axes

from climada.entity import ImpactFuncSet, ImpactFunc

from climada.util.calibrate import (
    Input,
    ScipyMinimizeOptimizer,
    BayesianOptimizer,
    OutputEvaluator,
    BayesianOptimizerOutputEvaluator,
    BayesianOptimizerController,
)

from climada.util.calibrate.test.test_base import hazard, exposure


class TestScipyMinimizeOptimizer(unittest.TestCase):
"""Test the TestScipyMinimizeOptimizer""" | ||
|
||
def setUp(self) -> None: | ||
"""Prepare input for optimization""" | ||
self.hazard = hazard() | ||
self.hazard.frequency = np.ones_like(self.hazard.event_id) | ||
self.hazard.date = self.hazard.frequency | ||
self.hazard.event_name = ["event"] * len(self.hazard.event_id) | ||
self.exposure = exposure() | ||
self.events = [10, 1] | ||
self.hazard = self.hazard.select(event_id=self.events) | ||
self.data = pd.DataFrame( | ||
data={"a": [3, 1], "b": [0.2, 0.01]}, index=self.events | ||
) | ||
self.impact_to_dataframe = lambda impact: impact.impact_at_reg(["a", "b"]) | ||
self.impact_func_creator = lambda slope: ImpactFuncSet( | ||
[ | ||
ImpactFunc( | ||
intensity=np.array([0, 10]), | ||
mdd=np.array([0, 10 * slope]), | ||
paa=np.ones(2), | ||
id=1, | ||
haz_type="TEST", | ||
) | ||
] | ||
) | ||
self.input = Input( | ||
self.hazard, | ||
self.exposure, | ||
self.data, | ||
self.impact_func_creator, | ||
self.impact_to_dataframe, | ||
mean_squared_error, | ||
) | ||
|
||
def test_single(self): | ||
"""Test with single parameter""" | ||
optimizer = ScipyMinimizeOptimizer(self.input) | ||
output = optimizer.run(params_init={"slope": 0.1}) | ||
|
||
# Result should be nearly exact | ||
self.assertTrue(output.result.success) | ||
self.assertAlmostEqual(output.params["slope"], 1.0) | ||
self.assertAlmostEqual(output.target, 0.0) | ||
|
||
def test_bound(self): | ||
"""Test with single bound""" | ||
self.input.bounds = {"slope": (-1.0, 0.91)} | ||
optimizer = ScipyMinimizeOptimizer(self.input) | ||
output = optimizer.run(params_init={"slope": 0.1}) | ||
|
||
# Result should be very close to the bound | ||
self.assertTrue(output.result.success) | ||
self.assertGreater(output.params["slope"], 0.89) | ||
self.assertAlmostEqual(output.params["slope"], 0.91, places=2) | ||
|
||
def test_multiple_constrained(self): | ||
"""Test with multiple constrained parameters""" | ||
# Set new generator | ||
self.input.impact_func_creator = lambda intensity_1, intensity_2: ImpactFuncSet( | ||
[ | ||
ImpactFunc( | ||
intensity=np.array([0, intensity_1, intensity_2]), | ||
mdd=np.array([0, 1, 3]), | ||
paa=np.ones(3), | ||
id=1, | ||
haz_type="TEST", | ||
) | ||
] | ||
) | ||
|
||
# Constraint: param[0] < param[1] (intensity_1 < intensity_2) | ||
self.input.constraints = NonlinearConstraint( | ||
lambda params: params[0] - params[1], -np.inf, 0.0 | ||
) | ||
self.input.bounds = {"intensity_1": (0, 3.1), "intensity_2": (0, 3.1)} | ||
|
||
# Run optimizer | ||
optimizer = ScipyMinimizeOptimizer(self.input) | ||
output = optimizer.run( | ||
params_init={"intensity_1": 2, "intensity_2": 2}, | ||
options=dict(gtol=1e-5, xtol=1e-5), | ||
) | ||
|
||
# Check results (low accuracy) | ||
self.assertTrue(output.result.success) | ||
print(output.result.message) | ||
print(output.result.status) | ||
self.assertAlmostEqual(output.params["intensity_1"], 1.0, places=2) | ||
self.assertGreater(output.params["intensity_2"], 2.8) # Should be 3.0 | ||
self.assertAlmostEqual(output.target, 0.0, places=3) | ||
|
||
|
||
class TestBayesianOptimizer(unittest.TestCase): | ||
"""Integration tests for the BayesianOptimizer""" | ||
|
||
def setUp(self) -> None: | ||
"""Prepare input for optimization""" | ||
self.hazard = hazard() | ||
self.hazard.frequency = np.ones_like(self.hazard.event_id) | ||
self.hazard.date = self.hazard.frequency | ||
self.hazard.event_name = ["event"] * len(self.hazard.event_id) | ||
self.exposure = exposure() | ||
self.events = [10, 1] | ||
self.hazard = self.hazard.select(event_id=self.events) | ||
self.data = pd.DataFrame( | ||
data={"a": [3, 1], "b": [0.2, 0.01]}, index=self.events | ||
) | ||
self.impact_to_dataframe = lambda impact: impact.impact_at_reg(["a", "b"]) | ||
self.impact_func_creator = lambda slope: ImpactFuncSet( | ||
[ | ||
ImpactFunc( | ||
intensity=np.array([0, 10]), | ||
mdd=np.array([0, 10 * slope]), | ||
paa=np.ones(2), | ||
id=1, | ||
haz_type="TEST", | ||
) | ||
] | ||
) | ||
self.input = Input( | ||
self.hazard, | ||
self.exposure, | ||
self.data, | ||
self.impact_func_creator, | ||
self.impact_to_dataframe, | ||
mean_squared_error, | ||
) | ||
|
||
def test_single(self): | ||
"""Test with single parameter""" | ||
self.input.bounds = {"slope": (-1, 3)} | ||
controller = BayesianOptimizerController( | ||
init_points=10, n_iter=20, max_iterations=1 | ||
) | ||
optimizer = BayesianOptimizer(self.input, random_state=1) | ||
output = optimizer.run(controller) | ||
|
||
# Check result (low accuracy) | ||
self.assertAlmostEqual(output.params["slope"], 1.0, places=2) | ||
self.assertAlmostEqual(output.target, 0.0, places=3) | ||
self.assertEqual(output.p_space.dim, 1) | ||
self.assertTupleEqual(output.p_space_to_dataframe().shape, (30, 2)) | ||
self.assertEqual(controller.iterations, 1) | ||
|
||
def test_multiple_constrained(self): | ||
"""Test with multiple constrained parameters""" | ||
# Set new generator | ||
self.input.impact_func_creator = lambda intensity_1, intensity_2: ImpactFuncSet( | ||
[ | ||
ImpactFunc( | ||
intensity=np.array([0, intensity_1, intensity_2]), | ||
mdd=np.array([0, 1, 3]), | ||
paa=np.ones(3), | ||
id=1, | ||
haz_type="TEST", | ||
) | ||
] | ||
) | ||
|
||
# Constraint: param[0] < param[1] (intensity_1 < intensity_2) | ||
self.input.constraints = NonlinearConstraint( | ||
lambda intensity_1, intensity_2: intensity_1 - intensity_2, -np.inf, 0.0 | ||
) | ||
self.input.bounds = {"intensity_1": (-1, 4), "intensity_2": (-1, 4)} | ||
# Run optimizer | ||
optimizer = BayesianOptimizer(self.input, random_state=1) | ||
controller = BayesianOptimizerController.from_input( | ||
self.input, sampling_base=5, max_iterations=3 | ||
) | ||
output = optimizer.run(controller) | ||
|
||
# Check results (low accuracy) | ||
self.assertEqual(output.p_space.dim, 2) | ||
self.assertAlmostEqual(output.params["intensity_1"], 1.0, places=2) | ||
self.assertAlmostEqual(output.params["intensity_2"], 3.0, places=1) | ||
self.assertAlmostEqual(output.target, 0.0, places=3) | ||
self.assertGreater(controller.iterations, 1) | ||
|
||
# Check constraints in parameter space | ||
p_space = output.p_space_to_dataframe() | ||
self.assertSetEqual( | ||
set(p_space.columns.to_list()), | ||
{ | ||
("Parameters", "intensity_1"), | ||
("Parameters", "intensity_2"), | ||
("Calibration", "Cost Function"), | ||
("Calibration", "Constraints Function"), | ||
("Calibration", "Allowed"), | ||
}, | ||
) | ||
self.assertGreater(p_space.shape[0], 50) # Two times random iterations | ||
self.assertEqual(p_space.shape[1], 5) | ||
p_allowed = p_space.loc[p_space["Calibration", "Allowed"], "Parameters"] | ||
npt.assert_array_equal( | ||
(p_allowed["intensity_1"] < p_allowed["intensity_2"]).to_numpy(), | ||
np.full_like(p_allowed["intensity_1"].to_numpy(), True), | ||
) | ||
|
||
def test_plots(self): | ||
"""Check if executing the default plots works""" | ||
self.input.bounds = {"slope": (-1, 3)} | ||
optimizer = BayesianOptimizer(self.input, random_state=1) | ||
controller = BayesianOptimizerController.from_input( | ||
self.input, max_iterations=1 | ||
) | ||
output = optimizer.run(controller) | ||
|
||
output_eval = OutputEvaluator(self.input, output) | ||
output_eval.impf_set.plot() | ||
output_eval.plot_at_event() | ||
output_eval.plot_at_region() | ||
output_eval.plot_event_region_heatmap() | ||
|
||
output_eval = BayesianOptimizerOutputEvaluator(self.input, output) | ||
ax = output_eval.plot_impf_variability() | ||
self.assertIsInstance(ax, Axes) |
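Read together, these integration tests double as the clearest usage reference for the calibration API added in this PR. The sketch below condenses the Bayesian path from TestBayesianOptimizer into a user-facing form; it is a minimal, hypothetical example in which my_hazard, my_exposure, and observed_impacts stand in for a user's own Hazard, Exposures, and observed-impact DataFrame, and the impact function is the same single-parameter linear function used in setUp above.

import numpy as np
from sklearn.metrics import mean_squared_error

from climada.entity import ImpactFunc, ImpactFuncSet
from climada.util.calibrate import BayesianOptimizer, BayesianOptimizerController, Input


def impact_func_creator(slope):
    """Return an ImpactFuncSet with one linear impact function (free parameter: slope)."""
    return ImpactFuncSet(
        [
            ImpactFunc(
                intensity=np.array([0, 10]),
                mdd=np.array([0, 10 * slope]),
                paa=np.ones(2),
                id=1,
                haz_type="TEST",
            )
        ]
    )


# Assemble the calibration input; arguments are passed positionally as in setUp above.
# my_hazard, my_exposure, and observed_impacts are placeholders, not part of this PR.
inp = Input(
    my_hazard,  # climada.hazard.Hazard
    my_exposure,  # climada.entity.Exposures
    observed_impacts,  # pandas.DataFrame of observed impacts (events as index, regions as columns)
    impact_func_creator,
    lambda impact: impact.impact_at_reg(["a", "b"]),  # map an Impact to the layout of observed_impacts
    mean_squared_error,  # cost function comparing modelled and observed impacts
)
inp.bounds = {"slope": (-1, 3)}

# Run the Bayesian optimization and inspect the best parameters and the sampled parameter space
controller = BayesianOptimizerController.from_input(inp, max_iterations=3)
output = BayesianOptimizer(inp, random_state=1).run(controller)
print(output.params, output.target)
p_space = output.p_space_to_dataframe()  # MultiIndex DataFrame, see test_multiple_constrained above

The diagnostic plots exercised in test_plots (OutputEvaluator and BayesianOptimizerOutputEvaluator) can then be applied to inp and output in the same way as in that test.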
@@ -0,0 +1,29 @@
"""
This file is part of CLIMADA.

Copyright (C) 2017 ETH Zurich, CLIMADA contributors listed in AUTHORS.

CLIMADA is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free
Software Foundation, version 3.

CLIMADA is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with CLIMADA. If not, see <https://www.gnu.org/licenses/>.

---
Impact function calibration module
"""

from .base import Input, OutputEvaluator
from .bayesian_optimizer import (
    BayesianOptimizer,
    BayesianOptimizerController,
    BayesianOptimizerOutput,
    BayesianOptimizerOutputEvaluator,
    select_best
)
from .scipy_optimizer import ScipyMinimizeOptimizer
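This __init__ fixes the public import surface of climada.util.calibrate, which is exactly what the integration tests above import. For completeness, here is a hedged sketch of the SciPy-based path, reusing the inp object from the previous sketch and mirroring test_single and test_bound of TestScipyMinimizeOptimizer.

from climada.util.calibrate import ScipyMinimizeOptimizer

# Optional box constraint on the parameter, set as an attribute as in test_bound above
inp.bounds = {"slope": (-1.0, 0.91)}

# ScipyMinimizeOptimizer wraps a SciPy minimizer (output.result is the SciPy result object);
# the params_init keys must match the parameter names of the impact_func_creator
output = ScipyMinimizeOptimizer(inp).run(params_init={"slope": 0.1})
print(output.result.success, output.params["slope"], output.target)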