Changes from all commits
140 commits
b2155dd
Adding sample RF space for tabular collection design
Neeratyoy Jun 23, 2021
ce405e6
Placeholder SVM benchmark to interface tabular data collection
Neeratyoy Jun 23, 2021
2ef3af8
Writing common ML benchmark class for tabular collection
Neeratyoy Jun 24, 2021
61b6963
Adding placeholder for HistGradientBoostedClassifier
Neeratyoy Jun 24, 2021
a5d0217
Minor code cleaning
Neeratyoy Jun 24, 2021
3def203
Reformatting output dict + option to add more metrics
Neeratyoy Jun 26, 2021
750cc7d
Removing redundant import
Neeratyoy Jun 28, 2021
e7665e6
Decoupling storage of costs for each metric
Neeratyoy Jun 30, 2021
47fe4cd
Including test scores in objective
Neeratyoy Jul 1, 2021
2d085ec
Documenting the structure of information in each fn eval.
Neeratyoy Jul 1, 2021
2da9d5c
Some decisions on lower bound for subsample fidelity
Neeratyoy Jul 2, 2021
751d2e9
AbstractBenchmark update for fidelity option + including XGBoost
Neeratyoy Jul 6, 2021
3f84afb
Adding sample RF space for tabular collection design
Neeratyoy Jun 23, 2021
09b296a
Placeholder SVM benchmark to interface tabular data collection
Neeratyoy Jun 23, 2021
af4f593
Writing common ML benchmark class for tabular collection
Neeratyoy Jun 24, 2021
df2462d
Adding placeholder for HistGradientBoostedClassifier
Neeratyoy Jun 24, 2021
4d1d2d6
Minor code cleaning
Neeratyoy Jun 24, 2021
299e592
Reformatting output dict + option to add more metrics
Neeratyoy Jun 26, 2021
c46321d
Removing redundant import
Neeratyoy Jun 28, 2021
17f6634
Decoupling storage of costs for each metric
Neeratyoy Jun 30, 2021
7de891f
Including test scores in objective
Neeratyoy Jul 1, 2021
ec316c3
Documenting the structure of information in each fn eval.
Neeratyoy Jul 1, 2021
e7f69b9
Some decisions on lower bound for subsample fidelity
Neeratyoy Jul 2, 2021
edb3e7f
AbstractBenchmark update for fidelity option + including XGBoost
Neeratyoy Jul 6, 2021
642027b
Merge branch 'thesis-paper' of https://github.com/Neeratyoy/HPOBench …
Neeratyoy Jul 7, 2021
9e907e6
Option to load data splits from disk
Neeratyoy Jul 8, 2021
f0d4f36
Reordering data load to work for different cases
Neeratyoy Jul 12, 2021
dbeae7c
Updating source of SVM HP range
Neeratyoy Jul 14, 2021
f277a2e
Adding Tabular Benchmark class
Neeratyoy Jul 14, 2021
60d5646
Adding TabularBenchmark interface + easy import
Neeratyoy Jul 15, 2021
c4100fd
Adding LR space
Neeratyoy Jul 16, 2021
9c6dcdb
Standardizing fidelity space definitions
Neeratyoy Jul 19, 2021
74b6919
Standardizing HPs + Adding NN space
Neeratyoy Jul 19, 2021
785055e
Small placeholder for testing
Neeratyoy Jul 19, 2021
0159a35
Updating NN HP space + Helper function for TabularBenchmark
Neeratyoy Jul 20, 2021
e9e097a
Adding fidelity range retrieval utility to TabularBenchmark
Neeratyoy Jul 20, 2021
4797109
Enforcing subsample lower bound check inside objective
Neeratyoy Jul 21, 2021
dbb7327
Bug fix + adding precision as metric
Neeratyoy Jul 21, 2021
7d5ca57
Fixing param spaces and model building for LR, SVM
Neeratyoy Jul 22, 2021
a6d94bb
TabularBenchmark edit to read compressed files and query a dataframe
Neeratyoy Jul 26, 2021
93b6908
Not evaluating training set to save time
Neeratyoy Jul 27, 2021
8164eb0
Fidelity change for trees + NN space change
Neeratyoy Jul 27, 2021
6916c9c
Final RF space
Neeratyoy Jul 29, 2021
8e5912b
Final XGB space
Neeratyoy Jul 29, 2021
6968ac3
Final HistGB space
Neeratyoy Jul 30, 2021
79dd1f3
Finalizing RF, XGB, NN
Neeratyoy Aug 2, 2021
ca1e0d4
TabularBenchmark edit to process only table and metadata
Neeratyoy Aug 2, 2021
87133ed
Merge remote-tracking branch 'origin/development' into PR_Multi-fidel…
PhMueller Aug 4, 2021
6096204
Merge remote-tracking branch 'origin/development' into PR_Multi-fidel…
PhMueller Aug 4, 2021
0d70d36
TabularBenchmark
PhMueller Aug 11, 2021
12ebce8
Pycodestyle
PhMueller Aug 11, 2021
873781e
Flake8
PhMueller Aug 11, 2021
532a905
Adapt ML Benchmark Template to fit with current API
PhMueller Aug 11, 2021
9dbd61c
Correct Datamanager.
PhMueller Aug 11, 2021
0304146
Finalize HistGB Benchmarks
PhMueller Aug 11, 2021
3e95d19
Write OpenML Datamanager
PhMueller Aug 16, 2021
f3fbd58
Unify interface for the other ml benchmarks.
PhMueller Aug 16, 2021
e57fbcb
Flake + Pep
PhMueller Aug 16, 2021
f6131ea
Add Container Interface
PhMueller Aug 16, 2021
36bc391
Mark `task_id` as required.
PhMueller Aug 16, 2021
a5c7d62
Adapt Interfaces
PhMueller Aug 16, 2021
c5f6979
Fix minor errors.
PhMueller Aug 16, 2021
48af58d
Fix minor errors.
PhMueller Aug 16, 2021
cf24488
Pylint
PhMueller Aug 16, 2021
528dde1
Init Model can now handle Configurations
PhMueller Aug 17, 2021
6bdf5c0
PR Requests: Rename Classes
PhMueller Aug 17, 2021
b8b30a5
PR Requests: Move dependencies to correct directory
PhMueller Aug 17, 2021
875c594
PR Requests: Tabular Benchmarks - Remove unnecessary class definition
PhMueller Aug 17, 2021
8891e33
PR Requests: Minor improvements
PhMueller Aug 17, 2021
75f345d
PR Requests: Update upper bounds of the fidelities
PhMueller Aug 17, 2021
8c2ab6c
PR Requests: Remove OriginalTabBenchmarks
PhMueller Aug 17, 2021
e24d537
PR Requests: Revert the query function
PhMueller Aug 17, 2021
3c4f375
PR Requests: Minor improvements
PhMueller Aug 17, 2021
6fc7f57
Pycodestyle
PhMueller Aug 17, 2021
0430c68
Add missing requirements
PhMueller Aug 17, 2021
3eb3a2d
Minor Improvements
PhMueller Aug 17, 2021
fa691f7
ADD container recipes
PhMueller Aug 17, 2021
f64917e
PR: Fix path in tabular data loader
PhMueller Aug 19, 2021
b95d2a5
PR: Remove casting configspace to np.floats
PhMueller Aug 19, 2021
d7d7a2d
PR: Move everything back from ml_mmfb/ to ml/
PhMueller Aug 19, 2021
be641f8
PR: Remove pybnn from the init.
PhMueller Aug 19, 2021
7bc25bc
PR: Cleanup
PhMueller Aug 19, 2021
b0d9b7f
PR: Fix Tests
PhMueller Aug 19, 2021
59bd905
Adding public URLs for tabular benchmark
Neeratyoy Aug 19, 2021
6576e99
Merge branch 'PR_Multi-fidelity-tabular-benchmarks' into add-tabular-…
Neeratyoy Aug 19, 2021
f576fb3
Adding more models
Neeratyoy Aug 19, 2021
63f5177
Updating figshare URLs with new public ones
Neeratyoy Aug 20, 2021
5335831
PR Fix URLs and dependencies
PhMueller Aug 20, 2021
cf9b4ef
Updating URL for SVM data
Neeratyoy Aug 21, 2021
ed7d23e
Updating Tabular bench URLs
Neeratyoy Aug 23, 2021
9181bbb
PR Fix URLs and dependencies
PhMueller Aug 25, 2021
451ff08
PR Fix URLs and dependencies
PhMueller Aug 25, 2021
310b11e
Updating RF benchmark URL
Neeratyoy Aug 26, 2021
f01286b
Updating XGB URL
Neeratyoy Aug 26, 2021
12b72b1
PR Fix tests
PhMueller Aug 27, 2021
41aa96b
New URLs
PhMueller Aug 27, 2021
c23e354
Trigger Rebuild.
PhMueller Aug 30, 2021
1fa684c
Fix Dataloader Assertion
PhMueller Aug 30, 2021
5f015a6
Merge branch 'PR_Multi-fidelity-tabular-benchmarks' into add-tabular-…
Neeratyoy Aug 31, 2021
11c57bf
Merge branch 'development' into add-tabular-urls
Neeratyoy Sep 1, 2021
bfb3876
Redesigning reporting of val-test evaluations on query type
Neeratyoy Oct 1, 2021
8d7ea97
Merge branch 'development' into add-tabular-urls
Neeratyoy Oct 6, 2021
6394521
inference cost key fix
Neeratyoy Oct 6, 2021
9776824
Merge branch 'development' into tabular_v2
Neeratyoy Oct 18, 2021
460ae8c
Basic redesign of data collected on raw objective
Neeratyoy Oct 18, 2021
381eeaf
Merge branch 'add-tabular-urls' into tabular_v2
Neeratyoy Nov 10, 2021
cdd222f
Restructuring first iteration
Neeratyoy Nov 10, 2021
e1bb210
Updating ML benches version with revisions
Neeratyoy Nov 11, 2021
815da34
Minor update
Neeratyoy Nov 11, 2021
462549c
Adding model size info LR SVM NN
Neeratyoy Nov 15, 2021
e6e77a3
Adding model size metric for RF, XGB
Neeratyoy Nov 15, 2021
b79bc2d
Enforcing minor PEP constraints
Neeratyoy Nov 15, 2021
2f97931
Minor fixes to MLP and LR
Neeratyoy Nov 16, 2021
aeebf8f
Recording LCs for LR, RF, MLP
Neeratyoy Dec 19, 2021
48a88dd
Updating LR benchmark for LC collection
Neeratyoy Jan 3, 2022
e4c699f
Updating RF bench with LC collection
Neeratyoy Jan 3, 2022
a8f551f
Updating MLP bench with LC collection
Neeratyoy Jan 3, 2022
0a47c59
Adding minor check
Neeratyoy Jan 4, 2022
20546a7
Cleaning up seed usage in ML classes
Neeratyoy Jan 5, 2022
bfc42cc
Cleaning LC collection code for ML benchmarks
Neeratyoy Jan 7, 2022
15e8e1b
Merge branch 'tabular_v2' into LCs
Neeratyoy Jan 7, 2022
9734da1
Making LCs branch primary for experiments
Neeratyoy Jan 7, 2022
0b4dc1f
Adding option to record LC every k iterations
Neeratyoy Jan 9, 2022
46addaa
Updating LR to collect LC every k iterations
Neeratyoy Jan 9, 2022
e2033df
Updating RF to collect LC every k iterations
Neeratyoy Jan 9, 2022
67a3a32
Adding LC collection option to MLP
Neeratyoy Feb 6, 2022
1544e1f
Updating tabular benchmark data for ML benchmarks
Neeratyoy May 30, 2022
225a82b
version update for tabular benchmark
Neeratyoy May 30, 2022
fd25d57
Updating container information for ML benches
Neeratyoy Jun 2, 2022
0ad578f
Bug fix for containers and fidelity space
Neeratyoy Jun 3, 2022
dfa2034
Updating container version used for experiments
Neeratyoy Aug 31, 2022
5889a04
HPOBench with python >= 3.10 (#162)
PhMueller Sep 28, 2022
45b1eb0
Speed up CI testing (#163)
PhMueller Sep 28, 2022
3b2fa7b
Refactor PR (#164)
PhMueller Dec 15, 2022
ebff935
Nas201 - v0.0.6 (#165)
PhMueller Dec 15, 2022
4414e3d
Nas101 + Nas1shot1 v0.0.5 - Multi Objective (#166)
PhMueller Dec 15, 2022
47f8b5b
MO CNN and MO Adult v0.0.2 (#167)
PhMueller Dec 15, 2022
46f719d
Towards compatibility with windows os. (#170)
PhMueller Feb 22, 2023
236e542
Raw YAHPO Benchmarks (#153)
PhMueller Feb 22, 2023
36acf0d
Merge branch 'development' into LCs
KEggensperger Mar 15, 2023
24 changes: 15 additions & 9 deletions .github/workflows/run_tests.yml
@@ -2,7 +2,7 @@
 
 name: Test Pull Requests
 
-on: [push, pull_request]
+on: [push]
 
 jobs:
   Tests:
@@ -11,36 +11,42 @@ jobs:
     strategy:
       matrix:
         include:
-          - python-version: 3.7
+          - python-version: "3.7"
            DISPLAY_NAME: "Singularity Tests + CODECOV"
            RUN_TESTS: true
            USE_SINGULARITY: true
            SINGULARITY_VERSION: "3.8"
            RUN_CODECOV: true
 
-          - python-version: 3.7
+          - python-version: "3.7"
            DISPLAY_NAME: "Codestyle"
            RUN_CODESTYLE: true
            USE_SINGULARITY: false
 
-          - python-version: 3.7
+          - python-version: "3.7"
            DISPLAY_NAME: "Singularity Container Examples"
            RUN_CONTAINER_EXAMPLES: true
            USE_SINGULARITY: true
            SINGULARITY_VERSION: "3.8"
 
-          - python-version: 3.7
+          - python-version: "3.7"
            DISPLAY_NAME: "Local Examples"
            RUN_LOCAL_EXAMPLES: true
            USE_SINGULARITY: false
 
-          - python-version: 3.8
+          - python-version: "3.8"
            DISPLAY_NAME: "Singularity Tests"
            RUN_TESTS: true
            USE_SINGULARITY: true
            SINGULARITY_VERSION: "3.8"
 
-          - python-version: 3.9
+          - python-version: "3.9"
            DISPLAY_NAME: "Singularity Tests"
            RUN_TESTS: true
            USE_SINGULARITY: true
            SINGULARITY_VERSION: "3.8"
 
+          - python-version: "3.10"
+           DISPLAY_NAME: "Singularity Tests"
+           RUN_TESTS: true
+           USE_SINGULARITY: true
@@ -63,7 +69,7 @@ jobs:
       - name: Set up Python ${{ matrix.python-version }}
         uses: actions/setup-python@v2
         with:
-          python-version: ${{ matrix.python-version }}
+          python-version: "${{ matrix.python-version }}"
       - name: Set up Go for Singularity
         if: matrix.USE_SINGULARITY == true
         uses: actions/setup-go@v2
@@ -78,4 +84,4 @@
           python -m pip install --upgrade pip
           chmod +x ci_scripts/install.sh && source ./ci_scripts/install.sh
       - name: Run Tests
-        run: chmod +x ci_scripts/script.sh && source ./ci_scripts/script.sh
\ No newline at end of file
+        run: chmod +x ci_scripts/script.sh && source ./ci_scripts/script.sh
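
Why the quotes matter: YAML parses an unquoted 3.10 as the float 3.1, so the matrix would silently target the wrong interpreter once Python 3.10 was added. A minimal sketch of the pitfall (illustration only, not part of this PR; assumes PyYAML is available):

```python
# YAML reads bare version numbers as floats: 3.10 collapses to 3.1.
# Quoting the value, as the workflow now does, keeps it a string.
import yaml

print(yaml.safe_load("python-version: 3.10"))    # {'python-version': 3.1}
print(yaml.safe_load('python-version: "3.10"'))  # {'python-version': '3.10'}
```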
16 changes: 16 additions & 0 deletions README.md
@@ -149,3 +149,19 @@ See whether in `~/.singularity/instances/sing/$HOSTNAME/*/` there is a file that
 
 **Note:** If you are looking for a different or older version of our benchmarking library, you might be looking for
 [HPOlib1.5](https://github.com/automl/HPOlib1.5)
+
+## Reference
+
+If you use HPOBench, please cite the following paper:
+
+```bibtex
+@inproceedings{
+eggensperger2021hpobench,
+title={{HPOB}ench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for {HPO}},
+author={Katharina Eggensperger and Philipp M{\"u}ller and Neeratyoy Mallik and Matthias Feurer and Rene Sass and Aaron Klein and Noor Awad and Marius Lindauer and Frank Hutter},
+booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
+year={2021},
+url={https://openreview.net/forum?id=1k4rJYEwda-}
+}
+```
+
33 changes: 26 additions & 7 deletions ci_scripts/install.sh
@@ -4,14 +4,24 @@ install_packages=""
 
 if [[ "$RUN_TESTS" == "true" ]]; then
     echo "Install tools for testing"
-    install_packages="${install_packages}xgboost,pytest,test_paramnet,test_tabular_datamanager,"
+    install_packages="${install_packages}pytest,test_tabular_datamanager,"
     pip install codecov
 
-    # The param net benchmark does not work with a scikit-learn version != 0.23.2. (See notes in the benchmark)
-    # To make sure that no newer version is installed, we install it before the other requirements.
-    # Since we are not using a "--upgrade" option later on, pip skips to install another scikit-learn version.
-    echo "Install the right scikit-learn function for the param net tests."
-    pip install --upgrade scikit-learn==0.23.2
+    PYVERSION=$(python -V 2>&1 | sed 's/.* \([0-9]\).\([0-9]*\).*/\1\2/')
+    if [[ "${PYVERSION}" != "310" ]]; then
+        # The param net benchmark does not work with a scikit-learn version != 0.23.2. (See notes in the benchmark)
+        # To make sure that no newer version is installed, we install it before the other requirements.
+        # Since we are not using a "--upgrade" option later on, pip skips to install another scikit-learn version.
+        echo "Install the right scikit-learn function for the param net tests."
+        pip install --upgrade scikit-learn==0.23.2
+        install_packages="${install_packages}xgboost,test_paramnet,"
+    else
+        echo "Skip installing the extra paramnet tests."
+        # For 3.10, we need a different pandas version - this comes as a requirement for the old xgboost benchmark.
+        # building pandas<=1.5.0 does not work with 3.10 anymore. -> install a different version.
+        install_packages="${install_packages}xgboost_310,"
+    fi
 
 else
     echo "Skip installing tools for testing"
 fi
@@ -35,7 +45,16 @@ if [[ "$RUN_LOCAL_EXAMPLES" == "true" ]]; then
     echo "Install packages for local examples"
     echo "Install swig"
     sudo apt-get update && sudo apt-get install -y build-essential swig
-    install_packages="${install_packages}xgboost,"
+
+    PYVERSION=$(python -V 2>&1 | sed 's/.* \([0-9]\).\([0-9]*\).*/\1\2/')
+    if [[ "${PYVERSION}" != "310" ]]; then
+        # For 3.10, we need a different pandas version - this comes as a requirement for the old xgboost benchmark.
+        # building pandas<=1.5.0 does not work with 3.10 anymore. -> install a different version.
+        install_packages="${install_packages}xgboost,"
+    else
+        install_packages="${install_packages}xgboost_310,"
+    fi
+
 else
     echo "Skip installing packages for local examples"
 fi
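
The sed pipeline above compresses the interpreter version into a string such as 310, which then selects between the xgboost and xgboost_310 extras. A rough Python equivalent of that check (illustration only; the CI itself uses the shell version above):

```python
# Mirror of the PYVERSION logic in ci_scripts/install.sh:
# "Python 3.10.4" -> "310"; anything else keeps the pinned xgboost extras.
import sys

pyversion = f"{sys.version_info.major}{sys.version_info.minor}"
extra = "xgboost" if pyversion != "310" else "xgboost_310"
print(f"Would install extras group: {extra}")
```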
2 changes: 1 addition & 1 deletion extra_requirements/nasbench_1shot1.json
@@ -1,3 +1,3 @@
 {
-  "nasbench_1shot1": ["tensorflow==1.15.0","matplotlib","seaborn", "networkx", "tqdm"]
+  "nasbench_1shot1": ["protobuf==3.20.1", "tensorflow==1.15.0", "matplotlib", "seaborn", "networkx", "tqdm"]
 }
2 changes: 1 addition & 1 deletion extra_requirements/tests.json
@@ -2,5 +2,5 @@
   "codestyle": ["pycodestyle","flake8","pylint"],
   "pytest": ["pytest>=4.6","pytest-cov"],
   "test_paramnet": ["tqdm", "scikit-learn==0.23.2"],
-  "test_tabular_datamanager": ["pyarrow", "fastparquet"]
+  "test_tabular_datamanager": ["tqdm","pyarrow", "fastparquet"]
 }
3 changes: 2 additions & 1 deletion extra_requirements/xgboost.json
@@ -1,3 +1,4 @@
 {
-  "xgboost": ["xgboost==0.90","pandas>=1.0.0,<1.1.5","openml==0.10.2","scikit-learn>=0.18.1"]
+  "xgboost": ["xgboost==0.90","pandas>=1.0.0,<1.1.5","openml==0.10.2","scikit-learn>=0.18.1"],
+  "xgboost_310": ["xgboost","pandas","openml==0.10.2","scikit-learn>=0.18.1"]
 }
3 changes: 2 additions & 1 deletion extra_requirements/yahpo_gym.json
@@ -1,3 +1,4 @@
 {
-  "yahpo_gym": ["yahpo_gym@git+https://github.com/pfistfl/yahpo_gym#egg=yahpo_gym&subdirectory=yahpo_gym"]
+  "yahpo_gym": ["yahpo_gym@git+https://github.com/pfistfl/yahpo_gym#egg=yahpo_gym&subdirectory=yahpo_gym"],
+  "yahpo_gym_raw": ["yahpo_gym@git+https://github.com/pfistfl/yahpo_gym#egg=yahpo_gym&subdirectory=yahpo_gym", "rpy2>=3.5.0", "openml==0.10.2", "gitpython>=3.1"]
 }
105 changes: 32 additions & 73 deletions hpobench/abstract_benchmark.py
@@ -1,20 +1,20 @@
 """ Base-class of all benchmarks """
 
 import abc
-from typing import Union, Dict, List, Tuple
 import functools
 
 import logging
+from typing import Union, Dict, List, Tuple
 
 import ConfigSpace
 import numpy as np
 
 from ConfigSpace.util import deactivate_inactive_hyperparameters
 
 from hpobench.util import rng_helper
 
 logger = logging.getLogger('AbstractBenchmark')
 
 
-class AbstractBenchmark(abc.ABC, metaclass=abc.ABCMeta):
+class _BaseAbstractBenchmark(abc.ABC, metaclass=abc.ABCMeta):
@@ -34,7 +34,7 @@ def __init__(self, rng: Union[int, np.random.RandomState, None] = None, **kwargs
             np.random.RandomState with seed `rng` is created. If type is None,
             create a new random state.
         """
-
+        super(_BaseAbstractBenchmark, self).__init__(**kwargs)
         self.rng = rng_helper.get_rng(rng=rng)
         self.configuration_space = self.get_configuration_space(self.rng.randint(0, 10000))
         self.fidelity_space = self.get_fidelity_space(self.rng.randint(0, 10000))
@@ -210,20 +210,14 @@ def _check_and_cast_fidelity(fidelity: Union[dict, ConfigSpace.Configuration, No
             fidelity_space.check_configuration(fidelity)
         return fidelity
 
-    @staticmethod
-    def _check_return_values(return_values: Dict) -> Dict:
-        """
-        The return values should contain the fields `function_value` and `cost`.
-        """
-        assert 'function_value' in return_values.keys()
-        assert 'cost' in return_values.keys()
-
-        return return_values
-
     def __call__(self, configuration: Dict, **kwargs) -> float:
         """ Provides interface to use, e.g., SciPy optimizers """
         return self.objective_function(configuration, **kwargs)['function_value']
 
+    @staticmethod
+    def _check_return_values(return_values: Dict) -> Dict:
+        raise NotImplementedError()
+
     @staticmethod
     @abc.abstractmethod
     def get_configuration_space(seed: Union[int, None] = None) -> ConfigSpace.ConfigurationSpace:
@@ -269,74 +263,39 @@ def get_meta_information() -> Dict:
         raise NotImplementedError()
 
 
-class AbstractMultiObjectiveBenchmark(AbstractBenchmark):
+class AbstractSingleObjectiveBenchmark(_BaseAbstractBenchmark):
     """
-    Abstract Benchmark class for multi-objective benchmarks.
-    The only purpose of this class is to point out to users that this benchmark returns multiple
-    objective function values.
+    Abstract Benchmark class for single-objective benchmarks.
+    This corresponds to the old AbstractBenchmark class.
+
+    The only purpose of this class is to point out to users that this benchmark returns only a single
+    objective function value.
 
     When writing a benchmark, please make sure to inherit from the correct abstract class.
     """
-    @abc.abstractmethod
-    def objective_function(self, configuration: Union[ConfigSpace.Configuration, Dict],
-                           fidelity: Union[Dict, ConfigSpace.Configuration, None] = None,
-                           rng: Union[np.random.RandomState, int, None] = None,
-                           **kwargs) -> Dict:
-        """
-        Objective function.
-
-        Override this function to provide your multi-objective benchmark function. This
-        function will be called by one of the evaluate functions. For
-        flexibility, you have to return a dictionary with the only mandatory
-        key being `function_values`, the objective function values for the
-        `configuration` which was passed. By convention, all benchmarks are
-        minimization problems.
-
-        `function_value` is a dictionary that contains all available criteria.
+    @staticmethod
+    def _check_return_values(return_values: Dict) -> Dict:
+        """
+        The return values should contain the fields `function_value` and `cost`.
+        """
+        assert 'function_value' in return_values.keys()
+        assert 'cost' in return_values.keys()
+        return return_values
 
-        Parameters
-        ----------
-        configuration : Dict
-        fidelity: Dict, None
-            Fidelity parameters, check get_fidelity_space(). Uses default (max) value if None.
-        rng : np.random.RandomState, int, None
-            It might be useful to pass a `rng` argument to the function call to
-            bypass the default "seed" generator. Only using the default random
-            state (`self.rng`) could lead to an overfitting towards the
-            `self.rng`'s seed.
-
-        Returns
-        -------
-        Dict
-            Must contain at least the key `function_value` and `cost`.
-            Note that `function_value` should be a Dict here.
-        """
-        raise NotImplementedError()
+# Ensure compatibility with older versions of the HPOBench
+AbstractBenchmark = AbstractSingleObjectiveBenchmark
 
-    @abc.abstractmethod
-    def objective_function_test(self, configuration: Union[ConfigSpace.Configuration, Dict],
-                                fidelity: Union[Dict, ConfigSpace.Configuration, None] = None,
-                                rng: Union[np.random.RandomState, int, None] = None,
-                                **kwargs) -> Dict:
-        """
-        If there is a different objective function for offline testing, e.g
-        testing a machine learning on a hold extra test set instead
-        on a validation set override this function here.
-
-        Parameters
-        ----------
-        configuration : Dict
-        fidelity: Dict, None
-            Fidelity parameters, check get_fidelity_space(). Uses default (max) value if None.
-        rng : np.random.RandomState, int, None
-            see :py:func:`~HPOBench.abstract_benchmark.objective_function`
+class AbstractMultiObjectiveBenchmark(_BaseAbstractBenchmark):
+    """
+    Abstract Benchmark class for multi-objective benchmarks.
+    The only purpose of this class is to point out to users that this benchmark returns multiple
+    objective function values.
 
-        Returns
-        -------
-        Dict
-            Must contain at least the key `function_value` and `cost`.
-        """
-        raise NotImplementedError()
+    When writing a benchmark, please make sure to inherit from the correct abstract class.
+    """
 
     @staticmethod
     def _check_return_values(return_values: Dict) -> Dict:
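
The net effect of this refactoring: shared logic lives in _BaseAbstractBenchmark, AbstractBenchmark survives as an alias of AbstractSingleObjectiveBenchmark, and only _check_return_values differs between the single- and multi-objective variants. A minimal sketch of a benchmark written against the new base class (not from this PR; the class, hyperparameter, and return values are invented for illustration):

```python
from typing import Dict, Union

import ConfigSpace

from hpobench.abstract_benchmark import AbstractSingleObjectiveBenchmark


class QuadraticBenchmark(AbstractSingleObjectiveBenchmark):
    """Toy single-objective benchmark: minimize (x - 0.5) ** 2."""

    def objective_function(self, configuration: Union[ConfigSpace.Configuration, Dict],
                           fidelity=None, rng=None, **kwargs) -> Dict:
        x = configuration['x']
        # `function_value` and `cost` are the two fields _check_return_values asserts.
        return {'function_value': (x - 0.5) ** 2, 'cost': 0.0}

    def objective_function_test(self, configuration, fidelity=None, rng=None, **kwargs) -> Dict:
        return self.objective_function(configuration, fidelity=fidelity, rng=rng, **kwargs)

    @staticmethod
    def get_configuration_space(seed: Union[int, None] = None) -> ConfigSpace.ConfigurationSpace:
        cs = ConfigSpace.ConfigurationSpace(seed=seed)
        cs.add_hyperparameter(ConfigSpace.UniformFloatHyperparameter('x', lower=0.0, upper=1.0))
        return cs

    @staticmethod
    def get_fidelity_space(seed: Union[int, None] = None) -> ConfigSpace.ConfigurationSpace:
        return ConfigSpace.ConfigurationSpace(seed=seed)  # no fidelities in this toy

    @staticmethod
    def get_meta_information() -> Dict:
        return {'name': 'QuadraticBenchmark'}
```

Calling an instance directly still routes through __call__ and returns only function_value, so SciPy-style optimizers keep working unchanged.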
22 changes: 0 additions & 22 deletions hpobench/benchmarks/ml/__init__.py
@@ -1,22 +0,0 @@
-from hpobench.benchmarks.ml.histgb_benchmark import HistGBBenchmark, HistGBBenchmarkBB, HistGBBenchmarkMF
-from hpobench.benchmarks.ml.lr_benchmark import LRBenchmark, LRBenchmarkBB, LRBenchmarkMF
-from hpobench.benchmarks.ml.nn_benchmark import NNBenchmark, NNBenchmarkBB, NNBenchmarkMF
-from hpobench.benchmarks.ml.rf_benchmark import RandomForestBenchmark, RandomForestBenchmarkBB, \
-    RandomForestBenchmarkMF
-from hpobench.benchmarks.ml.svm_benchmark import SVMBenchmark, SVMBenchmarkBB, SVMBenchmarkMF
-from hpobench.benchmarks.ml.tabular_benchmark import TabularBenchmark
-
-try:
-    from hpobench.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark, XGBoostBenchmarkBB, XGBoostBenchmarkMF
-except ImportError:
-    pass
-
-
-__all__ = ['HistGBBenchmark', 'HistGBBenchmarkBB', 'HistGBBenchmarkMF',
-           'LRBenchmark', 'LRBenchmarkBB', 'LRBenchmarkMF',
-           'NNBenchmark', 'NNBenchmarkBB', 'NNBenchmarkMF',
-           'RandomForestBenchmark', 'RandomForestBenchmarkBB', 'RandomForestBenchmarkMF',
-           'SVMBenchmark', 'SVMBenchmarkBB', 'SVMBenchmarkMF',
-           'TabularBenchmark',
-           'XGBoostBenchmark', 'XGBoostBenchmarkBB', 'XGBoostBenchmarkMF',
-           ]
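
With the package-level re-exports gone, downstream code presumably imports benchmarks from their defining modules, whose paths are visible in the deleted file above. A hedged sketch (the task_id value is an arbitrary OpenML task; constructing the benchmark fetches data):

```python
# Import from the defining module instead of the removed hpobench.benchmarks.ml exports.
from hpobench.benchmarks.ml.rf_benchmark import RandomForestBenchmark

# `task_id` was marked required earlier in this PR; 31 is an arbitrary OpenML task id.
bench = RandomForestBenchmark(task_id=31)
config = bench.get_configuration_space(seed=1).sample_configuration()
result = bench.objective_function(configuration=config)
print(result['function_value'], result['cost'])
```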