Results can be consulted at https://benchopt.github.io/results/benchmark_bilevel.html
BenchOpt is a package to simplify and make more transparent and reproducible the comparisons of optimization algorithms. This benchmark is dedicated to solvers for bilevel optimization problems of the form

$$\min_{x} f(x, z^*(x)) \quad \text{with} \quad z^*(x) = \arg\min_{z} g(x, z),$$

where $f$ is the outer function, $g$ is the inner function, and both are functions of the two variables $x$ and $z$.
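To make the nested structure concrete, here is a minimal sketch that solves a bilevel problem naively by nesting two scipy optimizers. The quadratic $f$ and $g$ below are toy assumptions for illustration only, not objectives from this benchmark:

```python
# Toy bilevel problem solved by nesting two optimizers (illustration only).
import numpy as np
from scipy.optimize import minimize

def g(x, z):
    # Inner objective g(x, z): a toy quadratic in z, parameterized by x.
    return 0.5 * np.sum((z - x) ** 2) + 0.1 * np.sum(z ** 2)

def z_star(x):
    # z*(x) = argmin_z g(x, z), computed numerically.
    return minimize(lambda z: g(x, z), x0=np.zeros_like(x)).x

def value_function(x):
    # h(x) = f(x, z*(x)), with a toy outer loss f(x, z) = 0.5 ||z - 1||^2.
    z = z_star(x)
    return 0.5 * np.sum((z - 1.0) ** 2)

x_opt = minimize(value_function, x0=np.zeros(2), method="Nelder-Mead").x
print("outer solution x:", x_opt)
```

The solvers in this benchmark avoid this naive nesting; the sketch only illustrates the problem structure.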
This benchmark currently implements two bilevel optimization problems: regularization selection and hyper data cleaning.
In the regularization selection problem, the inner function $g$ is defined by

$$g(x, z) = \frac{1}{n} \sum_{i=1}^{n} \ell(d_i; z) + \mathcal{R}(x, z),$$

where $d_1, \dots, d_n$ are training data samples, $z$ are the parameters of the machine learning model, $\ell$ measures how well the model $z$ predicts the data point $d_i$, and $\mathcal{R}(x, z)$ is a regularization term parameterized by the regularization strengths $x$, which promotes a simpler model.

The outer function $f$ is the unregularized loss on unseen data:

$$f(x, z) = \frac{1}{m} \sum_{j=1}^{m} \ell(d'_j; z),$$

where the $d'_1, \dots, d'_m$ are new samples from the same distribution as the training data.
There are currently two datasets for this regularization selection problem.
Covtype

Homepage: https://archive.ics.uci.edu/dataset/31/covertype

This is a binary logistic regression problem, where the data is of the form $d_i = (a_i, y_i)$ with $a_i \in \mathbb{R}^p$ the features and $y_i \in \{-1, 1\}$ the binary target. The loss is $\ell(d_i; z) = \log(1 + \exp(-y_i a_i^\top z))$, and the regularization is $\mathcal{R}(x, z) = \frac{1}{2} \sum_{j=1}^{p} \exp(x_j) z_j^2$: each coefficient of $z$ is regularized independently with strength $\exp(x_j)$.
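As a sketch (not the benchmark's actual code), the inner and outer objectives for this problem could be written as follows, assuming NumPy arrays A (train features), y (train labels in {-1, 1}), A_val and y_val (validation data):

```python
import numpy as np

def inner_g(x, z, A, y):
    # g(x, z): mean logistic loss plus per-coefficient ridge penalty.
    losses = np.log1p(np.exp(-y * (A @ z)))   # ell(d_i; z)
    reg = 0.5 * np.sum(np.exp(x) * z ** 2)    # R(x, z)
    return losses.mean() + reg

def outer_f(x, z, A_val, y_val):
    # f(x, z): unregularized logistic loss on held-out data
    # (it does not depend on x directly).
    return np.log1p(np.exp(-y_val * (A_val @ z))).mean()
```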
Ijcnn1

Homepage: https://www.openml.org/search?type=data&sort=runs&id=1575&status=active

This is a multiclass logistic regression problem, where the data is of the form $d_i = (a_i, y_i)$ with $a_i \in \mathbb{R}^p$ the features and $y_i \in \{1, \dots, k\}$ the integer target, $k$ being the number of classes. The loss is $\ell(d_i; z) = \mathrm{CrossEntropy}(z a_i, y_i)$, where $z$ is now a $k \times p$ matrix, and the regularization is $\mathcal{R}(x, z) = \frac{1}{2} \sum_{j=1}^{k} \exp(x_j) \|z_j\|_2^2$: each row of $z$ is regularized independently with strength $\exp(x_j)$.
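A sketch of the corresponding inner objective, assuming $z$ is a (k, p) matrix, A an (n, p) feature matrix, and y integer labels in {0, ..., k-1} (names are illustrative, not the benchmark's API):

```python
import numpy as np

def inner_g_multiclass(x, z, A, y):
    logits = A @ z.T                             # scores z a_i, shape (n, k)
    logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_prob[np.arange(len(y)), y]         # per-sample cross-entropy
    reg = 0.5 * np.sum(np.exp(x) * (z ** 2).sum(axis=1))  # row-wise penalty
    return ce.mean() + reg
```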
The hyper data cleaning problem was first introduced by [Fra2017]. In this problem, the data is the MNIST dataset. The training set has been corrupted: with probability $p$, the label of each training sample is replaced by a random label between 0 and 9. We do not know beforehand which samples have been corrupted. The goal is to fit a model on the corrupted training data that performs well on an uncorrupted validation set. To do so, a weight per training sample is learned jointly with the model parameters; ideally, the weights would be 0 for corrupted samples and 1 otherwise. The model is a softmax regression.

The inner function is defined as

$$g(x, z) = \frac{1}{n} \sum_{i=1}^{n} \sigma(x_i)\, \ell(d_i; z) + C_r \|z\|^2,$$

where the $d_i$ are the corrupted training samples, $\ell$ is the cross-entropy loss, $\sigma$ is the sigmoid function, and $C_r$ is a small regularization constant.

The outer function is defined as

$$f(x, z) = \frac{1}{m} \sum_{j=1}^{m} \ell(d'_j; z),$$

where the $d'_j$ are uncorrupted validation samples.
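A sketch of the label corruption and of the sigmoid-weighted inner objective; `per_sample_loss` stands for any per-sample cross-entropy, and the names and signatures here are assumptions, not the benchmark's API:

```python
import numpy as np

def corrupt_labels(y, p, rng):
    # With probability p, replace each label by a random digit in 0..9.
    y = y.copy()
    mask = rng.random(len(y)) < p
    y[mask] = rng.integers(0, 10, size=mask.sum())
    return y

def inner_g(x, z, A, y, C_r, per_sample_loss):
    # g(x, z): sigmoid-weighted training losses plus a small ridge term.
    weights = 1.0 / (1.0 + np.exp(-x))   # sigma(x_i), one weight per sample
    return np.mean(weights * per_sample_loss(z, A, y)) + C_r * np.sum(z ** 2)
```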
This benchmark can be run using the following commands:
$ pip install -U benchopt
$ git clone https://github.com/benchopt/benchmark_bilevel
$ benchopt run benchmark_bilevel
Apart from the problem, options can be passed to benchopt run to restrict the benchmark to some solvers or datasets, e.g.:
$ benchopt run benchmark_bilevel -s solver1 -d dataset2 --max-runs 10 --n-repetitions 10
You can also use config files to set up the benchmark run:
$ benchopt run benchmark_bilevel --config config/X.yml
where X.yml is a config file; see https://benchopt.github.io/index.html#run-a-benchmark for an example of a config file. Note that such a run may launch a large grid search. When available, you can instead use the file X_best_params.yml to launch an experiment with a single set of parameters for each solver.
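As an illustration, a config file could look like the following minimal sketch, where the keys mirror the long CLI options of benchopt run; the solver and dataset names below are placeholders, not this benchmark's actual identifiers, so refer to the benchopt documentation linked above for the authoritative format:

```yaml
# Hypothetical config file with placeholder names.
solver:
  - solver1
dataset:
  - dataset2
max-runs: 10
n-repetitions: 10
```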
Use benchopt run -h for more details about these options, or visit https://benchopt.github.io/api.html.
If you use this benchmark in your research project, please cite the following paper:
@inproceedings{saba,
  title = {A Framework for Bilevel Optimization That Enables Stochastic and Global Variance Reduction Algorithms},
  booktitle = {Advances in {{Neural Information Processing Systems}} ({{NeurIPS}})},
  author = {Dagr{\'e}ou, Mathieu and Ablin, Pierre and Vaiter, Samuel and Moreau, Thomas},
  year = {2022}
}
[Fra2017] Franceschi, Luca, et al. "Forward and reverse gradient-based hyperparameter optimization." International Conference on Machine Learning. PMLR, 2017.