Wrap the gradient-free optimizers from nevergrad in optimagic.
Nevergrad implements the following algorithms (see the usage sketch after the list):
- `NgIohTuned` is a "meta"-optimizer which adapts to the provided settings (budget, number of workers, parametrization) and should therefore be a good default.
- `TwoPointsDE` is excellent in many cases, including very high `num_workers`.
- `PortfolioDiscreteOnePlusOne` is excellent in discrete or mixed settings when high precision on parameters is not relevant; it is possibly a good choice for hyperparameter tuning.
- `OnePlusOne` is a simple, robust method for continuous parameters with `num_workers` < 8.
- `CMA` is excellent for control (e.g. neurocontrol) when the environment is not very noisy (`num_workers` ~50 ok) and when the budget is large (e.g. 1000 x the dimension).
- `TBPSA` is excellent for problems corrupted by noise, in particular overparameterized (neural) ones; very high `num_workers` ok.
- `PSO` is excellent in terms of robustness; high `num_workers` ok.
- `ScrHammersleySearchPlusMiddlePoint` is excellent for super parallel cases (fully one-shot, i.e. `num_workers` = budget) or for very multimodal cases (such as some of our MLDA problems); don't use softmax with this optimizer.
- `RandomSearch` is the classical random search baseline; don't use softmax with this optimizer.
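
A wrapper would most likely build on nevergrad's ask/tell interface, because that keeps optimagic in control of criterion evaluations (history collection, batching, parallelization). Here is a minimal sketch of that interface using `OnePlusOne`; the sphere criterion, dimension, and budget are just placeholders for illustration:

```python
import numpy as np
import nevergrad as ng


def sphere(x):
    """Toy criterion: sum of squares, minimized at the zero vector."""
    return float(np.sum(x**2))


# OnePlusOne on a 3-dimensional continuous problem with a budget of 200 evaluations.
optimizer = ng.optimizers.OnePlusOne(parametrization=3, budget=200)

# ask/tell loop: the caller evaluates the criterion itself instead of handing
# it over via optimizer.minimize, which is the pattern a wrapper needs.
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    loss = sphere(candidate.value)
    optimizer.tell(candidate, loss)

recommendation = optimizer.provide_recommendation()
print(recommendation.value)  # should be close to [0, 0, 0]
```

The same loop works for every optimizer listed above; only the class picked from `ng.optimizers` changes.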
In the long run we want to wrap all of them, but if you are tackling this as your first issue you should focus on one. In that case, please comment below which optimizer you are going to work on so we don't duplicate efforts.
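
For orientation, this is roughly what the user-facing call could look like once an optimizer is wrapped, assuming optimagic's standard `minimize(fun, params, algorithm=...)` interface; the algorithm string `"nevergrad_oneplusone"` is a placeholder, the actual name will be decided in the PR:

```python
import numpy as np
import optimagic as om


def sphere(params):
    return np.sum(params**2)


# "nevergrad_oneplusone" is a hypothetical algorithm name, not yet available.
res = om.minimize(
    fun=sphere,
    params=np.arange(5),
    algorithm="nevergrad_oneplusone",
)
print(res.params)
```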