Hyperparameter tuning & Experiment tracking #6709
Replies: 2 comments
- I worked with Allegro Trains before, and it works if you can set it up properly. https://allegro.ai/docs/examples/optimization/hyper-parameter-optimization/examples_hyperparam_opt/
- @athenawisdoms I usually use Hydra for configuring my experiments - it currently integrates with three plugins for hyperparameter search: Optuna, Ax and Nevergrad. The advantage is that the whole hyperparameter search is defined in a single config file - you don't need to pollute your pipeline with boilerplate code, and you don't need to write a loop that executes experiments one after the other, as Hydra does that for you. You can see an example in the template I'm developing here: As for analyzing results, I just use a Lightning logger like wandb, neptune or mlflow and group runs in the UI.
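  Roughly, the entry point such a sweep drives can look like the minimal sketch below. Everything in it is an illustrative placeholder, not code from the template above: it assumes a `configs/config.yaml` defining `lr`, `hidden_size`, `batch_size`, `max_epochs` (plus a `hydra/sweeper` section for the plugin you pick), and uses a toy `ToyModule` on random data just to stay self-contained.

  ```python
  # Minimal sketch of a Hydra entry point that the Optuna/Ax/Nevergrad sweeper
  # plugins can drive. The config file, field names (lr, hidden_size, batch_size,
  # max_epochs) and ToyModule are placeholders, not taken from the template.
  import hydra
  import torch
  import pytorch_lightning as pl
  from omegaconf import DictConfig
  from torch.utils.data import DataLoader, TensorDataset


  class ToyModule(pl.LightningModule):
      """Tiny regression model whose hyperparameters come from the Hydra config."""

      def __init__(self, lr: float, hidden_size: int):
          super().__init__()
          self.lr = lr
          self.net = torch.nn.Sequential(
              torch.nn.Linear(8, hidden_size),
              torch.nn.ReLU(),
              torch.nn.Linear(hidden_size, 1),
          )

      def training_step(self, batch, batch_idx):
          x, y = batch
          loss = torch.nn.functional.mse_loss(self.net(x), y)
          self.log("train_loss", loss)
          return loss

      def configure_optimizers(self):
          return torch.optim.Adam(self.parameters(), lr=self.lr)


  @hydra.main(config_path="configs", config_name="config")
  def train(cfg: DictConfig) -> float:
      # Random stand-in data so the sketch is self-contained.
      dataset = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
      loader = DataLoader(dataset, batch_size=cfg.batch_size)

      model = ToyModule(lr=cfg.lr, hidden_size=cfg.hidden_size)
      trainer = pl.Trainer(max_epochs=cfg.max_epochs)
      trainer.fit(model, loader)

      # Returning the metric is what lets the sweeper plugin optimize it;
      # the search space itself is declared in the yaml config, not here.
      return trainer.callback_metrics["train_loss"].item()


  if __name__ == "__main__":
      train()
  ```

  With something like that in place, running the script with `--multirun` lets the configured sweeper launch and schedule the trials, so there is no hand-written search loop in your code.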
- Hi, after I have come up with a model in PyTorch Lightning that I am starting to like, the next step will be to perform hyperparameter tuning. What are some of the preferred solutions for PyTorch Lightning that allow you to:
I've used Ray Tune with vanilla PyTorch before, but I'm still looking for more options.
Thanks!