Use log_loss as tuning objective #197
base: main
Conversation
Force-pushed 5b66473 to acd5efe (Compare)
Summary:
Pull Request resolved: facebookincubator#197

Use log_loss from sklearn instead of normalized_entropy as the tuning objective. This matches MCGrad's early stopping metric, providing directly comparable outputs between tuning and model training. The log_loss metric is a standard choice for probabilistic predictions and is widely understood.

Differential Revision: D91884587
Force-pushed 0c9ea67 to 80f26fd (Compare)
Summary:
Use log_loss from sklearn instead of normalized_entropy as the tuning objective. This matches MCGrad's early stopping metric, providing directly comparable outputs between tuning and model training. The log_loss metric is a standard choice for probabilistic predictions and is widely understood.

Reviewed By: Lorenzo-Perini

Differential Revision: D91884587
Summary:
Add a `use_model_predictions` parameter to `tune_mcgrad_params`, with `False` as the default. When False, it returns the parameters of the actual observed best trial. When True, it uses the Bayesian surrogate model's predicted best parameters. The default is False because, with few tuning trials, the surrogate model may not be well calibrated and could return suboptimal parameters that don't match the actual best observed trial. Users who want the model-predicted best can set this to True.

Differential Revision: D91889101
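As a rough illustration of the flag's semantics described in that summary (only `tune_mcgrad_params` and `use_model_predictions` come from the summary; the function, trial, and surrogate objects below are hypothetical, not MCGrad's actual API):

```python
# Illustrative sketch only: mirrors the selection behavior described above,
# not MCGrad's actual implementation.

def select_best_params(trials, surrogate_model=None, use_model_predictions=False):
    """Pick tuning parameters after all trials have completed.

    trials: list of (params_dict, observed_log_loss) pairs.
    surrogate_model: object exposing a hypothetical predict_best_params() method.
    """
    if use_model_predictions and surrogate_model is not None:
        # Trust the Bayesian surrogate's predicted optimum. With few trials the
        # surrogate may be poorly calibrated, which is why this is opt-in.
        return surrogate_model.predict_best_params()
    # Default (False): return the parameters of the best trial actually observed.
    best_params, _ = min(trials, key=lambda t: t[1])
    return best_params

# Example: with the default, the second trial's params win (lowest log loss).
trials = [({"learning_rate": 0.1}, 0.31), ({"learning_rate": 0.05}, 0.28)]
print(select_best_params(trials))  # {'learning_rate': 0.05}
```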
Force-pushed 80f26fd to b5ce578 (Compare)
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@           Coverage Diff            @@
##             main     #197    +/-  ##
========================================
  Coverage        ?   94.92%
========================================
  Files           ?        9
  Lines           ?     1753
  Branches        ?        0
========================================
  Hits            ?     1664
  Misses          ?       89
  Partials        ?        0
Summary:
Use log_loss from sklearn instead of normalized_entropy as the tuning objective.
This matches MCGrad's early stopping metric, providing directly comparable outputs
between tuning and model training. The log_loss metric is a standard choice for
probabilistic predictions and is widely understood.
Differential Revision: D91884587
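For context, a minimal sketch of what the objective looks like as a plain wrapper around sklearn's log_loss (the wrapper name and toy data are illustrative; only the use of sklearn.metrics.log_loss is stated by this PR):

```python
import numpy as np
from sklearn.metrics import log_loss

def tuning_objective(y_true, y_pred_proba):
    """Metric minimized during hyperparameter tuning: sklearn's log loss.

    Because MCGrad's early stopping also monitors log loss, tuning results
    are directly comparable to the numbers reported during model training.
    """
    return log_loss(y_true, y_pred_proba)

# Toy example: binary labels and predicted probabilities of the positive class.
y_true = np.array([0, 1, 1, 0, 1])
y_pred_proba = np.array([0.10, 0.80, 0.65, 0.30, 0.90])
print(tuning_objective(y_true, y_pred_proba))  # ~0.24 (lower is better)
```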