Introduce ParamBasedTestProblem for benchmarking
#2675
Conversation
This pull request was exported from Phabricator. Differential Revision: D60996475
Summary: Pull Request resolved: facebook#2675

Context: In a future refactor that will enable more flexible and powerful best-point functionality, every BenchmarkProblem's runner will be able to produce an "oracle" value (possibly the ground truth) for any arm, in-sample or not, with a function like `BenchmarkRunner.evaluate_oracle(arm=arm)`, with the problem handling computation and the runner formatting results. However, the current `BenchmarkRunner` and `BenchmarkMetric` setup doesn't cover every benchmark. Consolidating on `BenchmarkRunner` and `BenchmarkMetric` will enable the refactor, make it easier to universalize functionality like handling of constraints, noise, and inference regret, and will also allow for deleting some LOC for more custom problems.

Current `BenchmarkRunner`s only handle problems that can consume tensor-valued arguments: BoTorch synthetic problems and surrogate problems. This isn't a good fit for problems like Jenatton that have a hierarchical search space and can have some parameters not passed. Because Ax always passes parameters and only sometimes represents them as tensors, a `TParameterization` is a more natural abstraction for handling parameters than a tensor.

This PR:
- Introduces `ParamBasedTestProblem`, which is like a BoTorch synthetic test problem but consumes a `TParameterization` rather than a tensor
- Adds `ParamBasedProblemRunner`, which shares a base class `SyntheticProblemRunner` and most functionality with `BotorchTestProblemRunner` (so it is a `BenchmarkRunner` and supports both observed and unobserved noise)

Reviewed By: Balandat

Differential Revision: D60996475
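To make the tensor-versus-`TParameterization` distinction concrete, here is a minimal sketch of a problem with a hierarchical search space; the class and parameter names are illustrative assumptions, not code from this diff:

```python
from dataclasses import dataclass
from typing import Union

# In Ax, TParameterization maps parameter names to values:
# Dict[str, Union[None, str, bool, float, int]].
TParameterization = dict[str, Union[None, str, bool, float, int]]


@dataclass
class ToyHierarchicalProblem:
    """Hypothetical ParamBasedTestProblem-style class (names assumed)."""

    noise_std: float = 0.0

    def evaluate_true(self, params: TParameterization) -> float:
        # Only the active branch's parameters are read; inactive ones
        # may be absent from `params` entirely.
        if params["branch"] == "left":
            return float(params["x_left"]) ** 2
        return abs(float(params["x_right"])) + 1.0


problem = ToyHierarchicalProblem()
# "x_right" is omitted entirely because the left branch is active.
y = problem.evaluate_true({"branch": "left", "x_left": 0.5})
```

A tensor-based problem would need a value for every dimension even when a branch is inactive; a `TParameterization` can simply omit the inactive parameters.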
Force-pushed from 5da6a63 to bb3adda
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

@@           Coverage Diff            @@
##             main    #2675    +/-   ##
========================================
  Coverage   95.29%   95.29%
========================================
  Files         495      495
  Lines       47832    47867      +35
========================================
+ Hits        45582    45616      +34
- Misses       2250     2251       +1

☔ View full report in Codecov by Sentry.
This pull request was exported from Phabricator. Differential Revision: D60996475
Force-pushed from bb3adda to fd05117
This pull request was exported from Phabricator. Differential Revision: D60996475
Force-pushed from fd05117 to 4a1358f
…book#2674)

Summary: Pull Request resolved: facebook#2674

Context: This is an alternative to D61431979. Note: there are benchmarks that do not use `BenchmarkRunner`, but I plan to have them all use `BenchmarkRunner` in the future.

`BenchmarkRunner` technically supports benchmarks without a ground truth, but that functionality is never used, and there aren't any Ax benchmarks that are noisy *and* lack a ground truth. It is not conceptually clear how such a case should be benchmarked, so it is better not to over-engineer for a need that may never arise. Instead, benchmarks that lack a ground truth but are deterministic can be treated as noiseless problems with a ground truth, and we can reap (remove) support for problems without a ground truth.

Also, `BenchmarkRunner` has some methods that must either be defined or not defined depending on whether there is a ground truth. They can't be abstract because they will not always be defined. With this change, we can make the ground-truth methods abstract and get rid of the rest.

This PR:
- Rewrites docstrings
- Removes the method `get_Y_Ystd`
- Makes `get_Y_true` and other methods abstract
- Removes functionality for the case where `get_Y_true` raises a `NotImplementedError`

Reviewed By: ItsMrLin

Differential Revision: D61483962
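As a rough sketch of the direction this describes: only `get_Y_true` and `get_Y_Ystd` are named in the message above, so the class name and noise helper below are assumptions for illustration, not the actual Ax API:

```python
from abc import ABC, abstractmethod

import torch
from torch import Tensor


class GroundTruthRunnerSketch(ABC):
    """Hypothetical simplified runner base class (name assumed)."""

    @abstractmethod
    def get_Y_true(self, params: dict) -> Tensor:
        """Return the noiseless objective value(s) for one arm."""

    def get_noisy_Y(self, params: dict, noise_std: float) -> Tensor:
        # A noisy observation is ground truth plus Gaussian noise;
        # deterministic problems just use noise_std = 0.0.
        y_true = self.get_Y_true(params)
        return y_true + noise_std * torch.randn_like(y_true)


class SquareProblemRunner(GroundTruthRunnerSketch):
    def get_Y_true(self, params: dict) -> Tensor:
        return torch.tensor([float(params["x"]) ** 2])


runner = SquareProblemRunner()
y = runner.get_noisy_Y({"x": 2.0}, noise_std=0.1)
```

Making `get_Y_true` abstract means a subclass that forgets to define it fails at instantiation time, rather than through a runtime `NotImplementedError` fallback.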
This pull request was exported from Phabricator. Differential Revision: D60996475
Force-pushed from 4a1358f to 2c7a2ec
This pull request has been merged in e746241.