
Adds NotImplementedError for bug (#2183) in qMultiFidelityLowerBoundMaxValueEntropy class #2193


Closed
wants to merge 2 commits

Conversation

AlexanderMouton
Contributor

Motivation

Serves as a quick "fix" to issue #2183

Have you read the Contributing Guidelines on pull requests?

Yes.

Test Plan

No test plan (other than running all unit tests), since the change only adds a single `if` statement that raises a `NotImplementedError` where applicable.
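
For context, the kind of guard described above might look roughly like the fragment below. This is an illustrative sketch only, not the exact diff; it sits inside the parent class's handling of pending points, which is the "superclass references its subclass" pattern discussed in the review below.

```python
# Illustrative fragment only (placed in the parent class's X_pending handling);
# the exact location and error message in the actual diff may differ.
if X_pending is not None and isinstance(
    self, qMultiFidelityLowerBoundMaxValueEntropy
):
    raise NotImplementedError(
        "X_pending is not currently supported for "
        "qMultiFidelityLowerBoundMaxValueEntropy; see "
        "https://github.com/pytorch/botorch/issues/2183."
    )
```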

Related PRs

None.

`qMultiFidelityLowerBoundMaxValueEntropy` (see pytorch#2183)
@facebook-github-bot added the CLA Signed label Feb 7, 2024
@esantorella self-assigned this Feb 7, 2024
Member

@esantorella left a comment


Thank you for this! Could you add a unit test for this error? BoTorch requires 100% test coverage. The test could be similar to this line and could go here.
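
For illustration, such a test might look roughly like the sketch below. It uses toy data and assumes the quick fix raises the NotImplementedError as soon as pending points are set via set_X_pending; the test class name, data shapes, and exact trigger point are assumptions, not part of this PR.

```python
import torch

from botorch.acquisition.max_value_entropy_search import (
    qMultiFidelityLowerBoundMaxValueEntropy,
)
from botorch.models import SingleTaskGP
from botorch.utils.testing import BotorchTestCase


class TestMFGIBBONXPending(BotorchTestCase):
    def test_X_pending_raises_not_implemented(self) -> None:
        # Toy model and candidate set, just enough to construct the acqf.
        train_X = torch.rand(8, 2, dtype=torch.double)
        train_Y = torch.rand(8, 1, dtype=torch.double)
        model = SingleTaskGP(train_X, train_Y)
        candidate_set = torch.rand(16, 2, dtype=torch.double)
        acqf = qMultiFidelityLowerBoundMaxValueEntropy(
            model=model, candidate_set=candidate_set
        )
        # The quick fix is expected to reject pending points outright.
        with self.assertRaises(NotImplementedError):
            acqf.set_X_pending(torch.rand(2, 2, dtype=torch.double))
```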

I also realized there's a cleaner way of doing this. Rather than raising a NotImplementedError, qMultiFidelityLowerBoundMaxValueEntropy could simply not accept an `X_pending` argument. That's substantially more verbose, but it would avoid the unexpectedness of having a superclass reference its subclass. It would require adding an __init__ method to qMultiFidelityLowerBoundMaxValueEntropy, which could look like this:

    def __init__(
        self,
        model: Model,
        candidate_set: Tensor,
        num_fantasies: int = 16,
        num_mv_samples: int = 10,
        num_y_samples: int = 128,
        posterior_transform: Optional[PosteriorTransform] = None,
        use_gumbel: bool = True,
        maximize: bool = True,
        cost_aware_utility: Optional[CostAwareUtility] = None,
        project: Callable[[Tensor], Tensor] = lambda X: X,
        expand: Callable[[Tensor], Tensor] = lambda X: X,
    ) -> None:
        r"""Single-outcome max-value entropy search acquisition function.

        Args:
            model: A fitted single-outcome model.
            candidate_set: A `n x d` Tensor including `n` candidate points to
                discretize the design space, which will be used to sample the
                max values from their posteriors.
            num_fantasies: Number of fantasies to generate. The higher this
                number the more accurate the model (at the expense of model
                complexity and performance) and it's only used when `X_pending`
                is not `None`.
            num_mv_samples: Number of max value samples.
            num_y_samples: Number of posterior samples at specific design point `X`.
            posterior_transform: A PosteriorTransform. If using a multi-output model,
                a PosteriorTransform that transforms the multi-output posterior into a
                single-output posterior is required.
            use_gumbel: If True, use Gumbel approximation to sample the max values.
            maximize: If True, consider the problem a maximization problem.
            cost_aware_utility: A CostAwareUtility computing the cost-transformed
                utility from a candidate set and samples of increases in utility.
            project: A callable mapping a `batch_shape x q x d` tensor of design
                points to a tensor of the same shape projected to the desired
                target set (e.g. the target fidelities in case of multi-fidelity
                optimization).
            expand: A callable mapping a `batch_shape x q x d` input tensor to
                a `batch_shape x (q + q_e)' x d`-dim output tensor, where the
                `q_e` additional points in each q-batch correspond to
                additional ("trace") observations.
        """
        super().__init__(
            model=model,
            candidate_set=candidate_set,
            num_fantasies=num_fantasies,
            num_mv_samples=num_mv_samples,
            num_y_samples=num_y_samples,
            posterior_transform=posterior_transform,
            use_gumbel=use_gumbel,
            maximize=maximize,
            cost_aware_utility=cost_aware_utility,
            project=project,
            expand=expand,
        )
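
As a quick sanity check of the effect, here is a hypothetical usage sketch with toy data (not code from this PR): with the signature above, passing X_pending at construction fails with an ordinary TypeError, since the keyword is simply no longer accepted.

```python
import torch

from botorch.acquisition.max_value_entropy_search import (
    qMultiFidelityLowerBoundMaxValueEntropy,
)
from botorch.models import SingleTaskGP

# Toy data, just enough to construct the acquisition function.
train_X = torch.rand(8, 3, dtype=torch.double)
train_Y = torch.rand(8, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)
candidate_set = torch.rand(16, 3, dtype=torch.double)

# Constructs exactly as before.
acqf = qMultiFidelityLowerBoundMaxValueEntropy(
    model=model, candidate_set=candidate_set
)

# With the proposed __init__, this would now fail at construction time with
#   TypeError: __init__() got an unexpected keyword argument 'X_pending'
# qMultiFidelityLowerBoundMaxValueEntropy(
#     model=model, candidate_set=candidate_set, X_pending=torch.rand(2, 3)
# )
```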

@facebook-github-bot
Contributor

@esantorella has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Instead of raising a `NotImplementedError` when the pending points are
passed to the `qMultiFidelityLowerBoundMaxValueEntropy` class, we simply
remove `X_pending` as an argument.
@AlexanderMouton
Contributor Author

Hi @esantorella

Thanks for the feedback - that's a much cleaner suggestion that also doesn't require adding unit tests!

Regards,
Alex

@facebook-github-bot
Contributor

@esantorella has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Member

@esantorella left a comment


Thanks!

@facebook-github-bot
Contributor

@esantorella merged this pull request in eef7c96.

stefanpricopie pushed a commit to stefanpricopie/botorch that referenced this pull request Feb 27, 2024
…rBoundMaxValueEntropy class (pytorch#2193)

Summary:

## Motivation

Serves as a quick "fix" to [issue #2183](https://github.com/pytorch/botorch/issues/2183)

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes.

Pull Request resolved: pytorch#2193

Test Plan:
No test plan (other than running all unit tests) as only a single `if` statement that raises a `NotImplementedError` when applicable was added.

## Related PRs

None.

Reviewed By: saitcakmak

Differential Revision: D53517616

Pulled By: esantorella

fbshipit-source-id: be3e65358d449a9aedeac1a2f8c8126519845dbb
@AlexanderMouton deleted the mflbmve_bug branch April 16, 2024 07:53
Labels
CLA Signed, Merged