
HIPPO: HIghly Parallelizable Pareto Optimization #519

Merged: 10 commits from mo-lp-pr into secondmind-labs:develop on Mar 14, 2022

Conversation

@apaleyes (Collaborator) commented Feb 28, 2022

Welcome HIPPO (and thanks Henry for suggesting the name)! It is a penalization method for collecting cheap batches in a multi-objective optimization setting. The core idea is to penalize a point by its distance to the already collected batch. The details should be clear from the comments and docstrings, so please flag anything that isn't!

We have a pretty good idea of HIPPO's performance; see this link for some benchmarking (MO LP was the working title): https://github.com/apaleyes/trieste/blob/mo-lp/docs/notebooks/mo_lp/plot_results.ipynb

Implementation detail: since we want HIPPO to work with both EHVI and ECHVI, it isn't implemented as a SingleModel... builder; instead we ask the user to provide the tag to be penalized, with OBJECTIVE used by default.
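For intuition only, here is a minimal sketch of the penalization idea (purely illustrative, not the actual Trieste implementation; all names and the exact penalty shape are made up for this example):

import tensorflow as tf

def hippo_penalized_acquisition(base_acquisition, pending_points):
    """Illustrative sketch: down-weight the base acquisition for candidate
    points that lie close to points already collected in the batch.

    base_acquisition: callable mapping candidates [N, D] to values [N, 1]
    pending_points: already collected batch, shape [B, D]
    """

    def penalized(x):  # x has shape [N, D]
        # Pairwise distances between candidates and pending points, shape [N, B].
        distances = tf.norm(x[:, None, :] - pending_points[None, :, :], axis=-1)
        # Simple distance-based penalties in (0, 1]; nearby points are penalized most.
        penalties = 1.0 - tf.exp(-distances)
        total_penalty = tf.reduce_prod(penalties, axis=-1, keepdims=True)  # [N, 1]
        return base_acquisition(x) * total_penalty

    return penalized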

@henrymoss (Collaborator) left a comment:

LGTM, a few comments

@@ -379,3 +384,245 @@ def _update_expected_improvement_fn(
isinstance(self._expected_improvement_fn, expected_hv_improvement), []
)
self._expected_improvement_fn.update(_partition_bounds) # type: ignore


class HippoPenalizationAcquisitionFunction(GreedyAcquisitionFunctionBuilder[ProbabilisticModel]):
Collaborator:

Is there any way of inheriting from the local penalization builder? It looks very similar to yours.

@apaleyes (author):

Not directly, because it has some bits we don't want, like penalizer choice (we don't have a choice) and sampling (which we don't do). But we could maybe consider moving their common bits out?

@henrymoss requested a review from uri-granta, March 1, 2022 10:35
@uri-granta (Collaborator) left a comment:

Some initial comments/questions. The later ones sometimes affect the earlier ones so please read them all before you make any changes (sorry!).


class HippoPenalizationAcquisitionFunction(GreedyAcquisitionFunctionBuilder[ProbabilisticModel]):
r"""
HIPPO: HIghly Parallelizable Pareto Optimization
Collaborator:

Inspired name! Maybe @henrymoss can come up with a backronym for trieste?

@@ -379,3 +384,245 @@ def _update_expected_improvement_fn(
isinstance(self._expected_improvement_fn, expected_hv_improvement), []
)
self._expected_improvement_fn.update(_partition_bounds) # type: ignore


class HippoPenalizationAcquisitionFunction(GreedyAcquisitionFunctionBuilder[ProbabilisticModel]):
Collaborator:

I expect you want to make this generic in the type of supported models. That way you can pass in a base builder that doesn't support all model types and still use this for those types of model.

Suggested change
class HippoPenalizationAcquisitionFunction(GreedyAcquisitionFunctionBuilder[ProbabilisticModel]):
class HippoPenalizationAcquisitionFunction(GreedyAcquisitionFunctionBuilder[ProbabilisticModelType]):

@apaleyes (author):

thanks!

@apaleyes (author):

I made this change, but the other types stayed as ProbabilisticModel, and I can't claim to fully understand why it wouldn't work with ProbabilisticModelType the whole way. Scanning other Trieste code didn't bring any clarity. Can you enlighten me please: which type is supposed to go where?

@uri-granta (Collaborator) commented Mar 7, 2022:

Think of ProbabilisticModelType as meaning "a specific type of model", while ProbabilisticModel means "any model". Hence making HIPPO a GreedyAcquisitionFunctionBuilder[ProbabilisticModelType] is saying it might only support specific model types. Doing this only makes sense if you also change the base_acquisition_function_builder to be AcquisitionFunctionBuilder[ProbabilisticModelType]. That allows you to pass in a base builder that only supports certain model types, as long as you only use HIPPO with those model types.

At the very least you'll want to change all mentions of ProbabilisticModel in HIPPO (e.g. in prepare_acquisition_function) to ProbabilisticModelType. If you're getting typing errors feel free to message me and I can take a look.
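Roughly, the suggested shape is the following (a sketch only, with abbreviated signatures; it assumes the relevant Trieste names GreedyAcquisitionFunctionBuilder, AcquisitionFunctionBuilder, ProbabilisticModelType and OBJECTIVE are in scope):

from typing import Optional

class HippoPenalizationAcquisitionFunction(
    GreedyAcquisitionFunctionBuilder[ProbabilisticModelType]
):
    def __init__(
        self,
        objective_tag: str = OBJECTIVE,
        base_acquisition_function_builder: Optional[
            AcquisitionFunctionBuilder[ProbabilisticModelType]
        ] = None,
    ):
        # The base builder is parameterized over the same model type variable,
        # so a base builder that only supports certain model types restricts
        # HIPPO to those model types as well.
        self._objective_tag = objective_tag
        self._base_builder = base_acquisition_function_builder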

@apaleyes (author):

I am still not 100% clear on the difference. "A specific type of model": which type is it? That's only known when the user declares a variable of type HIPPO and replaces the generic with a concrete type, and that happens the same way regardless of which generic type we used, no?

In other words, as a user I can write:

hippo: HIPPO[TrainableProbabilisticModel] = build_model()

regardless of whether HIPPO was declared with ProbabilisticModelType or ProbabilisticModel. If so, what's the difference between the two?

All that said, we shouldn't restrict HIPPO to any particular type of model, since the goal here is to make it general. Does this mean we should use ProbabilisticModel throughout?

@apaleyes (author):

On further thought, I think I get it, @uri-granta. This code would be valid when it should not be, right?

class HIPPO[ProbabilisticModel]:
    base_builder: AcquisitionBuilder[ProbabilisticModel]

    def __init__(self, base_builder):
        self.base_builder = base_builder

ehvi = EHVI[B]()
hippo = HIPPO[A](ehvi)

whereas if we were to declare HIPPO like this

class HIPPO[ProbabilisticModelType]

the above code would indeed be caught by mypy as invalid, right? Because the types of the base builder and of HIPPO don't match, and ProbabilisticModelType controls for that.

Is that understanding correct? If so, I certainly agree we should use ProbabilisticModelType to ensure the penalization and base functions are typed the same way.

Collaborator:

If we don't use ProbabilisticModelType then mypy should complain if we try to pass in a base builder like AugmentedExpectedImprovement or PredictiveVariance, since those builders don't support all ProbabilisticModels.

Would you like me to have a go at updating this so you can see what I mean?
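In the meantime, here is a self-contained sketch of the typing behaviour in question, using plain Python generics (all names are illustrative and independent of the Trieste class hierarchy):

from typing import Generic, TypeVar

class AnyModel: ...                    # stands in for ProbabilisticModel
class SpecialModel(AnyModel): ...      # a model type with extra capabilities

ModelType = TypeVar("ModelType", bound=AnyModel)  # stands in for ProbabilisticModelType

class Builder(Generic[ModelType]): ...
class RestrictedBuilder(Builder[SpecialModel]): ...  # a builder that only supports SpecialModel

class Penalizer(Generic[ModelType]):
    def __init__(self, base: Builder[ModelType]) -> None:
        self.base = base

ok: Penalizer[SpecialModel] = Penalizer(RestrictedBuilder())  # fine: model types match
bad: Penalizer[AnyModel] = Penalizer(RestrictedBuilder())     # mypy error: builder only supports SpecialModel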

@apaleyes (author):

Got it!

We are only interested in multi-objective non-batch builders, which support any model at the moment (EHVI and ECHVI). But clearly we don't want to be restrictive in case later on we have multi-objective builders that only support a subset of models. I guess it's worth using ProbabilisticModelType here.

Thanks for the offer. I'll give it another go first, watch out for the update (likely tomorrow).

@apaleyes (author):

updated

if new_optimization_step:
    self._update_base_acquisition_function(models, datasets)

if pending_points is None or len(pending_points) == 0:
Collaborator:

As @henrymoss noted, this does share a fair amount of code with LocalPenalization. Annoyingly, making this generic in the type will make it a little bit more difficult to just inherit from LocalPenalization. Feel free to prioritise inheriting, and I can always make both classes generic in a separate PR.

@apaleyes (author):

I don't think we should inherit at all, because the "is a" semantics doesn't seem to apply here. For instance, at the moment we consider HIPPO multi-objective only, while LP is single-objective only.

I am more inclined to have a common base that both inherit from. But as you say, typing might make it very difficult to have such a base. Thoughts?

Collaborator:

Actually the main thing that makes it difficult to have a base class is the fact that LocalPenalization is a SingleModelGreedyAcquisitionBuilder while HIPPO is a GreedyAcquisitionFunctionBuilder. I would leave this for now. We can always clean it up later.

@apaleyes (author):

Yup, didn't touch it yet, and won't, thanks!

@apaleyes requested review from uri-granta and henrymoss, March 2, 2022 17:36
@henrymoss (Collaborator):
Okay, so is this ready to go?

@apaleyes (author) commented Mar 9, 2022:

Hopefully! But I'd like @uri-granta to approve before merging, just to make sure the typing is all correct.

@uri-granta (Collaborator) left a comment:

Looks good!

@apaleyes merged commit 2eee787 into secondmind-labs:develop on Mar 14, 2022
@apaleyes deleted the mo-lp-pr branch on March 14, 2022 10:24