
Refactor optimizer interface #218

Open
apaleyes opened this issue Jun 3, 2019 · 3 comments

Comments

@apaleyes
Collaborator

apaleyes commented Jun 3, 2019

Our optimizers have the following signature:

def optimize(self, x0: np.ndarray, f: Callable = None, df: Callable = None, f_df: Callable = None)

where:

  • f is the function
  • df is the gradient of the function
  • f_df is a single callable returning the tuple of the previous two (function value and gradient)

This gives rise to questions: what should happen when all three are given? What if both df and f_df are given? Which one takes priority?

This is a remnant of GPyOpt, brought in while we were removing GPyOpt as a dependency. We should refactor it away and make the interface less error-prone. The most logical option seems to be removing f_df, but this is open to suggestions.
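A minimal sketch of that suggestion, assuming f_df is simply dropped and f and df keep their current meanings (the Optimizer class name here is just a placeholder, not the actual emukit class):

from typing import Callable, Optional
import numpy as np

class Optimizer:
    def optimize(self, x0: np.ndarray, f: Callable,
                 df: Optional[Callable] = None):
        # f is required and returns the objective value;
        # df optionally returns its gradient, for gradient-based optimizers.
        # With f_df gone, there is exactly one way to supply each piece.
        raise NotImplementedError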

@mmahsereci
Contributor

Is there a reason that the method gets the callables instead of the optimizer object?

Regarding Andrei's comment, we could also have a boolean which is True if f also returns a gradient and False if not. Like this we only have one callable, either f or f_df.
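A minimal sketch of that variant, with a hypothetical flag name (f_returns_gradient is not in the codebase):

from typing import Callable
import numpy as np

class Optimizer:
    def optimize(self, x0: np.ndarray, f: Callable,
                 f_returns_gradient: bool = False):
        # If f_returns_gradient is True, f(x) returns (value, gradient);
        # otherwise f(x) returns the value only.
        # Only one callable is ever passed in.
        raise NotImplementedError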

@marpulli
Contributor

marpulli commented Jun 3, 2019

@mmahsereci do you mean the acquisition rather than the optimizer object? This is leftover from GPyOpt, where things like local penalization were implemented in such a way that you were no longer optimizing an acquisition object but just an arbitrary Python function. I'm not sure if we will need to do that in emukit.

I like the idea of passing in either f or f_df, but we will still have to do some magic in the optimizers, as the interfaces to the scipy optimizers are inconsistent - the current lbfgs optimizer wants f_df whereas the trust region optimizer wants f and df separately.
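For illustration, a sketch of that magic, assuming the two scipy routines in question are fmin_l_bfgs_b (which accepts a single callable returning (value, gradient)) and minimize with method='trust-constr' (which takes fun and jac separately); the wrapper function names are hypothetical:

from typing import Callable
import numpy as np
from scipy.optimize import fmin_l_bfgs_b, minimize

def optimize_lbfgs(x0: np.ndarray, f_df: Callable) -> np.ndarray:
    # fmin_l_bfgs_b consumes the combined callable directly
    x_opt, _, _ = fmin_l_bfgs_b(f_df, x0)
    return x_opt

def optimize_trust_region(x0: np.ndarray, f_df: Callable) -> np.ndarray:
    # 'trust-constr' wants separate fun and jac, so split the combined callable
    f = lambda x: f_df(x)[0]
    df = lambda x: f_df(x)[1]
    return minimize(f, x0, method='trust-constr', jac=df).x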

@apaleyes
Collaborator Author

apaleyes commented Jun 3, 2019

@marpulli magic inside is perfectly fine if that's what scipy requires us to do; that is just an implementation detail. An inconsistent, easy-to-misuse interface isn't.
