Inducing point selectors #511
Conversation
Looks good! Just a few minor comments.
self.model.q_sqrt.assign(new_q_sqrt)  # [L, N, N]

if isinstance(self.model.inducing_variable, SharedIndependentInducingVariables):
    self.model.inducing_variable.inducing_variable.Z.assign(new_inducing_points)  # [M, D]
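The diff above dispatches on the inducing-variable type before assigning the new points, because gpflow's shared multi-output wrapper holds the actual points one level deeper. A minimal sketch of that dispatch, with illustrative stand-in classes in place of gpflow's `InducingPoints` and `SharedIndependentInducingVariables` (the helper `assign_inducing_points` is hypothetical, not part of the PR):

```python
import numpy as np

# Stand-ins for gpflow's inducing-variable classes, for illustration only.
class InducingPoints:
    def __init__(self, Z):
        self.Z = Z  # [M, D] array of inducing point locations

class SharedIndependentInducingVariables:
    """Wraps a single inducing variable shared across output dimensions."""
    def __init__(self, inducing_variable):
        self.inducing_variable = inducing_variable

def assign_inducing_points(inducing_variable, new_inducing_points):
    # Shared multi-output wrappers hold the actual points one level deeper,
    # hence the isinstance check mirroring the diff above.
    if isinstance(inducing_variable, SharedIndependentInducingVariables):
        inducing_variable.inducing_variable.Z = new_inducing_points  # [M, D]
    else:
        inducing_variable.Z = new_inducing_points  # [M, D]

# Usage: both the plain and the shared variants end up with the same points.
Z_new = np.zeros((5, 2))
plain = InducingPoints(np.ones((5, 2)))
shared = SharedIndependentInducingVariables(InducingPoints(np.ones((5, 2))))
assign_inducing_points(plain, Z_new)
assign_inducing_points(shared, Z_new)
```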
Is the bulk of this logic specific to SparseVariational, or is it the same in all SupportsGetInducingVariables models? If it's the same, should we perhaps update SupportsGetInducingVariables with an abstract set_inducing_variables(new_inducing_points, new_q_mu, new_q_sqrt) method, and a concrete update_inducing_variables(new_inducing_points) method which calculates the new q values and calls it?
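One possible shape for the suggested split (a sketch only: the method names follow the suggestion above, and the q-value computation is a placeholder, not the real conditioning update):

```python
from abc import ABC, abstractmethod
import numpy as np

class SupportsGetInducingVariables(ABC):
    """Sketch of the proposed interface split."""

    @abstractmethod
    def set_inducing_variables(self, new_inducing_points, new_q_mu, new_q_sqrt):
        """Model-specific: assign the new points and variational parameters."""

    def update_inducing_variables(self, new_inducing_points):
        # Shared, concrete logic: compute new q values for the new points,
        # then delegate the actual assignment to the model-specific method.
        new_q_mu, new_q_sqrt = self._compute_q(new_inducing_points)
        self.set_inducing_variables(new_inducing_points, new_q_mu, new_q_sqrt)

    def _compute_q(self, new_inducing_points):
        # Placeholder: a real implementation would condition the variational
        # distribution on the new inducing locations.
        m = len(new_inducing_points)
        return np.zeros((m, 1)), np.eye(m)[None]  # [M, 1], [L, M, M]

class ToyModel(SupportsGetInducingVariables):
    """Minimal concrete model that just records what it was given."""
    def set_inducing_variables(self, new_inducing_points, new_q_mu, new_q_sqrt):
        self.Z = new_inducing_points
        self.q_mu = new_q_mu
        self.q_sqrt = new_q_sqrt

model = ToyModel()
model.update_inducing_variables(np.random.default_rng(0).uniform(size=(4, 2)))
```

The design choice here is the classic template-method pattern: the shared q-value recomputation lives once in the base class, while each model only implements the assignment it alone knows how to do.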
These are good questions, and quite relevant as we need to move SGPR to its own wrapper...
I don't know how the SGPR works. In fact, there are lots of issues with using it at the moment: its trajectory_sampler, conditional_predict* methods, and covariance_between_points all assume that it is a GPR. I think we should get rid of it.
In summary, we never use it, and none of the functionality that's meant to work with it actually does.
I won't make these changes in this PR.
Looks OK to me...
- I would move some code to utils.py
- the other major comment is to consider how much of this can be reused in an SGPR wrapper, and then potentially move things so that the code can be easily reused in a future SGPR wrapper
New functionality that allows us to dynamically update the inducing points used by our SparseVariational models as the optimization progresses. At the moment we only support using the same inducing points throughout (as specified when building the underlying gpflow SVGP model). I have added new unit tests to make sure this doesn't happen again. So to be clear: we support models with independent inducing points; we just don't have functionality to update their points as the optimization progresses.
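The description above is about re-selecting inducing points as optimization progresses. A minimal sketch of one such selector, a hypothetical random-subset strategy over the observed inputs (not the PR's actual implementation; real selectors might use k-means centres or greedy conditional-variance selection):

```python
import numpy as np

def select_inducing_points(X, num_inducing, rng=None):
    """Pick a random subset of the observed inputs as inducing points.

    A deliberately simple stand-in for smarter inducing point selectors.
    X: [N, D] array of observed inputs; returns an [M, D] array with
    M = min(num_inducing, N).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    indices = rng.choice(len(X), size=min(num_inducing, len(X)), replace=False)
    return X[indices]  # [M, D]

# Usage: re-select 10 inducing points from 100 observed inputs.
X = np.random.default_rng(1).uniform(size=(100, 2))
Z = select_inducing_points(X, num_inducing=10)
```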