How to use LinearTruncatedFidelityKernel in Multi-Fidelity optimization #2706
Replies: 2 comments
-
Yes, that is the inductive bias of the kernel (though it's been a while since I've thought through this in great detail).
Not directly, but it shouldn't be hard to implement a modified kernel that encodes this knowledge.
I don't think this exact kernel has a reference, but it is a variant of the multi-fidelity kernels discussed in Section 5.3 of https://arxiv.org/pdf/1903.04703
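To illustrate the "modified kernel" idea from the reply above, here is a minimal numpy sketch of a toy fidelity-downweighting kernel in the spirit of the multi-fidelity kernels from Section 5.3 of the linked paper. The specific form, names, and hyperparameters below are my assumptions for illustration; this is not the actual `LinearTruncatedFidelityKernel` used by BoTorch.

```python
import numpy as np

# Toy multi-fidelity kernel (illustrative sketch, NOT BoTorch's kernel):
#   k((x, s), (x', s')) = k_rbf(x, x') * (beta + (1 - s)(1 - s'))
# where s in [0, 1] is the fidelity. Low-fidelity points (s near 0) share
# extra covariance through the (1 - s)(1 - s') term, which vanishes at the
# target fidelity s = 1.

def rbf(x1, x2, lengthscale=1.0):
    """Stationary RBF kernel on the 1-d design variable."""
    sq = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * sq / lengthscale**2)

def fidelity_kernel(x, s, beta=0.5, lengthscale=1.0):
    """Covariance matrix for inputs (x_i, s_i).

    PSD by construction: it is the elementwise (Schur) product of two PSD
    matrices -- an RBF Gram matrix and a linear-plus-bias fidelity kernel.
    """
    k_x = rbf(x, x, lengthscale)
    k_s = beta + np.outer(1.0 - s, 1.0 - s)
    return k_x * k_s

x = np.array([0.0, 0.5, 1.0, 1.5])
s = np.array([0.0, 0.5, 1.0, 1.0])  # fidelity of each point
K = fidelity_kernel(x, s)
min_eig = np.linalg.eigvalsh(K).min()  # non-negative up to float error
```

Because products and sums of PSD kernels are PSD, a construction like this stays a valid GP kernel while encoding how much low fidelities should be trusted.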
-
Hi, I also have a question regarding the LinearTruncatedFidelityKernel: is this kernel really positive definite, and therefore a valid kernel for GPs? With:
When evaluating the PSD-ness of the kernel, it is sufficient to look at this part: Now, the polynomial kernel is PSD for positive natural-number powers, but the power hyperparameter is allowed to be any positive real number. So is the kernel truly PSD? I am asking because I am getting a covariance matrix for test data that is not PSD, and therefore I cannot compute outputs. My training data is:
and this code gets me the negative eigenvalues for the covariance matrix of the LinearTruncatedFidelityKernel:
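The PSD concern raised above can be checked numerically. The sketch below uses toy data of my own choosing (it is not the questioner's training data or code) to show that an elementwise natural-number power of a PSD Gram matrix stays PSD, while a non-integer power can produce a genuinely negative eigenvalue:

```python
import numpy as np

# The polynomial kernel (x.x' + c)^p is PSD for natural-number powers p,
# but an elementwise non-integer power of a PSD matrix need not be PSD.
# Hypothetical toy inputs chosen to make the failure visible:
x = np.array([1.0, -1.0, 0.0])
gram = np.outer(x, x)             # PSD Gram matrix of the linear kernel

K_int = (1.0 + gram) ** 2         # integer power p=2: still a valid kernel
K_half = (1.0 + gram) ** 0.5      # non-integer power p=0.5: not PSD here

eig_int = np.linalg.eigvalsh(K_int).min()    # >= 0 up to float error
eig_half = np.linalg.eigvalsh(K_half).min()  # ~ -0.22, genuinely negative

# Tiny negative eigenvalues (order -1e-8) from floating point are routinely
# fixed by adding jitter to the diagonal; a large negative eigenvalue like
# the one above means the kernel itself is not PSD for that power, and
# jitter of the usual magnitude does not rescue it:
K_jittered = K_half + 1e-6 * np.eye(3)
```

So distinguishing numerical non-PSD-ness (fixable with jitter) from structural non-PSD-ness (a genuinely invalid power) is the first diagnostic step when a covariance matrix fails a Cholesky factorization.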
-
I am doing multi-fidelity optimization using the SingleTaskMultiFidelityGP with default options. Reading through the documentation of LinearTruncatedFidelityKernel, I have a few questions: