Replies: 1 comment
I can't say I have a great understanding of the nuances of the active learning acquisition functions and why you see this behavior (or whether that is surprising), but let me share some thoughts:
Issue description
I am working on active learning and have tested the `PosteriorStandardDeviation` (`PSTD`) and `qNegIntegratedPosteriorVariance` (`NIPV`) acquisition functions on the Hartmann 6 test function. Active learning involves some randomness, so I ran 100 Monte Carlo iterations.
As you can see in the figure below, the leave-one-out error (errLOO) is lower with the `PSTD` criterion than with `NIPV`. Moreover, the variance across runs is smaller with `PSTD` than with `NIPV`. I also ran the same exercise with space-filling Latin hypercube sampling (`LHS`), which shows a noisy result.
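For context, errLOO here is the leave-one-out cross-validation error of the GP on the points acquired so far. A minimal sketch of how it could be computed with BoTorch's cross-validation helpers (recent releases) follows; the use of `SingleTaskGP` and the RMSE definition are my assumptions, not necessarily what was used for the figure.

```python
import torch
from botorch.cross_validation import batch_cross_validation, gen_loo_cv_folds
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood


def loo_rmse(train_X: torch.Tensor, train_Y: torch.Tensor) -> torch.Tensor:
    """Leave-one-out RMSE of a SingleTaskGP refit on each fold (assumed metric)."""
    cv_folds = gen_loo_cv_folds(train_X=train_X, train_Y=train_Y)
    cv_results = batch_cross_validation(
        model_cls=SingleTaskGP,
        mll_cls=ExactMarginalLogLikelihood,
        cv_folds=cv_folds,
    )
    # Posterior mean at each held-out point vs. the observed value.
    err = cv_results.posterior.mean - cv_results.observed_Y
    return err.pow(2).mean().sqrt()
```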
I believe the `NIPV` criterion is a standard benchmark in the active learning literature, so I want to understand the possible reasons why, in my implementation, the `PSTD` criterion works better than `NIPV`; at the very least, `PSTD` shows more robust convergence. Maybe my hyperparameter settings for `optimize_acqf` are not good, or maybe `NIPV` cannot integrate the variance correctly. I use the `draw_sobol_samples` helper function to generate the points to be integrated over. Thank you very much for your insights on this.
This discussion should be closely related to #1366 and #2060.
Code example
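The snippet below is a minimal sketch of the comparison described above, assuming recent BoTorch APIs (`PosteriorStandardDeviation` in `botorch.acquisition.analytic`, `qNegIntegratedPosteriorVariance` in `botorch.acquisition.active_learning`). The initial design size, iteration budget, integration-set size, and `optimize_acqf` settings are illustrative choices, not the values behind the figure.

```python
import torch
from botorch.acquisition.active_learning import qNegIntegratedPosteriorVariance
from botorch.acquisition.analytic import PosteriorStandardDeviation
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from botorch.test_functions import Hartmann
from botorch.utils.sampling import draw_sobol_samples
from gpytorch.mlls import ExactMarginalLogLikelihood

problem = Hartmann(dim=6)
bounds = problem.bounds  # 2 x 6 tensor; [0, 1]^6 for Hartmann

N_INIT, N_ITER = 10, 50  # illustrative budget
USE_NIPV = False  # toggle between the two criteria

# Initial space-filling design.
train_X = draw_sobol_samples(bounds=bounds, n=N_INIT, q=1).squeeze(-2)
train_Y = problem(train_X).unsqueeze(-1)

for _ in range(N_ITER):
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

    if USE_NIPV:
        # Integrate the posterior variance over quasi-random points
        # generated with draw_sobol_samples, as described in the issue.
        mc_points = draw_sobol_samples(bounds=bounds, n=512, q=1).squeeze(-2)
        acqf = qNegIntegratedPosteriorVariance(model, mc_points=mc_points)
    else:
        # PSTD: pick the point of maximum posterior standard deviation.
        acqf = PosteriorStandardDeviation(model)

    candidate, _ = optimize_acqf(
        acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=512
    )
    train_X = torch.cat([train_X, candidate])
    train_Y = torch.cat([train_Y, problem(candidate).unsqueeze(-1)])
```

Toggling `USE_NIPV` yields the two curves being compared; an `LHS`-style baseline would replace the acquisition step with a fixed space-filling design.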
System Info