
Conversation

@ShubhamSRath
Contributor

Enhance experimental design using GP based uncertainty to sample training points

@eckelsjd
Owner

First, can you please address the following:

  1. Only new commits with new changes should be included on the PR -- most of these were already merged in feat: adds GPR for interpolation #49. To fix, either cherry-pick the new commits onto a branch off main, or rebase onto main. Each commit should specify what it adds, and none should have to do with the original GPR.
  2. Relatedly, there seem to be some changes to previous test cases -- these should be reverted since they are not affected by uncertainty sampling.

Then, there are some design issues here:

  1. Nearly all of the new UncertaintySampling training data class seems to be copied from SparseGrid.
  2. The isinstance blocks in Component.activate_index are not great -- you can imagine that this will grow quickly in complexity to handle all the different ways of mixing and matching Interpolators and TrainingData.

To resolve these issues, can you sketch up some slides that describe exactly the new desired behavior and how you want your new class to handle it? Specifically, show what data needs to be passed around to enable the new functionality. For example, it is okay if we need to change the call signature of TrainingData.refine so that it gets information from the Interpolator, or perhaps we move some of the GPR functionality out of the Interpolator class and into a new TrainingData class. Most importantly, the inner workings of Interpolators and TrainingData should be hidden from Component.activate_index: it should work regardless of the underlying classes, so we shouldn't have to change how Component behaves when either the interpolator or the training data changes.

Before coding something up in detail, please provide an overview of the design first. This is on the right track, but we need to fix Component.activate_index and the large duplication of SparseGrid.
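[Editor's note] One way to keep Component.activate_index agnostic to the concrete classes is to route any interpolator state through the TrainingData.refine call signature, as suggested above. The following is a minimal hypothetical sketch, not the actual amisc API -- all class and function names are illustrative:

```python
# Hypothetical sketch: TrainingData.refine optionally accepts interpolator
# state, so the component-level code never needs isinstance checks.
from abc import ABC, abstractmethod


class TrainingData(ABC):
    @abstractmethod
    def refine(self, interpolator_state=None):
        """Return new training points; may use interpolator_state if given."""


class SparseGridData(TrainingData):
    def refine(self, interpolator_state=None):
        # Existing behavior: Leja-style tensor-product refinement,
        # independent of the interpolator (placeholder return value).
        return ["leja_point"]


class UncertaintyData(TrainingData):
    def refine(self, interpolator_state=None):
        # Use GP posterior variance when a fitted GP is available,
        # otherwise fall back to Leja sampling (e.g. the first iteration).
        if interpolator_state is not None:
            return ["max_variance_point"]
        return ["leja_point"]


def activate_index(training_data, interpolator_state=None):
    # Component-level code: identical for every TrainingData subclass,
    # no isinstance checks required.
    return training_data.refine(interpolator_state)
```

With this shape, swapping SparseGridData for UncertaintyData (or Lagrange for GPR) changes nothing in the component-level call site; the fallback logic lives entirely inside the TrainingData subclass.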

@ShubhamSRath
Contributor Author

  1. Most of the code is similar to SparseGrid since I am trying to keep the same tensor product structure. The only change is the sampling_1d function, which optimizes for posterior variance from the GP model. Leja sampling is also included as a fallback in case a GP model is not found (which is the case for the first iteration, at least). So ideally the only difference to TrainingData.refine would be passing the GP model into it.

  2. In Component.activate_index, I included the isinstance block since uncertainty-based sampling only works if the chosen interpolator is GPR. If Lagrange is selected, we wouldn't have a GP model to sample from.

I will try cleaning up the UncertaintySampling code a bit more but please advise on how to deal with the Component.activate_index function. I am also attaching the writeup wherein I proposed the UncertaintySampling framework for your reference.
amisc_uncertainty.pdf
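[Editor's note] The sampling_1d idea described in point 1 -- maximize GP posterior variance over candidate points, with greedy Leja sampling as the fallback when no GP model exists yet -- can be sketched as follows. This is an illustrative sketch only; the function names and signatures are assumptions, not the actual amisc implementation:

```python
# Hypothetical sketch of the sampling_1d idea: pick the candidate with the
# largest GP posterior variance; fall back to greedy Leja sampling when no
# GP model has been fit yet (e.g. on the first iteration).
import math


def leja_point(candidates, existing):
    """Greedy Leja rule: maximize the product of distances to existing points."""
    scores = [math.prod(abs(c - x) for x in existing) for c in candidates]
    return candidates[scores.index(max(scores))]


def sampling_1d(candidates, existing, gp_variance=None):
    """Return the next 1d training point.

    gp_variance: optional callable giving the GP posterior variance at a
    point; pass None when no GP model is available (Leja fallback).
    """
    if gp_variance is None:
        return leja_point(candidates, existing)
    variances = [gp_variance(c) for c in candidates]
    return candidates[variances.index(max(variances))]
```

Under this sketch, the only information TrainingData.refine would need from the interpolator is the gp_variance callable (or None), which keeps the component-level code unchanged.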

@ShubhamSRath deleted the dev branch October 22, 2025 22:23