Releases: meta-pytorch/botorch

Maintenance Release - Various Improvements across models, acquisition function optimization, PFN integration
Compatibility
- Require GPyTorch>=1.14.2 (#3055).
New Features
- Add `EnsembleMapSaasSingleTaskGP` (#3035, #3038, #3040).
- Allow different inferred noise levels for each task in `MultitaskGP` (#2997).
- Allow the `LatentKroneckerGP` model to support different `T` values at train and test time (#3032, #3037).
- Allow `qHypervolumeKnowledgeGradient` to return log values for better numerical stability (#2974, #2976, #2979).
- Add `NumericToCategoricalEncoding` input transform (#2907).
- Add a `MatheronPathModel` - a `DeterministicModel` returning a Matheron path sample (#2984).
- Project points generated in acquisition function optimization to the feasible space (#3010).
- Add support for non-uniform model weights in `EnsembleModel` and `EnsemblePosterior` (#2993).
- Allow optimizers to support negative indices for fixed features (#2970).
- Add worst known feasible value to constrained test problems (#3016).
Bug Fixes
- Fix `optimize_acqf_mixed_alternating` initialization with categorical features (#2986).
- Use `IIDNormalSampler` for `PosteriorList` by default to fix issue with correlated Sobol samples (#2977).
- Fix `condition_on_observations` to correctly apply input transforms and properly add data to `train_inputs` (#2989, #2990, #3034).
- Fix handling of input transforms for `AdditiveMapSaasSingleTaskGP` (#3042).
- Preserve train inputs and targets through transforms (#3044).
- Improve how `qNEHVI` handles pending points to avoid duplicate suggestions when initial pending points are passed (#2985).
Other Changes
- Add support for missing tasks in multi-task GP models (#2960).
- Add input constructor for `LogConstrainedExpectedImprovement` (#2973).
- Improve error handling and update documentation for inter-point constraints (#3003).
- Make `AnalyticAcquisitionFunction._mean_and_sigma()` return output dim consistently (#3028).
- Improve initialization with continuous relaxation in `optimize_acqf_mixed_alternating` (#3041).
- Implement `ContextualDataset.__eq__()` (#3005).
- Check shape of state dict when comparing input transforms (#3051).
- Add `py.typed` file to prevent tools complaining about type stubs (#2982).
- Improve best feasible objective computation; point user to use probability of feasibility (#3011).
Deprecations and removals
- Deprecate `get_fitted_map_saas_ensemble()` in favor of `EnsembleMapSaasSingleTaskGP` (#3036).
Changes to botorch_community
v0.15.1: Compatibility release
[0.15.1] -- Aug 12, 2025
This is a compatibility release, coming only one week after 0.15.0.
New features
- Enable optimizing a sequence of acquisition functions in `optimize_acqf` (#2931).
Maintenance Release, Optimizer Improvements
New Features
- NP Regression Model w/ LIG Acquisition (#2683).
- Fully Bayesian Matern GP with dimension scaling prior (#2855).
- Option for input warping in non-linear fully Bayesian GPs (#2858).
- Support for `condition_on_observations` in `FullyBayesianMultiTaskGP` (#2871).
- Improvements to `optimize_acqf_mixed_alternating`:
  - Support categoricals in alternating optimization (#2866).
  - Batch mixed optimization (#2895).
  - Non-equidistant discrete dimensions for `optimize_acqf_mixed_alternating` (#2923).
  - Update syntax for categoricals in `optimize_acqf_mixed_alternating` (#2942).
  - Equality constraints for `optimize_acqf_mixed_alternating` (#2944).
- Multi-output acquisition functions and related utilities:
- Batched L-BFGS-B for more efficient acquisition function optimization (#2870, #2892).
- Pathwise Thompson sampling for ensemble models (#2877).
- ROBOT tutorial notebook (#2883).
- Add community notebooks to the botorch.org website (#2913).
Bug Fixes
- Fix model paths in prior fitted networks (#2843).
- Fix a bug where input transforms were not applied in fully Bayesian models in train mode (#2859).
- Fix local `Y` vs global `Y_train` in the `generate_batch` function in the TuRBO tutorial (#2862).
- Fix CUDA support for `FullyBayesianMTGP` (#2875).
- Fix edge case with NaNs in `is_non_dominated` (#2925).
- Normalize for correct fidelity in `qLowerBoundMaxValueEntropy` (#2930).
- Fix bug where the botorch_community `VBLLModel` posterior doesn't work with a single-value tensor (#2929).
- Fix variance shape bug in Riemann posterior (#2939).
- Fix input constructor for `LogProbabilityOfFeasibility` (#2945).
- Fix `AugmentedRosenbrock` problem and expand testing for optimizers (#2950).
Other Changes
- Improved documentation for `optimize_acqf` (#2865).
- Fully Bayesian Multi-Task GP cleanup (#2869).
- `average_over_ensemble_models` decorator for acquisition functions (#2873).
- Changes to I-BNN tutorial (#2889).
- Allow batched fixed features in `gen_candidates_scipy` and `gen_candidates_torch` (#2893).
- Refactor of `MultiTask` / `FullyBayesianMultiTaskGP` to use `ProductKernel` and `IndexKernel` (#2908).
- Various changes to PFNs to improve Ax compatibility (#2915, #2940).
- Eliminate expensive indexing in `separate_mtmvn` (#2920).
- Added reset method to `StoppingCriterion` (#2927).
- Simplify closure dispatch (#2947).
- Add `BaseTestProblem.is_minimization_problem` property (#2949).
- Simplify `NdarrayOptimizationClosure` (#2951).
Maintenance Release, PFN integration, VBLL surrogates, Classifier-based constraint support
Highlights
- Prior Fitted Network (PFN) surrogate model integration (#2784).
- Variational Bayesian last-layer models as surrogate `Model`s (#2754).
- Probabilities of feasibility for classifier-based constraints in acquisition functions (#2776).
New Features
- Helper for evaluating feasibility of candidate points (#2733).
- Allow for observation noise without a provided `evaluation_mask` in `ModelListGP` (#2735).
- Implement incremental `qLogNEI` via an `incremental` argument to `qLogNoisyExpectedImprovement` (#2760).
- Add utility for computing AIC/BIC/MLL from a model (#2785).
- New test functions:
- Add parameter types to test functions to support problems defined in mixed / discrete spaces (#2809).
- Add input validation to test functions (#2829).
- Add `[q]LogProbabilityOfFeasibility` acquisition functions (#2815).
Bug Fixes
- Remove hard-coded `dtype` from `best_f` buffers (#2725).
- Fix `dtype`/`nan` issue in `StratifiedStandardize` (#2757).
- Properly handle observed noise in `AdditiveMapSaasSingleTaskGP` with outcome transforms (#2763).
- Do not count STOPPED (due to specified budget) as a model fitting failure (#2767).
- Ensure that `initialize_q_batch` always includes the maximum value when called in batch mode (#2773).
- Fix posterior with observation noise in batched MTGP models (#2782).
- Detach tensor in `gen_candidates_scipy` to avoid test failure due to new warning (#2797).
- Fix batch computation in Pivoted Cholesky (#2823).
Other Changes
- Add optimal values for synthetic constrained optimization problems (#2730).
- Update `nonlinear_constraint_is_feasible` to return a boolean tensor (#2731).
- Restructure sampling methods for info-theoretic acquisition functions (#2753).
- Prune baseline points in `qLogNEI` by default (#2762).
- Misc updates to MES-based acquisition functions (#2769).
- Pass option to reset submodules in train method for fully Bayesian models (#2783).
- Put outcome transforms into train mode in model constructors (#2817).
- LogEI: select `cache_root` based on model support (#2820).
- Remove Ax dependency from BoTorch tutorials and reference Ax tutorials instead (#2839).
Deprecations and removals
Maintenance Release, Website Upgrade, BO with Relevance Pursuit, LatentKroneckerGP and MAP-SAAS Models
Highlights
- BoTorch website has been upgraded to utilize Docusaurus v3, with the API
  reference being hosted by ReadTheDocs. The tutorials now expose an option to
  open with Colab, for easy access to a runtime with modifiable tutorials.
  The old versions of the website can be found at archive.botorch.org (#2653).
- `RobustRelevancePursuitSingleTaskGP`, a robust Gaussian process model that adaptively identifies
  outliers and leverages Bayesian model selection (paper) (#2608, #2690, #2707).
- `LatentKroneckerGP`, a scalable model for data on partially observed grids, like the joint modeling
  of hyper-parameters and partially completed learning curves in AutoML (paper) (#2647).
- Add MAP-SAAS model, which utilizes the sparse axis-aligned subspace priors
  (paper) with MAP model fitting (#2694).
Compatibility
- Require GPyTorch==1.14 and linear_operator==0.6 (#2710).
- Remove support for anaconda (official package) (#2617).
- Remove `mpmath` dependency pin (#2640).
- Updates to optimization routines to support SciPy>1.15:
New Features
- Add support for priors in OAK Kernel (#2535).
- Add `BatchBroadcastedTransformList`, which broadcasts a list of `InputTransform`s over batch shapes (#2558).
- `InteractionFeatures` input transform (#2560).
- Implement `percentile_of_score`, which takes inputs `data` and `score`, and returns the percentile of values in `data` that are below `score` (#2568).
- Add `optimize_acqf_mixed_alternating`, which supports optimization over mixed discrete & continuous spaces (#2573).
- Add support for `PosteriorTransform` to `get_optimal_samples` and `optimize_posterior_samples` (#2576).
- Support inequality constraints & `X_avoid` in `optimize_acqf_discrete` (#2593).
- Add ability to mix batch initial conditions and internal IC generation (#2610).
- Add `qPosteriorStandardDeviation` acquisition function (#2634).
- TopK downselection for initial batch generation (#2636).
- Support optimization over mixed spaces in `optimize_acqf_homotopy` (#2639).
- Add `InfeasibilityError` exception class (#2652).
- Support `InputTransform`s in `SparseOutlierLikelihood` and `get_posterior_over_support` (#2659).
- `StratifiedStandardize` outcome transform (#2671).
- Add `center` argument to `Normalize` (#2680).
- Add input normalization step in `Warp` input transform (#2692).
- Support mixing fully Bayesian & `SingleTaskGP` models in `ModelListGP` (#2693).
- Add abstract fully Bayesian GP class and fully Bayesian linear GP model (#2696, #2697).
- Tutorial on BO constrained by probability of classification model (#2700).
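As a rough illustration of the `percentile_of_score` entry above, here is a minimal pure-Python sketch of the described behavior. This is illustrative only: BoTorch's implementation (#2568) operates on tensors, and its handling of ties may differ.

```python
def percentile_of_score(data, score):
    """Percentage of values in `data` that fall below `score`.

    Pure-Python sketch of the behavior described in the release note;
    not BoTorch's actual tensor-based implementation.
    """
    below = sum(1 for v in data if v < score)
    return 100.0 * below / len(data)

print(percentile_of_score([1, 2, 3, 4], 3.5))  # -> 75.0
```

For example, a score of 3.5 against the data `[1, 2, 3, 4]` lies above three of the four values, giving the 75th percentile.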
Bug Fixes
- Fix error in decoupled_mobo tutorial due to torch/numpy issues (#2550).
- Raise error for MTGP in `batch_cross_validation` (#2554).
- Fix `posterior` method in `BatchedMultiOutputGPyTorchModel` for tracing JIT (#2592).
- Replace hard-coded double precision in test_functions with default dtype (#2597).
- Remove `as_tensor` argument of `set_tensors_from_ndarray_1d` (#2615).
- Skip fixed feature enumerations in `optimize_acqf_mixed` that can't satisfy the parameter constraints (#2614).
- Fix `get_default_partitioning_alpha` for >7 objectives (#2646).
- Fix random seed handling in `sample_hypersphere` (#2688).
- Fix bug in `optimize_objective` with fixed features (#2691).
- `FullyBayesianSingleTaskGP.train` should not return `None` (#2702).
Other Changes
- More efficient sampling from `KroneckerMultiTaskGP` (#2460).
- Update `HigherOrderGP` to use new priors & standardize outcome transform by default (#2555).
- Update `initialize_q_batch` methods to return both candidates and the corresponding acquisition values (#2571).
- Update optimization documentation with LogEI insights (#2587).
- Make all arguments in `optimize_acqf_homotopy` explicit (#2588).
- Introduce `trial_indices` argument to `SupervisedDataset` (#2595).
- Make optimizers raise an error when provided negative indices for fixed features (#2603).
- Make input transforms `Module`s by default (#2607).
- Reduce memory usage in `ConstrainedMaxPosteriorSampling` (#2622).
- Add `clone` method to datasets (#2625).
- Add support for continuous relaxation within `optimize_acqf_mixed_alternating` (#2635).
- Update indexing in `qLogNEI._get_samples_and_objectives` to support multiple input batches (#2649).
- Pass `X` to `OutcomeTransform`s (#2663).
- Use mini-batches when evaluating candidates within `optimize_acqf_discrete_local_search` (#2682).
Deprecations
Increased robustness to dimensionality with updated hyperparameter priors
[0.12.0] -- Sep 17, 2024
Major changes
- Update most models to use dimension-scaled log-normal hyperparameter priors by
  default, which makes performance much more robust to dimensionality. See
  discussion #2451 for details. The only models that are not changed are the
  fully Bayesian models and `PairwiseGP`; for models that utilize a
  composite kernel, such as multi-fidelity/task/context, this change only
  affects the base kernel (#2449, #2450, #2507).
- Use `Standardize` by default in all the models using the upgraded priors. In
  addition to reducing the amount of boilerplate needed to initialize a model,
  this change was motivated by the change to default priors, because the new
  priors will work less well when data is not standardized. Users who do not
  want to use transforms should explicitly pass in `None` (#2458, #2532).
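To see what the `Standardize` default buys you: the transform rescales training targets to zero mean and unit variance before fitting, which is the regime the new priors are calibrated for. A minimal pure-Python sketch of that rescaling (illustrative only; BoTorch's `Standardize` operates on tensors and also un-standardizes the model's posterior):

```python
from statistics import mean, stdev

def standardize(y):
    """Zero-mean, unit-variance rescaling of a list of outcomes.

    Same idea as BoTorch's Standardize outcome transform, shown here
    in plain Python for illustration.
    """
    mu, sigma = mean(y), stdev(y)
    return [(v - mu) / sigma for v in y]

print(standardize([10.0, 12.0, 14.0]))  # -> [-1.0, 0.0, 1.0]
```

Raw outcomes on the scale of tens or thousands would sit far in the tails of priors tuned for standardized data; after this rescaling they land where the priors expect them.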
Compatibility
New features
- Introduce `PathwiseThompsonSampling` acquisition function (#2443).
- Enable `qBayesianActiveLearningByDisagreement` to accept a posterior transform, and improve its implementation (#2457).
- Enable `SaasPyroModel` to sample via NUTS when training data is empty (#2465).
- Add multi-objective `qBayesianActiveLearningByDisagreement` (#2475).
- Add input constructor for `qNegIntegratedPosteriorVariance` (#2477).
- Introduce `qLowerConfidenceBound` (#2517).
- Add input constructor for `qMultiFidelityHypervolumeKnowledgeGradient` (#2524).
- Add `posterior_transform` to `ApproximateGPyTorchModel.posterior` (#2531).
Bug fixes
- Fix `batch_shape` default in `OrthogonalAdditiveKernel` (#2473).
- Ensure all tensors are on CPU in `HitAndRunPolytopeSampler` (#2502).
- Fix duplicate logging in `generation/gen.py` (#2504).
- Raise exception if `X_pending` is set on the underlying `AcquisitionFunction` in prior-guided `AcquisitionFunction` (#2505).
- Make affine input transforms error with data of incorrect dimension, even in eval mode (#2510).
- Use fidelity-aware `current_value` in input constructor for `qMultiFidelityKnowledgeGradient` (#2519).
- Apply input transforms when computing MLL in model closures (#2527).
- Detach `fval` in `torch_minimize` to remove an opportunity for memory leaks (#2529).
Documentation
- Clarify incompatibility of inter-point constraints with `get_polytope_samples` (#2469).
- Update tutorials to use the log variants of EI-family acquisition functions, don't make tutorials pass `Standardize` unnecessarily, and other simplifications and cleanup (#2462, #2463, #2490, #2495, #2496, #2498, #2499).
- Remove deprecated `FixedNoiseGP` (#2536).
Other changes
- More informative warnings about failure to standardize or normalize data (#2489).
- Suppress irrelevant warnings in `qHypervolumeKnowledgeGradient` helpers (#2486).
- Cleaner `botorch/acquisition/multi_objective` directory structure (#2485).
- With `AffineInputTransform`, always require data to have at least two dimensions (#2518).
- Remove deprecated argument `data_fidelity` to `SingleTaskMultiFidelityGP` and deprecated model `FixedNoiseMultiFidelityGP` (#2532).
- Raise an `OptimizationGradientError` when optimization produces NaN gradients (#2537).
- Improve numerics by replacing `torch.log(1 + x)` with `torch.log1p(x)` and `torch.exp(x) - 1` with `torch.special.expm1` (#2539, #2540, #2541).
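The numerical issue behind the `log1p`/`expm1` change is easy to demonstrate with plain Python floats: for `x` below machine epsilon, `1 + x` rounds to exactly `1.0`, so the naive forms lose all information, while the fused functions stay accurate.

```python
import math

x = 1e-17  # below double-precision epsilon (~2.2e-16)

print(math.log(1 + x))   # -> 0.0: 1 + x already rounded to 1.0
print(math.log1p(x))     # -> 1e-17: accurate
print(math.exp(x) - 1)   # -> 0.0: catastrophic cancellation
print(math.expm1(x))     # -> 1e-17: accurate
```

The same reasoning carries over to the `torch` variants referenced in the entry above; the acquisition-function code paths affected by these PRs operate on exactly such near-zero quantities.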
Maintenance Release, I-BNN Kernel
Compatibility
New features
- Support evaluating posterior predictive in `MultiTaskGP` (#2375).
- Infinite width BNN kernel (#2366) and the corresponding tutorial (#2381).
- An improved elliptical slice sampling implementation (#2426).
- Add a helper for producing a `DeterministicModel` using a Matheron path (#2435).
Deprecations and Deletions
- Stop allowing some arguments to be ignored in acqf input constructors (#2356).
- Reap deprecated `**kwargs` argument from `optimize_acqf` variants (#2390).
- Delete `DeterministicPosterior` and `DeterministicSampler` (#2391, #2409, #2410).
- Removed deprecated `CachedCholeskyMCAcquisitionFunction` (#2399).
- Deprecate model conversion code (#2431).
- Deprecate `gp_sampling` module in favor of pathwise sampling (#2432).
Bug Fixes
- Fix observation noise shape for batched models (#2377).
- Fix `sample_all_priors` to not sample one value for all lengthscales (#2404).
- Make `(Log)NoisyExpectedImprovement` create a correct fantasy model with non-default `SingleTaskGP` (#2414).
Other Changes
Maintenance Release
New Features
- Implement `qLogNParEGO` (#2364).
- Support picking best of multiple fit attempts in `fit_gpytorch_mll` (#2373).
Deprecations
- Many functions that used to silently ignore arbitrary keyword arguments will now raise an exception when passed unsupported arguments (#2327, #2336).
- Remove `UnstandardizeMCMultiOutputObjective` and `UnstandardizePosteriorTransform` (#2362).
Bug Fixes
- Remove correlation between the step size and the step direction in `sample_polytope` (#2290).
- Fix pathwise sampler bug (#2337).
- Explicitly check timeout against `None` so that `0.0` isn't ignored (#2348).
- Fix boundary handling in `sample_polytope` (#2353).
- Avoid division by zero in `normalize` & `unnormalize` when lower & upper bounds are equal (#2363).
- Update `sample_all_priors` to support wider set of priors (#2371).
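The timeout fix above (#2348) guards against a classic Python pitfall: `0.0` is falsy, so a truthiness check (`if timeout:`) silently disables a zero-valued timeout. A minimal sketch of the corrected pattern, with a hypothetical helper name chosen for illustration:

```python
def timed_out(elapsed_sec, timeout_sec):
    """Return True if `elapsed_sec` has exceeded a configured timeout.

    Hypothetical helper illustrating the fix: a buggy variant written as
    `if timeout_sec: ...` would skip the comparison when timeout_sec == 0.0,
    because 0.0 is falsy. Checking against None explicitly means that only
    `timeout_sec=None` disables the timeout.
    """
    return timeout_sec is not None and elapsed_sec >= timeout_sec

print(timed_out(5.0, 0.0))   # -> True: a zero timeout expires immediately
print(timed_out(5.0, None))  # -> False: no timeout configured
```

With the buggy truthiness check, `timed_out(5.0, 0.0)` would have returned `False`, which is exactly the "0.0 is ignored" behavior the release note describes.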
Other Changes
- Clarify `is_non_dominated` behavior with NaN (#2332).
- Add input constructor for `qEUBO` (#2335).
- Add `LogEI` as a baseline in the `TuRBO` tutorial (#2355).
- Update polytope sampling code and add thinning capability (#2358).
- Add initial objective values to initial state for sample efficiency (#2365).
- Clarify behavior on standard deviations with <1 degree of freedom (#2357).
Maintenance Release, SCoreBO
Compatibility
- Require Python >= 3.10 (#2293).
New Features
- SCoreBO and Bayesian Active Learning acquisition functions (#2163).
Bug Fixes
- Fix non-None constraint noise levels in some constrained test problems (#2241).
- Fix inverse cost-weighted utility behaviour for non-positive acquisition values (#2297).
Other Changes
- Don't allow unused keyword arguments in `Model.construct_inputs` (#2186).
- Re-map task values in MTGP if they are not contiguous integers starting from zero (#2230).
- Unify `ModelList` and `ModelListGP` `subset_output` behavior (#2231).
- Ensure `mean` and `interior_point` of `LinearEllipticalSliceSampler` have correct shapes (#2245).
- Speed up task covariance of `LCEMGP` (#2260).
- Improvements to `batch_cross_validation`, support for model init kwargs (#2269).
- Support custom `all_tasks` for MTGPs (#2271).
- Error out if scipy optimizer does not support bounds / constraints (#2282).
- Support diagonal covariance root with fixed indices for `LinearEllipticalSliceSampler` (#2283).
- Make `qNIPV` a subclass of `AcquisitionFunction` rather than `AnalyticAcquisitionFunction` (#2286).
- Increase code-sharing of `LCEMGP` & define `construct_inputs` (#2291).
Deprecations
- Remove deprecated args from base `MCSampler` (#2228).
- Remove deprecated `botorch/generation/gen/minimize` (#2229).
- Remove `fit_gpytorch_model` (#2250).
- Remove `requires_grad_ctx` (#2252).
- Remove `base_samples` argument of `GPyTorchPosterior.rsample` (#2254).
- Remove deprecated `mvn` argument to `GPyTorchPosterior` (#2255).
- Remove deprecated `Posterior.event_shape` (#2320).
- Remove `**kwargs` & deprecated `indices` argument of `Round` transform (#2321).
- Remove `Standardize.load_state_dict` (#2322).
- Remove `FixedNoiseMultiTaskGP` (#2323).