Chaospy - Toolbox for performing uncertainty quantification.
Providing reproducibility in deep learning frameworks
Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables. (ICLR 2019)
Framework for modeling two-stage stochastic unit commitment optimization problems.
Code for the ICML 2024 paper "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models".
In this paper, we propose Filter Gradient Descent (FGD), an efficient stochastic optimization algorithm that produces a consistent estimate of the local gradient by solving an adaptive filtering problem with different filter designs.
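The repository implements the paper's own filter designs; as a loose illustration of the filtering idea only, the sketch below smooths noisy stochastic gradients with a simple exponential moving average before each update. The quadratic objective, Gaussian noise model, and beta smoothing factor are assumptions for the example, not the paper's method.

```python
import numpy as np

def noisy_grad(x, rng, noise=1.0):
    # Gradient of f(x) = 0.5 * ||x||^2 corrupted by Gaussian noise,
    # standing in for a stochastic mini-batch gradient.
    return x + noise * rng.standard_normal(x.shape)

def filtered_sgd(x0, lr=0.1, beta=0.9, steps=200, seed=0):
    # Exponential-moving-average filter: a simple low-pass filter that
    # reduces the variance of the gradient estimate at the cost of some bias.
    rng = np.random.default_rng(seed)
    x, g_hat = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = noisy_grad(x, rng)
        g_hat = beta * g_hat + (1.0 - beta) * g  # filter the raw gradient
        x -= lr * g_hat
    return x

print(filtered_sgd(np.array([5.0, -3.0])))  # approaches the optimum at 0
```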
PyTorch implementation of "Differentiable Antithetic Sampling for Variance Reduction in Stochastic Variational Inference" (https://arxiv.org/abs/1810.02555).
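The paper's contribution is differentiating through antithetic samples inside variational inference; the plain-NumPy sketch below shows only the classical antithetic-variates idea it builds on, estimating E[exp(Z)] for Z ~ N(0, 1). The integrand and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
f = np.exp  # estimate E[f(Z)] for Z ~ N(0, 1); true value is exp(0.5)

# Plain Monte Carlo: 2n independent samples.
z = rng.standard_normal(2 * n)
plain = f(z).mean()

# Antithetic variates: pair each sample z with -z. Because f is monotone,
# f(z) and f(-z) are negatively correlated, which lowers the estimator's
# variance while keeping it unbiased.
z = rng.standard_normal(n)
antithetic = (0.5 * (f(z) + f(-z))).mean()

print(plain, antithetic, np.exp(0.5))
```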
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. NeurIPS, 2022
PyTorch reimplementation of the ICML 2017 paper "Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning".
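Averaged-DQN's central step is averaging the Q-values of the K most recently learned networks when forming targets, which damps target-approximation noise. The minimal PyTorch fragment below sketches just that averaging step; the tiny network, dummy batch, and K = 5 are placeholders, not the paper's configuration.

```python
import copy
from collections import deque
import torch
import torch.nn as nn

K = 5  # number of recent Q-networks to average (a tunable choice)
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
snapshots = deque(maxlen=K)  # frozen copies of the last K learned networks

def averaged_q(states):
    # Average Q-value estimates over the stored snapshots; the mean has
    # lower variance than any single network's estimate.
    with torch.no_grad():
        return torch.stack([net(states) for net in snapshots]).mean(dim=0)

# After each learning phase, store a frozen copy of the current network:
snapshots.append(copy.deepcopy(q_net).eval())
states = torch.randn(32, 4)                     # a dummy batch of states
targets = averaged_q(states).max(dim=1).values  # max over actions of the averaged Q
```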
Numerical integration of SDEs with variance reduction methods for Monte Carlo simulation
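As one standard variance-reduction device in this setting, the sketch below simulates geometric Brownian motion with an Euler-Maruyama scheme and uses the terminal value S_T, whose mean is known in closed form under the discrete scheme, as a control variate for a call payoff. All market parameters are illustrative, and the payoff is left undiscounted for brevity.

```python
import numpy as np

def euler_gbm(s0, r, sigma, T, n_steps, n_paths, rng):
    # Euler-Maruyama scheme for dS = r*S dt + sigma*S dW,
    # returning the terminal value S_T of every path.
    dt = T / n_steps
    s = np.full(n_paths, s0)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        s = s + r * s * dt + sigma * s * dw
    return s

rng = np.random.default_rng(1)
s0, r, sigma, T, strike, n_steps = 100.0, 0.05, 0.2, 1.0, 100.0, 100
s_T = euler_gbm(s0, r, sigma, T, n_steps, 50_000, rng)
payoff = np.maximum(s_T - strike, 0.0)  # undiscounted call payoff

# Control variate: S_T itself. Under the Euler scheme its mean is known
# exactly, E[S_T] = s0 * (1 + r*dt)^n_steps, so subtracting the centred
# control removes the variance the payoff shares with S_T.
dt = T / n_steps
control = s_T - s0 * (1.0 + r * dt) ** n_steps
beta = np.cov(payoff, control)[0, 1] / np.var(control, ddof=1)
print(payoff.mean(), (payoff - beta * control).mean())
```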
Implementation and brief comparison of several first-order and proximal gradient methods, including a comparison of their convergence rates.
Statistical toolkit to make time-series stationary
A financial options pricing and analysis library.