Stars
Combination of transformers and diffusion models for flexible all-in-one simulation-based inference
PyTorch per-step fault tolerance (actively under development)
Gaussian processes (GPs) are a good choice for function approximation as they are flexible and robust to over-fitting, and they provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs…
Code for "Deep Convolutional Networks as shallow Gaussian Processes"
Code to accompany the paper 'On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes'
Code for "Deep Convolutional Networks as shallow Gaussian Processes"
Deep GPs built on top of TensorFlow/Keras and GPflow
A primer on Bayesian Neural Networks. The aim of this reading list is to facilitate the entry of new researchers into the field of Bayesian Deep Learning, by providing an overview of key papers. Mo…
Bayesian Neural Field models for prediction in large-scale spatiotemporal datasets
Contains the code and data for reproducing the results of the paper "Validation and Comparison of Non-Stationary Cognitive Models: A Diffusion Model Application".
Pretty Pie Log: A powerful, thread-safe Python logging library featuring colorized output, structured logging, timezone-aware timestamps, rotating file logs, and function execution tracking with en…
FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
A modern cookiecutter template for Python projects that use uv for dependency management
A reactive notebook for Python — run reproducible experiments, execute as a script, deploy as an app, and version with git.
🚀 A practical guide and documentation hub for the DevOps toolchain
General Markov chain Monte Carlo diagnostics, plus Hamiltonian Monte Carlo-specific diagnostics, for Stan
Image stacking, astrometry, and photometry for MegaCam/WIRCam on the Canada-France-Hawaii Telescope
A JAX research toolkit for building, editing, and visualizing neural networks.
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
Bind any function written in another language to JAX with support for JVP/VJP/batching/jit compilation
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
You like pytorch? You like micrograd? You love tinygrad! ❤️
[ICML 2024 Spotlight] FiT: Flexible Vision Transformer for Diffusion Model