A model calibration library, currently under construction. Built for PyTorch models, this library lets users evaluate a model's uncertainty estimates (predicted probabilities) with popular calibration metrics, train model wrappers that improve calibration, and generate visualizations that show where and how a model is, or is not, well calibrated.
ECE and MCE - Obtaining Well Calibrated Probabilities Using Bayesian Binning (see the ECE/MCE sketch below)
SCE, ACE and TACE - Measuring Calibration in Deep Learning (see the SCE sketch below)
Temperature Scaling - On Calibration of Modern Neural Networks (see the temperature scaling sketch below)
Reliability Diagram and Confidence Histograms - On Calibration of Modern Neural Networks (see the plotting sketch below)
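For reference, ECE and MCE are usually estimated by binning the top-class confidences and comparing each bin's average confidence to its accuracy: ECE is the bin-weight-averaged gap, MCE the worst-bin gap. The sketch below uses equal-width bins; the function name and signature are illustrative, not this library's API.

```python
import torch

def expected_and_maximum_calibration_error(probs, labels, n_bins=10):
    """Equal-width binning estimate of ECE and MCE.

    probs:  (N, C) tensor of predicted class probabilities.
    labels: (N,) tensor of integer class labels.
    """
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()

    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(1)
    mce = torch.zeros(1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Samples whose top-class confidence falls in this bin.
        in_bin = (confidences > lo) & (confidences <= hi)
        prop = in_bin.float().mean()
        if prop > 0:
            gap = (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece += prop * gap          # ECE: bin-weighted average gap
            mce = torch.max(mce, gap)  # MCE: largest single-bin gap
    return ece.item(), mce.item()
```

Both values approach zero for a perfectly calibrated model.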
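SCE applies the same binning to every class probability rather than only the argmax, then averages over classes; ACE/TACE swap in equal-mass bins (TACE additionally thresholds away near-zero probabilities). A rough SCE sketch under the same assumptions, again with a hypothetical function name:

```python
import torch

def static_calibration_error(probs, labels, n_bins=10):
    """SCE: bin every class's predicted probability, not just the argmax."""
    n, num_classes = probs.shape
    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    sce = torch.zeros(())
    for c in range(num_classes):
        class_probs = probs[:, c]
        correct = labels.eq(c).float()  # 1 where class c is the true label
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (class_probs > lo) & (class_probs <= hi)
            prop = in_bin.float().mean()
            if prop > 0:
                gap = (correct[in_bin].mean() - class_probs[in_bin].mean()).abs()
                sce += prop * gap
    return (sce / num_classes).item()  # average per-class calibration gap
```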
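Temperature scaling fits a single scalar T > 0 on held-out logits so that softmax(logits / T) minimizes negative log-likelihood; it rescales confidences without changing the predicted class. A minimal sketch; the `TemperatureScaler` class and its `fit` method are illustrative, not this library's wrapper interface.

```python
import torch
import torch.nn as nn

class TemperatureScaler(nn.Module):
    """Learns one scalar temperature on a held-out set of logits."""

    def __init__(self):
        super().__init__()
        self.temperature = nn.Parameter(torch.ones(1))

    def forward(self, logits):
        return logits / self.temperature

    def fit(self, logits, labels, lr=0.01, max_iter=50):
        """logits: (N, C) raw model outputs; labels: (N,) integer classes."""
        nll = nn.CrossEntropyLoss()
        optimizer = torch.optim.LBFGS([self.temperature], lr=lr, max_iter=max_iter)

        def closure():
            optimizer.zero_grad()
            loss = nll(self.forward(logits), labels)
            loss.backward()
            return loss

        optimizer.step(closure)
        return self.temperature.item()
```

Usage would look like `scaler = TemperatureScaler(); scaler.fit(val_logits, val_labels); probs = torch.softmax(scaler(test_logits), dim=1)`.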
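A reliability diagram plots per-bin accuracy against confidence, so a well-calibrated model tracks the diagonal; a confidence histogram shows how samples are distributed across those bins. A matplotlib sketch of the diagram (the library's own plotting interface may differ):

```python
import torch
import matplotlib.pyplot as plt

def plot_reliability_diagram(probs, labels, n_bins=10):
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()

    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    centers, accs = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            centers.append(((lo + hi) / 2).item())
            accs.append(accuracies[in_bin].mean().item())

    # Bars show per-bin accuracy; the dashed diagonal is perfect calibration.
    plt.bar(centers, accs, width=1.0 / n_bins, edgecolor="black", alpha=0.7,
            label="Per-bin accuracy")
    plt.plot([0, 1], [0, 1], "k--", label="Perfect calibration")
    plt.xlabel("Confidence")
    plt.ylabel("Accuracy")
    plt.legend()
    plt.show()
```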