A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
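As a minimal illustration of the kind of dataset-level metric such toolkits compute, here is a sketch of statistical parity difference, assuming a pandas DataFrame with a binary group column and a binary label (column names and data are illustrative, not any toolkit's API):

```python
import pandas as pd

def statistical_parity_difference(df: pd.DataFrame,
                                  label_col: str = "label",
                                  group_col: str = "group",
                                  privileged: int = 1,
                                  favorable: int = 1) -> float:
    """P(favorable | unprivileged) - P(favorable | privileged).
    Zero indicates parity; negative values mean the unprivileged
    group receives the favorable outcome less often."""
    priv = df[df[group_col] == privileged]
    unpriv = df[df[group_col] != privileged]
    rate_priv = (priv[label_col] == favorable).mean()
    rate_unpriv = (unpriv[label_col] == favorable).mean()
    return rate_unpriv - rate_priv

# Toy example (illustrative values only)
df = pd.DataFrame({"group": [1, 1, 1, 0, 0, 0],
                   "label": [1, 1, 0, 1, 0, 0]})
print(statistical_parity_difference(df))  # about -0.333
```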
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes bias measurement and mitigation in word embedding models. Please feel free to open an issue if you have any questions, or a pull request if you want to contribute to the project!
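As a rough illustration of the kind of association measure such a framework standardizes, here is a simplified WEAT-style score over plain numpy vectors (the vectors and word sets below are placeholders, not WEFE's API):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attr_a, attr_b):
    """Mean cosine similarity of vector w to attribute set A minus set B."""
    return (np.mean([cosine(w, a) for a in attr_a])
            - np.mean([cosine(w, b) for b in attr_b]))

def weat_effect(target_x, target_y, attr_a, attr_b):
    """Simplified WEAT-style effect: difference of the mean associations
    of the two target sets with the two attribute sets."""
    return (np.mean([association(x, attr_a, attr_b) for x in target_x])
            - np.mean([association(y, attr_a, attr_b) for y in target_y]))

# Placeholder 4-dimensional "embeddings", purely for illustration
rng = np.random.default_rng(0)
career, family = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
male, female = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(weat_effect(male, female, career, family))
```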
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
NeurIPS 2019 Paper: RUBi: Reducing Unimodal Biases for Visual Question Answering
[ICML 2022] Channel Importance Matters in Few-shot Image Classification
Estimation and inference from generalized linear models using explicit and implicit methods for bias reduction
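Explicit bias reduction in this setting is typically a Firth-type adjustment that maximizes the penalized log-likelihood l(beta) + 0.5 * log|I(beta)| rather than the plain log-likelihood; the sketch below shows this for logistic regression with numpy/scipy and is not the API of any package listed here:

```python
import numpy as np
from scipy.optimize import minimize

def firth_logistic(X, y):
    """Firth-penalized logistic regression: maximize
    l(beta) + 0.5 * log|I(beta)|, where I is the Fisher information.
    This reduces the small-sample bias of the MLE and handles separation."""
    def neg_penalized_loglik(beta):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
        W = p * (1.0 - p)                    # diagonal of the weight matrix
        fisher = X.T @ (W[:, None] * X)      # Fisher information I(beta)
        _, logdet = np.linalg.slogdet(fisher)
        return -(loglik + 0.5 * logdet)
    beta0 = np.zeros(X.shape[1])
    return minimize(neg_penalized_loglik, beta0, method="BFGS").x

# Toy example: intercept plus one covariate (illustrative data only)
rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = (x + rng.normal(scale=0.5, size=30) > 0).astype(float)
X = np.column_stack([np.ones_like(x), x])
print(firth_logistic(X, y))
```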
Methods for M-estimation of statistical models
This repository contains the experiments conducted in the ICLR 2022 spotlight paper "On the Importance of Firth Bias Reduction in Few-Shot Classification".
Bias correction command-line tool for climate research, written in C++
Bias reduction in quasi-likelihood estimation
Bias detection toolkit: a Chrome extension, a Python package, and documentation of state-of-the-art research papers.
Tensorflow implementation of Learning Not to Learn (CVPR 2019)
This repository contains the Firth bias reduction experiments on the few-shot distribution calibration method conducted in the ICLR 2022 spotlight paper "On the Importance of Firth Bias Reduction in Few-Shot Classification".
Sampling algorithms and machine learning models to reduce bias and predict credit risk.
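A common sampling step in this kind of pipeline is to rebalance the label distribution before fitting a model; below is a minimal sketch of random minority oversampling with pandas (column names and data are illustrative, not this repository's code):

```python
import pandas as pd

def oversample_minority(df: pd.DataFrame, label_col: str = "default",
                        random_state: int = 0) -> pd.DataFrame:
    """Randomly duplicate minority-class rows until every class matches the
    majority class size, reducing a classifier's bias toward the majority."""
    counts = df[label_col].value_counts()
    majority_label = counts.idxmax()
    parts = [df[df[label_col] == majority_label]]
    for label, n in counts.items():
        if label == majority_label:
            continue
        minority = df[df[label_col] == label]
        parts.append(minority.sample(counts.max(), replace=True,
                                     random_state=random_state))
    return pd.concat(parts).sample(frac=1, random_state=random_state)

# Toy example: 5 non-defaults vs 2 defaults (illustrative)
df = pd.DataFrame({"income": [40, 55, 60, 30, 45, 25, 35],
                   "default": [0, 0, 0, 0, 0, 1, 1]})
print(oversample_minority(df)["default"].value_counts())
```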
PyTorch implementation of "Explaining text classifiers with counterfactual representations" (Lemberger & Saillenfest, 2024), ECAI 2024 (27th European Conference on Artificial Intelligence)
Apply empirical bias-reduction methods to fit a variety of latent variable models
A small, simple prototype that alerts users to the bias of a news source.
Nexus.ai is a secure, vendor-neutral AI orchestration engine. It lets multiple LLMs and web search debate a question, then ranks and reconciles the outputs to reduce bias and surface the best-supported answer (with citations and media). All I/O is encrypted (AES-256), every step is logged for auditability, and the project ships without API keys by default.
A method to preprocess the training data, producing an adjusted dataset that is independent of the group variable with minimum information loss.
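One widely used preprocessing scheme along these lines is instance reweighing, which assigns each example the weight P(G=g)P(Y=y)/P(G=g, Y=y) so that the group variable and the label are independent under the weighted distribution; here is a minimal sketch, not necessarily this repository's exact method:

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Weight w(g, y) = P(G=g) * P(Y=y) / P(G=g, Y=y), making group and
    label statistically independent under the weighted distribution."""
    n = len(df)
    p_group = df[group_col].value_counts() / n
    p_label = df[label_col].value_counts() / n
    p_joint = df.groupby([group_col, label_col]).size() / n
    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]
    return df.apply(weight, axis=1)

# Toy example (illustrative): favorable outcomes are rarer for group 0
df = pd.DataFrame({"group": [0, 0, 0, 1, 1, 1],
                   "label": [0, 0, 1, 0, 1, 1]})
print(reweigh(df, "group", "label"))
```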
Unbiased toxicity detection from comments