A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
XAI - An eXplainability toolbox for machine learning
Bias Auditing & Fair ML Toolkit
Can we use explanations to improve hate speech models? Our paper, accepted at AAAI 2021, explores that question.
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
Automatic synthesis of RCTs
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Identify bias and measure fairness of your data
Bluetooth Impersonation AttackS (BIAS) [CVE 2020-10135]
A reading list and fortnightly discussion group designed to provoke discussion about ethical applications of, and processes for, data science.
A Python toolkit for analyzing machine learning models and datasets.
NeurIPS 2019 Paper: RUBi: Reducing Unimodal Biases for Visual Question Answering
Symmetric evaluation set based on the FEVER (fact verification) dataset
Compass-aligned Distributional Embeddings. Align embeddings from different corpora
A fairness library in PyTorch.
[CCKS 2021] On Robustness and Bias Analysis of BERT-based Relation Extraction
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023 conference.
Cocktail: A Comprehensive Information Retrieval Benchmark with LLM-Generated Documents Integration
Bias correction method using quantile mapping
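Quantile mapping, as named in the entry above, corrects a biased distribution by mapping each value's quantile in the biased (model) distribution onto the reference (observed) distribution. A minimal empirical sketch follows; the function name and interface are illustrative assumptions, not the API of any repository listed here.

```python
import numpy as np

def quantile_map(model, obs, target):
    """Empirical quantile mapping (illustrative sketch, not a library API).

    model  : sample from the biased distribution
    obs    : sample from the reference (observed) distribution
    target : values drawn from the model distribution to be corrected
    """
    model = np.sort(np.asarray(model, dtype=float))
    obs = np.sort(np.asarray(obs, dtype=float))
    # Empirical quantile of each target value within the model distribution.
    q = np.searchsorted(model, target, side="right") / len(model)
    q = np.clip(q, 0.0, 1.0)
    # Map those quantiles back through the observed distribution.
    return np.quantile(obs, q)
```

For example, if the model sample is uniformly shifted by +10 relative to the observations, corrected values land back near the observed range.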