Flexible tool for bias detection, visualization, and mitigation
Updated Aug 29, 2022 - R
R package for computing and visualizing fair ML metrics
Multi-Calibration & Multi-Accuracy Boosting for R
R package ffscb: fast n' fair simultaneous confidence bands for functional parameters. The statistical theory and methodology are described in our paper https://arxiv.org/abs/1910.00131; the package's functions are documented at www.dliebl.com/ffscb/.
This project implements the paper "Robustness Implies Fairness in Causal Algorithmic Recourse" in R.
Fair data adaptation using causal graphical models with R.
Accompanying code for the paper "Anti-Discrimination Laws, AI, and Gender Bias: A Case Study in Non-Mortgage Fintech Lending".
Replication Package for Zezulka and Genin (2024). From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment.
This repository computes group fairness metrics for a machine learning classifier trained on the German Credit Scoring dataset: demographic parity, equal opportunity, and equalized odds for the sensitive attribute gender.
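The three metrics above can be sketched in a few lines of base R. This is a minimal illustration, not the repository's actual code; the column names (`pred`, `label`, `gender`) and the toy data are hypothetical stand-ins for German Credit predictions.

```r
# Demographic parity: selection rate P(pred = 1 | group) per group.
demographic_parity <- function(pred, group) {
  tapply(pred, group, mean)
}

# Equal opportunity: true positive rate P(pred = 1 | label = 1, group) per group.
equal_opportunity <- function(pred, label, group) {
  pos <- label == 1
  tapply(pred[pos], group[pos], mean)
}

# Equalized odds: TPR and FPR per group; the criterion asks both
# rates to match across groups.
equalized_odds <- function(pred, label, group) {
  neg <- label == 0
  list(
    tpr = equal_opportunity(pred, label, group),
    fpr = tapply(pred[neg], group[neg], mean)
  )
}

# Toy data standing in for classifier output on German Credit.
pred   <- c(1, 0, 1, 1, 0, 1, 0, 0)
label  <- c(1, 0, 1, 0, 0, 1, 1, 0)
gender <- c("f", "f", "f", "f", "m", "m", "m", "m")

demographic_parity(pred, gender)          # selection rate per group: f 0.75, m 0.25
equalized_odds(pred, label, gender)       # per-group TPR and FPR
```

A classifier satisfies each criterion when the corresponding per-group rates are (approximately) equal; in practice one reports the gap or ratio between groups.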
Automatic Location of Disparities (ALD) for algorithmic audits.