BiasGuard 🔍

An AI-powered bias detection tool for datasets and ML models, providing fairness metrics, natural language reports, and explainability tools.


📌 Overview

BiasGuard helps data scientists, developers, and product teams detect and understand bias in machine learning models and datasets. It evaluates sensitive features like gender or race and generates both technical fairness metrics and easy-to-understand natural language summaries, making AI bias transparent for all stakeholders.

BiasGuard was designed to combine Python programming, ML techniques, and explainable AI principles, giving users actionable insights while promoting responsible AI development.


🚀 Features

  • Dataset analysis (rows, columns, missing values, class distribution)
  • Fairness metrics: Demographic Parity Difference and Equalized Odds Difference (sketched below)
  • Natural language bias reports
  • Works with CSV datasets
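
For readers unfamiliar with these metrics, here is a minimal sketch of the kind of analysis described above, built on the Fairlearn and pandas APIs from the tech stack below. It is not BiasGuard's actual code; the file name and the y_true, y_pred, and gender columns are hypothetical placeholders.

```python
# A minimal sketch of the kind of checks BiasGuard runs -- not the
# project's actual implementation. The file name and the column names
# (y_true, y_pred, gender) are hypothetical placeholders.
import pandas as pd
from fairlearn.metrics import (
    demographic_parity_difference,
    equalized_odds_difference,
)

df = pd.read_csv("data.csv")

# Dataset analysis: rows, columns, missing values, class distribution
print(f"Rows: {len(df)}, Columns: {len(df.columns)}")
print(df.isna().sum())
print(df["y_true"].value_counts(normalize=True))

# Fairness metrics computed across the sensitive feature
dpd = demographic_parity_difference(
    df["y_true"], df["y_pred"], sensitive_features=df["gender"]
)
eod = equalized_odds_difference(
    df["y_true"], df["y_pred"], sensitive_features=df["gender"]
)

# Natural-language report: translate raw metrics into plain English.
# The 0.1 threshold is illustrative, not a standard cutoff.
verdict = "notable" if max(dpd, eod) > 0.1 else "minimal"
print(
    f"Demographic parity difference: {dpd:.3f}; "
    f"equalized odds difference: {eod:.3f}. "
    f"This suggests {verdict} disparity across gender groups "
    f"(0 means perfect parity)."
)
```

Computing the metrics directly from stored predictions keeps the sketch model-agnostic: any classifier's outputs can be audited the same way.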

🛠 Tech Stack

  • Python – core programming language
  • Scikit-learn – ML models and evaluation
  • Fairlearn – fairness metrics
  • Pandas / NumPy – data handling and manipulation

📖 Background & Learnings

BiasGuard was developed to explore ethical AI and fairness in ML models. Key takeaways from the project:

  • Applying fairness metrics to real-world datasets
  • Communicating complex technical results in clear, actionable language
  • Designing tools that promote responsible AI usage

This project also demonstrates developer advocacy skills by bridging the gap between technical analysis and clear communication for non-technical audiences.


🔮 Future Work

  • Add a visualization dashboard with Streamlit
  • Extend bias detection to include LLM outputs
  • Build a BiasGuard AI agent that can autonomously evaluate datasets and summarize findings

📩 Contact

Created by Arush Kachru


🛠 Installation

```bash
git clone https://github.com/ArushKachru/BiasGuard.git
cd BiasGuard
python3 -m venv venv
source venv/bin/activate   # macOS/Linux
# venv\Scripts\activate    # Windows
pip install -r requirements.txt
```
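
Once the environment is active, a quick import check confirms the setup worked (a sketch; it assumes requirements.txt includes the libraries listed under Tech Stack):

```python
# Sanity check: the core BiasGuard dependencies should import cleanly.
import fairlearn
import numpy
import pandas
import sklearn

print("Core dependencies available")
```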

