AI-powered bias detection tool for datasets and ML models, providing fairness metrics, natural language reports, and explainability features.
BiasGuard helps data scientists, developers, and product teams detect and understand bias in machine learning models and datasets. It evaluates model behavior across sensitive features such as gender or race and generates both technical fairness metrics and easy-to-understand natural language summaries, making AI bias transparent to all stakeholders.
BiasGuard combines Python programming, ML techniques, and explainable AI principles to give users actionable insights while promoting responsible AI development.
- Dataset analysis (rows, columns, missing values, class distribution)
- Fairness metrics: Demographic Parity Difference and Equalized Odds Difference (see the sketch after this list)
- Natural language bias reports
- Works with CSV datasets
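For illustration, here is a minimal sketch of how these two metrics can be computed with Fairlearn and turned into a plain-language summary. The file name, column names, and model are placeholders, not BiasGuard's actual interface.

```python
# Minimal sketch: computing Demographic Parity Difference and Equalized Odds
# Difference with Fairlearn. "candidates.csv", the "hired"/"gender" columns,
# and the model are illustrative placeholders, not BiasGuard's actual API.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import (
    demographic_parity_difference,
    equalized_odds_difference,
)

df = pd.read_csv("candidates.csv")            # hypothetical CSV dataset
X = df.drop(columns=["hired", "gender"])      # assumes remaining columns are numeric
y_true = df["hired"]
sensitive = df["gender"]

model = LogisticRegression(max_iter=1000).fit(X, y_true)
y_pred = model.predict(X)

dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
eod = equalized_odds_difference(y_true, y_pred, sensitive_features=sensitive)

# A plain-language summary in the spirit of BiasGuard's natural language reports
print(f"Selection rates differ by {dpd:.2%} across gender groups (demographic parity).")
print(f"Error rates differ by up to {eod:.2%} across gender groups (equalized odds).")
```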
- Python – core programming language
- Scikit-learn – ML models and evaluation
- Fairlearn – fairness metrics
- Pandas / NumPy – data handling and manipulation (see the dataset-analysis sketch after this list)
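As a quick illustration of the dataset-analysis step, a few lines of Pandas cover rows, columns, missing values, and class distribution. The file name and target column are assumptions, not BiasGuard's fixed interface.

```python
# Sketch of the dataset-analysis step: rows, columns, missing values, and
# class distribution. "candidates.csv" and "hired" are assumed names.
import pandas as pd

df = pd.read_csv("candidates.csv")

print(f"Rows: {len(df)}, Columns: {df.shape[1]}")
print("Missing values per column:")
print(df.isna().sum())
print("Class distribution of the target column:")
print(df["hired"].value_counts(normalize=True))
```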
BiasGuard was developed to explore ethical AI and fairness in ML models. Key takeaways from the project:
- Applying fairness metrics to real-world datasets
- Communicating complex technical results in clear, actionable language
- Designing tools that promote responsible AI usage
This project also demonstrates developer advocacy skills by bridging the gap between technical analysis and clear communication for non-technical audiences.
- Add a visualization dashboard with Streamlit (a speculative sketch follows this list)
- Extend bias detection to include LLM outputs
- Build a BiasGuard AI agent that can autonomously evaluate datasets and summarize findings
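As a rough idea of what the planned dashboard could look like, here is a speculative Streamlit sketch. It is not part of the current codebase; the upload flow and column handling are assumptions.

```python
# Speculative sketch of a BiasGuard dashboard in Streamlit (not part of the
# current codebase). Upload flow and column handling are illustrative only.
import pandas as pd
import streamlit as st

st.title("BiasGuard: Dataset Overview")

uploaded = st.file_uploader("Upload a CSV dataset", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.metric("Rows", len(df))
    st.metric("Columns", df.shape[1])
    st.subheader("Missing values per column")
    st.dataframe(df.isna().sum())
    st.subheader("Class distribution")
    target = st.selectbox("Target column", list(df.columns))
    st.bar_chart(df[target].value_counts())
```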
Created by Arush Kachru
- GitHub
- Email: arushkachru1@gmail.com
```bash
git clone https://github.com/ArushKachru/BiasGuard.git
cd BiasGuard
python3 -m venv venv
source venv/bin/activate   # macOS/Linux
# or: venv\Scripts\activate on Windows
pip install -r requirements.txt
```