This project simulates bias in datasets and evaluates the behavior of machine learning models trained on them. It then explores methods for bias mitigation, assessing how effectively each improves the fairness of model predictions. By studying the impact of biased data on machine learning systems, the project offers a practical look at the challenges bias poses and how to address them.
- Simulate Bias: Introduce controlled biases into datasets to study how they affect machine learning model training and performance.
- Train Models: Train machine learning models on these biased datasets and analyze their predictions.
- Assess Impact: Understand the influence of biased data on model behavior, performance, and the fairness of predictions.
- Mitigate Bias: Apply bias mitigation techniques to the trained models and evaluate their effectiveness in improving fairness.
- Fairness Evaluation: Use fairness metrics (e.g., demographic parity, equalized odds) to assess how different interventions influence the fairness of the model's predictions.
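As a minimal sketch of the bias-simulation step, the snippet below builds a synthetic two-group dataset in which the positive label rate is deliberately shifted down for one group. The function name `make_biased_dataset` and its `bias` parameter are illustrative, not part of the project's actual code.

```python
import random

def make_biased_dataset(n=10000, bias=0.3, seed=0):
    """Hypothetical helper: generate (group, feature, label) rows where
    the positive label rate is shifted down for group 1 by `bias`."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        group = rng.randint(0, 1)   # protected attribute: 0 or 1
        skill = rng.random()        # task-relevant feature
        # Controlled bias: group 1's chance of a positive label is reduced.
        p_positive = skill - (bias if group == 1 else 0.0)
        label = 1 if rng.random() < p_positive else 0
        rows.append((group, skill, label))
    return rows

data = make_biased_dataset()

def positive_rate(g):
    members = [y for grp, _, y in data if grp == g]
    return sum(members) / len(members)

print(f"positive rate group 0: {positive_rate(0):.2f}")
print(f"positive rate group 1: {positive_rate(1):.2f}")
```

A model trained on such data can learn the group-label correlation even when the protected attribute is not an input feature, since correlated features can act as proxies.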
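For the fairness-evaluation step, one common metric is the demographic parity difference: the gap in positive-prediction rates between groups. A sketch in plain Python (the function name is illustrative; libraries such as Fairlearn or AIF360 provide production versions):

```python
def demographic_parity_difference(groups, preds):
    """Absolute gap in positive-prediction rates between group 0 and group 1.
    0.0 means parity; larger values mean one group is favored."""
    def rate(g):
        selected = [p for grp, p in zip(groups, preds) if grp == g]
        return sum(selected) / max(1, len(selected))
    return abs(rate(0) - rate(1))

groups = [0, 0, 0, 1, 1, 1]
preds  = [1, 1, 0, 1, 0, 0]
# Group 0 is predicted positive 2/3 of the time, group 1 only 1/3.
gap = demographic_parity_difference(groups, preds)
print(gap)
```

Tracking this gap before and after each intervention gives a concrete measure of whether a mitigation technique actually moved the model toward parity.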
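For the mitigation step, one well-known preprocessing technique is reweighing (Kamiran and Calders): each training instance gets a weight that makes the protected attribute and the label statistically independent under the weighted distribution. A self-contained sketch, not necessarily the method this project uses:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), so that under the weights
    group membership carries no information about the label."""
    n = len(labels)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Balanced data: every cell already independent, so all weights are 1.0.
print(reweighing_weights([0, 0, 1, 1], [1, 0, 1, 0]))
# Skewed data: over-represented cells are down-weighted, rare ones up-weighted.
print(reweighing_weights([0, 0, 0, 1], [1, 1, 0, 0]))
```

The resulting weights can be passed as `sample_weight` to most training APIs (e.g., scikit-learn estimators), so the model itself needs no modification.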