Merge branch 'holisticai-v1' of github.com:holistic-ai/holisticai into feature/robustness-metrics-binary-classification

Showing 30 changed files with 702 additions and 155 deletions.
@@ -0,0 +1,30 @@
Preprocessing,Correlation Remover,X,X,X,,
Preprocessing,Disparate Impact Remover,X,X,X,X,
Preprocessing,Learning Fair Representation,X,,,,
Preprocessing,Reweighing,X,X,,,
Preprocessing,Fair Clustering,,,,X,
Inprocessing,Adversarial Debiasing,X,,,,
Inprocessing,Exponentiated Gradient,X,,X,,
Inprocessing,Fair K Center Clustering,,,,X,
Inprocessing,Fair K Median Clustering,,,,X,
Inprocessing,Fair Scoring Classifier,,X,,,
Inprocessing,Fairlet Clustering,,,,X,
Inprocessing,Grid Search,X,,X,,
Inprocessing,Debiasing Learning,,,,,X
Inprocessing,Blind Spot Aware,,,,,X
Inprocessing,Popularity Propensity,,,,,X
Inprocessing,Meta Fair Classifier,X,,,,
Inprocessing,Prejudice Remover,X,,,,
Inprocessing,Two Sided Fairness,,,,,X
Inprocessing,Variational Fair Clustering,,,,X,
Postprocessing,Debiasing Exposure,,,,,X
Postprocessing,Fair Top-K,,,,,X
Postprocessing,LP Debiaser,X,X,,,
Postprocessing,MCMF Clustering,,,,X,
Postprocessing,ML Debiaser,X,X,,,
Postprocessing,Plugin Estimator and Recalibration,,,X,,
Postprocessing,Wasserstein Barycenters,,,X,,
Postprocessing,Calibrated Equalized Odds,X,,,,
Postprocessing,Equalized Odds,X,,,,
Postprocessing,Reject Option Classification,X,,,,
Postprocessing,Disparate Impact Remover for RS,,,,,X
@@ -0,0 +1,24 @@
In-processing Methods
=====================

In-processing techniques modify the learning algorithm itself to reduce bias during the model training phase. They incorporate fairness constraints or objectives directly into the training process so that the model learns to make unbiased decisions. Examples include adversarial debiasing, constrained optimization, and fairness-aware regularization. By addressing bias during training, these methods aim to produce models that are intrinsically fair and less likely to produce biased outcomes.
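
A typical in-processing workflow fits a mitigator on training data together with protected-group membership and then predicts as usual. The sketch below is illustrative only: the class name, import path, constructor argument, and ``fit`` signature are assumptions to be checked against the package's API reference.

.. code-block:: python

    # Illustrative sketch: names and signatures are assumptions, not the
    # documented API; consult the holisticai API reference before use.
    import numpy as np
    from holisticai.bias.mitigation import ExponentiatedGradientReduction

    X = np.random.rand(100, 5)                            # training features
    y = np.random.randint(0, 2, 100)                      # binary labels
    group_a = np.random.randint(0, 2, 100).astype(bool)   # protected group mask
    group_b = ~group_a                                    # complementary group

    mitigator = ExponentiatedGradientReduction(constraints="DemographicParity")
    mitigator.fit(X, y, group_a=group_a, group_b=group_b)
    y_pred = mitigator.predict(X)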
Here are the in-processing methods implemented in the Holistic AI package:

.. toctree::
    :maxdepth: 1

    inprocessing/bc_exp_grad_grid_search_exponentiated_gradient_reduction.rst
    inprocessing/bc_exp_grad_grid_search_grid_search.rst
    inprocessing/bc_meta_fair_classifier_rho_fair.rst
    inprocessing/bc_prejudice_remover_prejudice_remover_regularizer.rst
    inprocessing/c_fair_k_center_fair_k_center.rst
    inprocessing/c_fair_k_median_fair_k_median.rst
    inprocessing/c_fairlet_clustering_fairlet_decomposition.rst
    inprocessing/c_variational_fair_clustering_variational_fair_clustering.rst
    inprocessing/mc_fair_scoring_classifier_fairscoringsystems.rst
    inprocessing/rs_blind_spot_aware_blind_spot_aware_matrix_factorization.rst
    inprocessing/rs_popularity_propensity_propensity_scored_recommendations.rst
    inprocessing/rs_two_sided_fairness_fairrec_two_sided_fairness.rst
    inprocessing/bc_adversarial_debiasing_adversarial_debiasing.rst
    inprocessing/rs_popularity_propensity_matrix_factorization.rst

45 changes: 45 additions & 0 deletions
...ed/bias/methods/inprocessing/bc_adversarial_debiasing_adversarial_debiasing.rst

@@ -0,0 +1,45 @@
Adversarial Debiasing
---------------------

.. note::
    **Learning tasks:** Binary classification.

Introduction
~~~~~~~~~~~~
Adversarial Debiasing mitigates unwanted biases in machine learning models by leveraging adversarial learning. The method aims to ensure that a model's predictions are not influenced by protected variables, such as gender or race, which are considered sensitive and should not affect the decision-making process. This approach addresses ethical and fairness concerns in machine learning, ensuring that models do not perpetuate biases present in the training data.

Description
~~~~~~~~~~~

- **Problem definition**

  Machine learning models often inherit biases from the training data, leading to unfair predictions that can discriminate against certain demographic groups. The goal of Adversarial Debiasing is to train a model that accurately predicts an output variable :math:`Y` from an input variable :math:`X`, while ensuring that the predictions are unbiased with respect to a protected variable :math:`Z`. The protected variable :math:`Z` could be any sensitive attribute such as gender, race, or zip code.

- **Main features**

  The method incorporates an adversarial network into the training process. This adversary is trained to predict the protected variable :math:`Z` from the model's predictions :math:`\hat{Y}`. The main features of this method include:

  - Simultaneous training of a predictor and an adversary.
  - The predictor aims to maximize the accuracy of predicting :math:`Y`.
  - The adversary aims to predict :math:`Z` from :math:`\hat{Y}`, while the predictor is trained to limit the adversary's ability to do so.
  - The method accommodates several definitions of fairness, such as Demographic Parity and Equality of Odds.
  - It is flexible and can be used with any gradient-based learning model, for both regression and classification tasks.

- **Step-by-step description of the approach** (a sketch of one training step follows the list)

  1. **Predictor Training**: The primary model, referred to as the predictor, is trained to predict the output variable :math:`Y` from the input variable :math:`X`. The predictor's objective is to minimize the prediction loss :math:`L_P(\hat{y}, y)`, where :math:`\hat{y}` is the predicted value and :math:`y` is the true value.

  2. **Adversary Training**: An adversarial network is introduced, which takes the predictor's output :math:`\hat{Y}` as input and attempts to predict the protected variable :math:`Z`. The adversary's objective is to minimize its prediction loss :math:`L_A(\hat{z}, z)`, where :math:`\hat{z}` is the adversary's predicted value of :math:`Z` and :math:`z` is the true value.

  3. **Adversarial Objective**: The adversary's loss is incorporated into the predictor's training: the predictor is trained not only to minimize its own loss :math:`L_P` but also to maximize the adversary's loss :math:`L_A`. This is achieved by updating the predictor's weights in a way that reduces the information about :math:`Z` contained in :math:`\hat{Y}`.

  4. **Fairness Constraints**: Depending on the desired fairness definition, the adversary's input varies. For Demographic Parity, the adversary receives only :math:`\hat{Y}`. For Equality of Odds, the adversary also receives the true label :math:`Y`, ensuring that any information about :math:`Z` in :math:`\hat{Y}` is limited to what is already contained in :math:`Y`.

  5. **Training Process**: Training alternates updates between the predictor and the adversary. The predictor is updated to improve its prediction accuracy while deceiving the adversary; the adversary is updated to improve its ability to predict :math:`Z` from :math:`\hat{Y}`. This continues until a balance is reached where the predictor makes accurate predictions of :math:`Y` without revealing information about :math:`Z`.

  6. **Evaluation**: The trained model is evaluated to ensure it meets the desired fairness criteria, using metrics such as the False Positive Rate (FPR) and False Negative Rate (FNR) per group to assess whether the predictions are unbiased with respect to :math:`Z`.
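
To make the alternating updates in steps 1-5 concrete, here is a minimal NumPy sketch of one training step for a logistic predictor and a single-parameter logistic adversary, following the projection-based update of Zhang et al. (2018). The tiny adversary architecture and all names are illustrative, not the package implementation.

.. code-block:: python

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def adversarial_debiasing_step(w, u, X, y, z, lr=0.01, alpha=1.0):
        """One alternating update of predictor weights w and adversary weights u.

        Predictor: y_hat = sigmoid(X @ w)              predicts the label y.
        Adversary: z_hat = sigmoid(u[0]*y_hat + u[1])  predicts the protected z.
        """
        n = len(y)
        y_hat = sigmoid(X @ w)
        z_hat = sigmoid(u[0] * y_hat + u[1])

        # Gradient of the predictor loss L_P (cross-entropy on y) w.r.t. w.
        grad_lp = X.T @ (y_hat - y) / n

        # Gradient of the adversary loss L_A (cross-entropy on z) w.r.t. w,
        # back-propagated through y_hat.
        grad_la = X.T @ ((z_hat - z) * u[0] * y_hat * (1.0 - y_hat)) / n

        # Zhang et al. (2018): drop the component of grad_lp that helps the
        # adversary, then additionally push against the adversary's gradient.
        unit = grad_la / (np.linalg.norm(grad_la) + 1e-12)
        w = w - lr * (grad_lp - (grad_lp @ unit) * unit - alpha * grad_la)

        # Adversary update: plain gradient descent on L_A w.r.t. u.
        grad_u = np.array([np.mean((z_hat - z) * y_hat), np.mean(z_hat - z)])
        u = u - lr * grad_u
        return w, u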

References
~~~~~~~~~~
1. B. H. Zhang, B. Lemoine, and M. Mitchell, "Mitigating Unwanted Biases with Adversarial Learning," AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2018.

27 changes: 27 additions & 0 deletions
...thods/inprocessing/bc_exp_grad_grid_search_exponentiated_gradient_reduction.rst

@@ -0,0 +1,27 @@
Exponentiated Gradient Reduction Method
---------------------------------------

.. note::
    **Learning tasks:** Binary classification, regression.

Introduction
~~~~~~~~~~~~
The Exponentiated Gradient (EG) reduction method is a technique for achieving fairness in binary classification. It optimizes the tradeoff between accuracy and fairness by incorporating fairness constraints into the learning process, and is particularly useful for ensuring that classifiers do not exhibit bias against protected attributes such as race or gender.

Description
~~~~~~~~~~~
The EG reduction method addresses fair classification by transforming it into a cost-sensitive classification problem. The main characteristics of this approach include:

- **Problem Definition:** The goal is to minimize classification error while satisfying fairness constraints, such as demographic parity or equalized odds.
- **Main Characteristics:** The method uses a Lagrangian formulation to incorporate fairness constraints into the objective function. It iteratively adjusts the costs associated with different training examples to achieve the desired fairness.
- **Step-by-Step Description:**

  1. **Formulate the Lagrangian:** Introduce a Lagrange multiplier for each fairness constraint and form the Lagrangian function.
  2. **Saddle Point Problem:** Rewrite the problem as a saddle point problem: find a pair of solutions that minimizes the Lagrangian with respect to the classifier and maximizes it with respect to the Lagrange multipliers.
  3. **Iterative Algorithm:** Find the saddle point with an iterative algorithm that alternates between updating the classifier and the Lagrange multipliers.
  4. **Exponentiated Gradient Updates:** Update the Lagrange multipliers with the exponentiated gradient algorithm, which keeps them non-negative with bounded total mass (see the sketch below).
  5. **Best Response Calculation:** At each iteration, compute the best response of the classifier to the current multipliers, and of the multipliers to the classifier.
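
As a concrete illustration of step 4, the sketch below shows a multiplier update in the style of Agarwal et al. (2018), which keeps the multipliers non-negative with total mass bounded by :math:`B`; the function and variable names are illustrative.

.. code-block:: python

    import numpy as np

    def eg_multiplier_update(theta, gamma, eta, B):
        """Exponentiated-gradient step on the Lagrange multipliers (sketch).

        theta : running log-weights, one entry per fairness constraint
        gamma : current constraint violations (positive means violated)
        eta   : step size
        B     : bound on the total multiplier mass
        """
        theta = theta + eta * gamma       # multiplicative-weights step in log space
        w = np.exp(theta)
        lam = B * w / (1.0 + w.sum())     # non-negative, total mass below B
        return theta, lam

    # Example: two constraints, the first one currently violated.
    theta, lam = eg_multiplier_update(np.zeros(2), np.array([0.05, -0.01]),
                                      eta=2.0, B=100.0)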

References
~~~~~~~~~~
1. Agarwal, A., Beygelzimer, A., Dudik, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80:60-69.

21 changes: 21 additions & 0 deletions
...tting_started/bias/methods/inprocessing/bc_exp_grad_grid_search_grid_search.rst

@@ -0,0 +1,21 @@
Grid Search
-----------

.. note::
    **Learning tasks:** Binary classification, regression.

Introduction
~~~~~~~~~~~~
Grid search selects a deterministic classifier from a set of candidate classifiers obtained from the saddle point of a Lagrangian function. It is practical when the number of constraints is small, as in demographic parity or equalized odds with a binary protected attribute. The goal is to find a classifier that balances the tradeoff between accuracy and fairness.

Description
~~~~~~~~~~~
Grid search involves the following steps (a schematic version follows the list):

1. **Candidate Classifiers**: A set of candidate classifiers is obtained from the saddle point :math:`(Q^\dagger, \lambda^\dagger)`. Since :math:`Q^\dagger` is a minimizer of :math:`L(Q, \lambda^\dagger)` and :math:`L` is linear in :math:`Q`, the distribution :math:`Q^\dagger` puts non-zero mass only on classifiers that are the Q-player's best responses to :math:`\lambda^\dagger`.
2. **Best Response Calculation**: If :math:`\lambda^\dagger` is known, one such best response can be retrieved via a reduction to cost-sensitive learning.
3. **Grid Search**: When the number of constraints is small, a grid of values for :math:`\lambda` is considered. For each value the best response is calculated, and the classifier with the desired tradeoff between accuracy and fairness is selected.
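
A schematic version of the procedure is sketched below. The ``best_response``, ``error``, and ``violation`` callables stand in for the cost-sensitive learner and the evaluation metrics described in the paper; they are assumptions of this sketch, not a documented interface.

.. code-block:: python

    def grid_search(lambda_grid, best_response, error, violation, eps):
        """Select a deterministic classifier from a grid of multipliers (sketch).

        lambda_grid      : candidate Lagrange-multiplier vectors (step 3)
        best_response    : trains the cost-sensitive best response to a given
                           multiplier vector (step 2)
        error, violation : evaluate a trained classifier's error and its
                           fairness-constraint violation
        eps              : largest acceptable fairness violation
        """
        candidates = [best_response(lam) for lam in lambda_grid]
        feasible = [h for h in candidates if violation(h) <= eps]
        # Among feasible classifiers (or all candidates, if none is feasible),
        # return the most accurate one.
        return min(feasible or candidates, key=error)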

References
~~~~~~~~~~
1. Agarwal, A., Beygelzimer, A., Dudik, M., Langford, J., & Wallach, H. (2018). A reductions approach to fair classification. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80:60-69.

26 changes: 26 additions & 0 deletions
.../getting_started/bias/methods/inprocessing/bc_meta_fair_classifier_rho_fair.rst

@@ -0,0 +1,26 @@
ρ-Fair Method
-------------

.. note::
    **Learning tasks:** Binary classification.

Introduction
~~~~~~~~~~~~
The ρ-Fair method addresses fairness in machine learning classification tasks. It aims to ensure that a classifier's performance is equitable across groups defined by sensitive attributes, providing a structured way to mitigate bias in predictive models.

Description
~~~~~~~~~~~
The ρ-Fair method reduces the fairness problem to a series of Group-Fair problems, which are easier to solve. The main characteristics of the method include:

- **Problem Definition**: The goal is to find a classifier :math:`f` that minimizes prediction error while satisfying fairness constraints defined by a parameter :math:`\tau \in [0,1]`.
- **Main Characteristics**: The method is a meta-algorithm that iteratively solves Group-Fair problems to approximate a solution to the ρ-Fair problem.
- **Step-by-Step Description** (sketched in code below):

  1. **Estimate Distribution**: Compute an estimated distribution :math:`\hat{\mathcal{D}}` from the given samples.
  2. **Iterative Group-Fair Solutions**: For each iteration, define an interval :math:`[a_i, b_i]` based on the fairness parameter :math:`\tau` and error parameter :math:`\epsilon`.
  3. **Compute Classifiers**: Solve the Group-Fair problem for each interval to obtain a set of classifiers.
  4. **Select Optimal Classifier**: Choose the classifier that minimizes the prediction error.
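
The outer loop of the meta-algorithm can be sketched as follows. The interval construction and the Group-Fair solver are abstracted behind callables, since their details come from Celis et al. (2019); everything named here is illustrative.

.. code-block:: python

    def rho_fair_meta(samples, tau, eps, make_intervals, solve_group_fair, error):
        """Sketch of the rho-Fair meta-algorithm described above.

        make_intervals(tau, eps)        : yields the intervals [a_i, b_i] (step 2)
        solve_group_fair(samples, a, b) : classifier for one interval     (step 3)
        error(classifier)               : empirical prediction error
        """
        classifiers = [solve_group_fair(samples, a, b)
                       for a, b in make_intervals(tau, eps)]
        return min(classifiers, key=error)   # step 4: keep the most accurate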

References
~~~~~~~~~~
1. Celis, L. Elisa, et al. "Classification with fairness constraints: A meta-algorithm with provable guarantees." Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019.

69 changes: 69 additions & 0 deletions
...ias/methods/inprocessing/bc_prejudice_remover_prejudice_remover_regularizer.rst

@@ -0,0 +1,69 @@
Prejudice Remover Regularizer
-----------------------------

.. note::
    **Learning tasks:** Binary classification.

Introduction
~~~~~~~~~~~~
The Prejudice Remover Regularizer enforces fairness in classification tasks by reducing indirect prejudice. The method is integrated into logistic regression models and aims to ensure that predictions are not unfairly influenced by sensitive features. The regularizer is computationally efficient and easy to implement, making it a practical option for fairness-aware machine learning.

Description
~~~~~~~~~~~

- **Problem definition**

  In classification tasks we deal with a target variable :math:`Y`, non-sensitive features :math:`X`, and a sensitive feature :math:`S`. The goal is to predict the class :math:`Y` from :math:`X` and :math:`S` while ensuring that :math:`S` does not unfairly influence the prediction. The training dataset is denoted :math:`D = \{(y, x, s)\}`, where :math:`y`, :math:`x`, and :math:`s` are instances of :math:`Y`, :math:`X`, and :math:`S`, respectively.

- **Main features**

  The method incorporates two regularizers into the logistic regression model:

  1. **L2 Regularizer**: The standard regularizer :math:`\|\Theta\|_2^2` used to avoid overfitting, where :math:`\Theta` is the set of model parameters.

  2. **Prejudice Remover Regularizer**: A regularizer :math:`R_{PR}` designed to reduce indirect prejudice by minimizing the prejudice index (PI), which quantifies the statistical dependence between the sensitive feature :math:`S` and the target variable :math:`Y`.

- **Step-by-step description of the approach** (a compact sketch follows the list)

  1. **Model Definition**: The conditional probability of a class given non-sensitive and sensitive features is modeled as :math:`M[Y|X,S;\Theta]`, where :math:`\Theta` is the set of model parameters. Logistic regression is used as the prediction model:

     .. math::
        M[y|x,s;\Theta] = y\sigma(x^\top w_s) + (1-y)(1-\sigma(x^\top w_s)),

     where :math:`\sigma(\cdot)` is the sigmoid function and :math:`w_s` is the weight vector used for samples whose sensitive feature takes the value :math:`s`.

  2. **Log-Likelihood Maximization**: The parameters are estimated by the maximum likelihood principle, maximizing the log-likelihood:

     .. math::
        L(D;\Theta) = \sum_{(y_i, x_i, s_i) \in D} \ln M[y_i|x_i, s_i; \Theta].

  3. **Objective Function**: The objective to minimize adds the two regularizers to the negative log-likelihood:

     .. math::
        -L(D;\Theta) + \eta R(D, \Theta) + \frac{\lambda}{2} \|\Theta\|_2^2,

     where :math:`\lambda` and :math:`\eta` are positive regularization parameters.

  4. **Prejudice Index Calculation**: The prejudice index is defined as

     .. math::
        PI = \sum_{Y,S} \hat{\Pr}[Y,S] \ln \frac{\hat{\Pr}[Y,S]}{\hat{\Pr}[S] \hat{\Pr}[Y]},

     and can be approximated using the training data:

     .. math::
        PI \approx \sum_{(x_i, s_i) \in D} \sum_{y \in \{0,1\}} M[y|x_i, s_i; \Theta] \ln \frac{\hat{\Pr}[y|s_i]}{\hat{\Pr}[y]}.

  5. **Normalization**: The normalized prejudice index (NPI) quantifies the degree of indirect prejudice:

     .. math::
        NPI = \frac{PI}{\sqrt{H(Y)H(S)}},

     where :math:`H(\cdot)` is the entropy function.

  6. **Trade-off Efficiency**: The efficiency of the trade-off between prediction accuracy and prejudice removal is measured by the ratio :math:`PI/MI`, where :math:`MI` is the mutual information between the predicted and true labels.
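
Putting steps 1-4 together, a compact NumPy sketch of the regularized objective for the logistic model is shown below. The per-group estimates of :math:`\hat{\Pr}[y|s]` and :math:`\hat{\Pr}[y]` are computed from the model outputs, as in the approximation above; all names are illustrative rather than the package implementation.

.. code-block:: python

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def prejudice_remover_objective(W, X, y, s, lam, eta):
        """-L(D;Theta) + eta*R_PR + (lam/2)*||Theta||^2, logistic model (sketch).

        W : (num_groups, num_features) weights, one row w_s per value of s
        s : integer-coded sensitive feature, indexing the rows of W
        """
        p = sigmoid(np.sum(X * W[s], axis=1))        # M[y=1 | x_i, s_i; Theta]
        nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        # Estimate Pr[y=1 | s] and Pr[y=1] from the model outputs.
        pr_y1_s = np.array([p[s == k].mean() for k in range(W.shape[0])])
        pr_y1 = p.mean()

        # R_PR ~= sum_i sum_y M[y | x_i, s_i] * ln( Pr[y | s_i] / Pr[y] )
        r_pr = np.sum(p * np.log(pr_y1_s[s] / pr_y1)
                      + (1 - p) * np.log((1 - pr_y1_s[s]) / (1 - pr_y1)))

        return nll + eta * r_pr + 0.5 * lam * np.sum(W ** 2)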

References
~~~~~~~~~~
1. Kamishima, Toshihiro, et al. "Fairness-aware classifier with prejudice remover regularizer." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Berlin, Heidelberg, 2012.

24 changes: 24 additions & 0 deletions
...rce/getting_started/bias/methods/inprocessing/c_fair_k_center_fair_k_center.rst

@@ -0,0 +1,24 @@
Fair k-Center Clustering Method
-------------------------------

.. note::
    **Learning tasks:** Clustering.

Introduction
~~~~~~~~~~~~
The Fair k-Center Clustering method addresses centroid-based clustering, such as k-center, while ensuring fair representation of different demographic groups. It is relevant when the data set comprises multiple demographic groups and a fixed number of representatives (centers) must be selected from each group to form a summary. The method extends the traditional k-center clustering problem with fairness constraints, ensuring that each group is fairly represented among the selected centers.

Description
~~~~~~~~~~~
The Fair k-Center Clustering method aims to minimize the maximum distance between any data point and its closest center while ensuring that a specified number of centers is chosen from each demographic group.

A recursive algorithm handles the fairness constraints by iteratively selecting centers and enforcing the required representation from each group. It can be broken down into the following steps (step 2 is sketched in code below):

1. **Initialization**: Start with an empty set of centers and the given parameters.
2. **Center Selection**: Use a greedy strategy to select centers that maximize the distance to the current set of centers.
3. **Fairness Adjustment**: Adjust the selected centers to ensure the required number of centers from each group.
4. **Recursion**: If the fairness constraints are not met, recursively apply the algorithm to a subset of the data until they are satisfied.
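
Step 2 is the classical greedy farthest-point heuristic; a minimal sketch is given below, with the fairness adjustment and recursion of steps 3-4 left to the reference, since they are the technical core of the method.

.. code-block:: python

    import numpy as np

    def greedy_k_center(D, k, first=0):
        """Greedy farthest-point selection for k-center (step 2, sketch).

        D     : (n, n) matrix of pairwise distances
        k     : number of centers to select
        first : index of the initial center
        """
        centers = [first]
        while len(centers) < k:
            nearest = D[:, centers].min(axis=1)    # distance to closest center
            centers.append(int(nearest.argmax()))  # farthest point joins the set
        return centers

    # A fair variant would then adjust the selection so that each demographic
    # group contributes its required number of centers.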

References
~~~~~~~~~~
1. Kleindessner, Matthäus, Pranjal Awasthi, and Jamie Morgenstern. "Fair k-center clustering for data summarization." International Conference on Machine Learning. PMLR, 2019.