[NeurIPS 2024] Official implementation of the paper “Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity”
An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (accepted at ACML 2023, conference track)
ConDa is an efficient federated unlearning framework that removes a client's data from a global model without retraining or additional computational overhead. It outperforms existing methods by at least 100× in non-IID federated learning settings.
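For orientation, the sketch below shows the general federated unlearning setting these repositories address: a target client's contribution is removed by re-aggregating the remaining clients' model updates, FedAvg-style. This is a minimal, assumed illustration only; it is not the Ferrari or ConDa algorithm, and the function names (`fedavg`, `unlearn_client`) are hypothetical.

```python
# Minimal, assumed sketch of client-level federated unlearning by
# re-aggregation: the target client's update is excluded and the remaining
# updates are averaged (FedAvg-style). Illustrative only -- NOT the actual
# Ferrari or ConDa method.
import numpy as np

def fedavg(updates, weights):
    """Weighted average of client model updates (each a list of NumPy arrays)."""
    total = sum(weights)
    return [
        sum(w * u[i] for w, u in zip(weights, updates)) / total
        for i in range(len(updates[0]))
    ]

def unlearn_client(updates, weights, forget_idx):
    """Re-aggregate the global update while excluding one client's contribution."""
    kept = [(u, w) for i, (u, w) in enumerate(zip(updates, weights)) if i != forget_idx]
    kept_updates, kept_weights = zip(*kept)
    return fedavg(list(kept_updates), list(kept_weights))

# Toy usage: three clients, a two-parameter model, client 1 is forgotten.
updates = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])], [np.array([5.0, 6.0])]]
weights = [10, 20, 30]
print(unlearn_client(updates, weights, forget_idx=1))  # average of clients 0 and 2
```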