Hands-on part of the Federated Learning and Privacy-Preserving ML tutorial given at VISUM 2022
A hands-on educational walkthrough of training a CelebA (Eyeglasses) image classifier with Differentially Private SGD using PyTorch and Opacus. The repo focuses on clarity and reproducibility through balanced subsets, deterministic preprocessing, and side-by-side baseline vs. DP training, while acknowledging the utility trade-offs that DP introduces.
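As a rough illustration of the kind of setup such a repo involves, the sketch below wraps a PyTorch training loop with Opacus for DP-SGD. It is not the repo's actual code: the model, hyperparameters, and dummy data are placeholders standing in for the Eyeglasses classifier and the balanced CelebA subsets.

```python
# Minimal DP-SGD sketch with Opacus (illustrative placeholders, not the repo's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy binary classifier standing in for the Eyeglasses model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Dummy tensors in place of the balanced CelebA subsets.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=16)

# Attach DP-SGD: per-sample gradient clipping plus Gaussian noise.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,   # placeholder values; tuned for a target (epsilon, delta) in practice
    max_grad_norm=1.0,
)

# One private training pass; a baseline run would use the unwrapped model/optimizer.
for x, y in train_loader:
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

print(f"epsilon spent so far: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

The same loop run without the `make_private` wrapping gives the non-private baseline for a side-by-side comparison.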
This project applies Differentially Private Stochastic Gradient Descent (DP-SGD) to an autoencoder for analyzing smart meter data, protecting sensitive energy-consumption patterns while preserving the utility of overall trends.
This repo contains my dissertation project, completed during my Master's at Queen Mary University of London.
Rust rewrite of JAX Privacy: DP-SGD primitives, PLD/RDP accounting, matrix factorization, auditing, and adapters.
Experiments at the intersection of ML security & privacy: adversarial attacks/defenses (FGSM/PGD, adversarial training), differential privacy (DP-SGD, ε–δ), federated learning privacy (secure aggregation), and auditing (membership/model inversion). PyTorch notebooks + eval scripts.
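For context on the attack side of such experiments, here is a minimal FGSM sketch; the model, data, and epsilon are assumptions for illustration rather than the repo's actual evaluation setup.

```python
# Illustrative one-step FGSM attack (placeholder model and epsilon, not the repo's setup).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixel values in the valid range
    return x_adv.detach()

# Usage (hypothetical): x_adv = fgsm_attack(model, images, labels)
```

PGD follows the same idea but iterates several small steps and projects back into the epsilon-ball after each one.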
Li X., Chen Y., Wang C., Shen C. "When Deep Learning Meets Differential Privacy: Privacy, Security, and More." IEEE Network, 35(6):148–155, Nov. 2021.