The Hong Kong University of Science and Technology
🌛Privacy attack and defense
- Official PyTorch implementation of "Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion" (CVPR 2020)
- Code for the paper "Label-Only Membership Inference Attacks"
- Algorithms to recover input data from the gradients they produce in a neural network
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022)
- "Instance-wise Batch Label Restoration via Gradients in Federated Learning" (ICLR 2023)
- Official implementation of "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection" (NeurIPS 2022)
- Breaching privacy in federated learning scenarios for vision and text
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures"
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019)
- A code implementation of a model inversion attack
- [CVPR 2023] "Re-thinking Model Inversion Attacks Against Deep Neural Networks"
- "LAMP: Extracting Text from Gradients with Language Model Priors" (NeurIPS 2022)
- "Query-Efficient Data-Free Learning from Black-Box Models"
- [IJCAI 2021] "Contrastive Model Inversion for Data-Free Knowledge Distillation"
- A PyTorch implementation of "Data-Free Learning of Student Networks" (ICCV 2019)
- Python library for invisible (blind) image watermarking
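Several entries above (the gradient-recovery algorithms, LAMP, and the batch label restoration work) rest on the fact that gradients leak their inputs. A minimal illustration of why: for a fully connected layer y = Wx + b, the gradients are ∂L/∂W = δxᵀ and ∂L/∂b = δ, so dividing any row of the weight gradient by the matching bias-gradient entry recovers x exactly. The sketch below is a toy NumPy demonstration of this observation only; the layer, loss, and variable names are my own and it is not code from any repo listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret input and a toy linear layer out = W x + b, with MSE loss to a target.
x_secret = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

# Forward pass and hand-derived gradients for this layer.
out = W @ x_secret + b
delta = 2 * (out - target)          # dL/d(out) for MSE loss
grad_W = np.outer(delta, x_secret)  # dL/dW = delta x^T
grad_b = delta                      # dL/db = delta

# Reconstruction: row i of grad_W is delta_i * x, so dividing by any
# nonzero grad_b[i] recovers the secret input exactly.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x_secret))  # True
```

Full networks need iterative optimization (matching dummy gradients to the observed ones), but this closed-form leak for linear layers is the core intuition.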
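Blind watermarking libraries like the last entry typically embed marks in a transform domain to survive compression; the sketch below instead shows the simplest spatial-domain form of an invisible watermark, hiding one message bit per pixel in the least significant bit. It is an illustrative toy with hypothetical function names, not the listed library's API or method.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit per pixel in the least significant bit (flattened order)."""
    flat = image.flatten()  # copy; the caller's array is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least significant bits."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
msg = rng.integers(0, 2, size=16, dtype=np.uint8)

marked = embed_lsb(img, msg)
assert np.array_equal(extract_lsb(marked, 16), msg)
# Each pixel changes by at most 1 out of 255, so the mark is imperceptible.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```

LSB marks are fragile (any re-encoding destroys them), which is why production blind watermarks work in frequency domains instead.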