Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors.
Pretrained BERT model for cybersecurity text, trained on cybersecurity knowledge
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)
Universal Adversarial Perturbations (UAPs) for PyTorch
Input-aware Dynamic Backdoor Attack (NeurIPS 2020)
COMBAT: Alternated Training for Effective Clean-Label Backdoor Attack (AAAI 2024)
Neural Network Model Reverse Engineering Toolkit
Inspired by dynamic taint tracking, PoisonSpot uses a fine-grained training provenance tracker that: (1) tags and traces the impact of every training sample on model updates, (2) probabilistically scores suspect samples based on their lineage of impact on the model weights, and (3) separates clean samples from poisoned ones before retraining the model.
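The three steps above can be sketched in miniature. This is a hypothetical illustration of the idea, not PoisonSpot's actual implementation or API: each sample's per-step gradient contribution is tagged and accumulated as its "lineage of impact" on the weights, then normalized into a score used to separate suspects from clean samples.

```python
import numpy as np

# Minimal sketch of provenance-based poison scoring (illustrative only).
rng = np.random.default_rng(0)

# Toy data: 2-D points; the last 5 samples are simulated poison
# (abnormal feature values with flipped labels).
X = rng.normal(size=(50, 2))
y = (X[:, 0] > 0).astype(float)
X[-5:] += 8.0          # assumed poisoning: outlier features
y[-5:] = 1 - y[-5:]    # assumed poisoning: label flips

w = np.zeros(2)
influence = np.zeros(len(X))   # per-sample lineage of impact on w

lr = 0.1
for _ in range(20):
    for i in range(len(X)):
        # Per-sample logistic-regression gradient step.
        pred = 1.0 / (1.0 + np.exp(-X[i] @ w))
        grad = (pred - y[i]) * X[i]
        w -= lr * grad
        # Step (1): tag and trace this sample's impact on the update.
        influence[i] += np.linalg.norm(grad)

# Step (2): normalize accumulated influence into a suspicion score.
score = (influence - influence.min()) / (np.ptp(influence) + 1e-12)

# Step (3): separate clean from suspect before retraining.
suspects = np.where(score > 0.8)[0]
clean_idx = np.setdiff1d(np.arange(len(X)), suspects)
```

Because the poisoned samples perturb the weights far more than clean ones, their accumulated influence, and hence their score, ends up markedly higher.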
Implementation of FiST, a black-box membership inference attack framework that selectively perturbs only those non-members that closely resemble members (based on cosine similarity and entropy). By amplifying subtle membership signals, FiST achieves high accuracy even against well-generalized and DP-trained models.
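The selection-then-perturbation idea can be sketched as follows. All names here are hypothetical and the "model" is a toy linear map, not FiST's actual code: candidates that both resemble a known member (high cosine similarity) and show low prediction entropy are singled out, perturbed, and scored by how little the model's output shifts.

```python
import numpy as np

# Illustrative sketch of selective perturbation for membership inference.
rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Toy black-box "model": softmax over a fixed linear map (assumption).
W = rng.normal(size=(4, 3))
model = lambda x: softmax(x @ W)

members = rng.normal(size=(10, 4))     # assumed known training points
candidates = rng.normal(size=(20, 4))  # points to test for membership

probs = model(candidates)
ent = entropy(probs)

# Select only candidates that closely resemble some member and are
# confidently predicted (low entropy) -- the borderline cases worth probing.
sim = np.array([max(cosine(c, m) for m in members) for c in candidates])
selected = (sim > 0.8) & (ent < np.median(ent))

# Perturb and measure the output shift; a small shift on a selected
# candidate is treated as an amplified membership signal.
noise = 0.05 * rng.normal(size=candidates.shape)
shift = np.linalg.norm(model(candidates + noise) - probs, axis=1)
member_score = np.where(selected, 1.0 / (shift + 1e-6), 0.0)
```

Restricting perturbation to member-like, low-entropy candidates is what keeps the signal usable against well-generalized models: perturbing everything would drown the subtle output-stability differences in noise.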