Reading list for adversarial perspective and robustness in deep reinforcement learning.
Updated Jul 25, 2025
An open-source knowledge base of defensive countermeasures to protect AI/ML systems. Features interactive views and maps defenses to known threats from frameworks like MITRE ATLAS, MAESTRO, and OWASP.
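A defense-to-threat mapping like this can be represented as a simple inverted index. The sketch below is illustrative only: the defense names and their pairings with MITRE ATLAS technique IDs are hypothetical examples, not entries from the actual knowledge base.

```python
from collections import defaultdict

# Hypothetical entries; the pairings below are illustrative, not taken
# from the real knowledge base or the MITRE ATLAS mappings.
DEFENSES = [
    {"name": "Input sanitization", "mitigates": ["AML.T0051"]},
    {"name": "Query rate limiting", "mitigates": ["AML.T0024"]},
    {"name": "Adversarial training", "mitigates": ["AML.T0043"]},
]

def threats_to_defenses(defenses):
    """Invert the mapping so each threat ID lists its countermeasures."""
    index = defaultdict(list)
    for d in defenses:
        for threat in d["mitigates"]:
            index[threat].append(d["name"])
    return dict(index)
```

Inverting the defense-centric records into a threat-centric index is what lets such a knowledge base answer "which defenses cover this ATLAS technique?" in one lookup.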
XSSGAI is the first-ever AI-powered XSS (Cross-Site Scripting) payload generator. It leverages machine learning and deep learning to create novel payloads based on patterns from real-world XSS attacks.
Do you want to learn AI Security but don't know where to start? Take a look at this map.
A hybrid Solidity + Python security toolkit that analyzes ERC-20 token contracts using static pattern extraction and ML-inspired scoring. Detects mint backdoors, blacklist controls, fee manipulation, trading locks, and rugpull mechanics. Outputs interpretable risk scores, labels, and structured features for deeper analysis.
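Static pattern extraction with weighted scoring can be sketched as follows. The rule names, regexes, and weights are hypothetical stand-ins, not the toolkit's actual detection logic.

```python
import re

# Hypothetical rule set: each pattern and weight is illustrative only.
RULES = [
    ("mint_backdoor", re.compile(r"function\s+mint\s*\(", re.I), 0.4),
    ("blacklist",     re.compile(r"blacklist|isBlacklisted", re.I), 0.3),
    ("fee_control",   re.compile(r"setFee|_taxFee", re.I), 0.2),
    ("trading_lock",  re.compile(r"tradingEnabled|enableTrading", re.I), 0.3),
]

def score_contract(source: str):
    """Return (risk_score, features) from static pattern matches
    over the raw Solidity source."""
    features = {name: bool(pat.search(source)) for name, pat, _ in RULES}
    score = sum(w for name, _, w in RULES if features[name])
    return min(score, 1.0), features

sample = "function mint(address to, uint256 amount) public onlyOwner {}"
risk, feats = score_contract(sample)
```

Emitting the boolean feature vector alongside the score is what keeps the result interpretable: each flagged pattern can be traced back to the matching source construct.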
Measure and Boost Backdoor Robustness
A curated collection of privacy-preserving machine learning techniques, tools, and practical evaluations. Focuses on differential privacy, federated learning, secure computation, and synthetic data generation for implementing privacy in ML workflows.
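The differential-privacy building block most such collections start from is the Laplace mechanism: add noise scaled to a query's L1 sensitivity divided by the privacy budget epsilon. A minimal sketch, using inverse-CDF sampling from the standard library:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace(sensitivity / epsilon) noise,
    giving epsilon-differential privacy for a query with the given
    L1 sensitivity (a counting query has sensitivity 1)."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: a private count of records matching some predicate.
private_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but larger noise; the released count is unbiased, so averaging many independent releases of the same query would erode the guarantee, which is why budgets are tracked across queries.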
Minimal reproducible PoC of three ML attacks (adversarial examples, model extraction, membership inference) on a credit-scoring model. Includes a pipeline, visualizations, and defenses.
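The simplest of the three, a confidence-threshold membership-inference attack, can be sketched on synthetic data. The confidences below are fabricated stand-ins for model outputs (members of the training set tend to receive higher confidence than non-members); a real PoC would query the credit-scoring model instead.

```python
import random

random.seed(0)

# Synthetic stand-in for model confidences: training-set members tend to
# get higher confidence than unseen points. Illustrative only.
members     = [min(0.999, random.gauss(0.95, 0.03)) for _ in range(500)]
non_members = [min(0.999, random.gauss(0.80, 0.10)) for _ in range(500)]

def membership_inference(confidence, threshold=0.9):
    """Predict 'member' when the model is more confident than the threshold."""
    return confidence >= threshold

tp = sum(membership_inference(c) for c in members)
tn = sum(not membership_inference(c) for c in non_members)
accuracy = (tp + tn) / (len(members) + len(non_members))
```

Accuracy well above 50% on held-out data is the signal that the model leaks membership; defenses such as regularization or differentially private training shrink the confidence gap the attack exploits.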
AAAI 2025 Tutorial on AI Safety
Code for "On the Privacy Effect of Data Enhancement via the Lens of Memorization"
Awesome-DL-Security-and-Privacy-Papers
A software framework for evaluating the robustness of malware detection methods against adversarial attacks.
A cross-provider AI model security scanner that evaluates HuggingFace, OpenRouter, and Ollama models for malicious content, unsafe code, license issues, and known vulnerabilities. Includes automated reports and risk scoring.
Control a 5-DOF Lynxmotion robotic arm using a vision language model for object detection and task planning
AI Infrastructure Security Engineer Learning Track - ML infrastructure security, model security, and compliance
AI Security Maturity Model and assessment toolkit: secure models, data, LLM/RAG, infrastructure, monitoring, and incident response across 11 domains and 5 levels, aligned with NIST AI RMF, SAIF, and the OWASP LLM Top 10.
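A domains-by-levels assessment like this reduces to scoring each domain on the level scale and summarizing. The domain names, scores, and the "weakest domain caps effective maturity" rule below are illustrative assumptions, not the toolkit's actual scheme.

```python
# Hypothetical assessment: six of the model's domains scored on a
# 1-5 level scale. Names and values are illustrative only.
ASSESSMENT = {
    "Model Security": 3,
    "Data Security": 2,
    "LLM/RAG Security": 1,
    "Infrastructure": 4,
    "Monitoring": 2,
    "Incident Response": 3,
}

MAX_LEVEL = 5

def maturity_summary(scores):
    """Mean level plus a weakest-link view: many maturity models treat
    the lowest-scoring domain as the effective overall level."""
    overall = sum(scores.values()) / len(scores)
    weakest = min(scores, key=scores.get)
    return {
        "overall": round(overall, 2),
        "weakest_domain": weakest,
        "effective_level": scores[weakest],
    }
```

Reporting the weakest domain alongside the mean keeps a strong score in one area (here, Infrastructure) from masking a gap elsewhere.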
Solutions for AI Infrastructure Security Engineer Track