
Awesome Reinforcement Learning for Cyber Security

A curated list of resources dedicated to reinforcement learning applied to cyber security. Note that the list includes only work that uses reinforcement learning; general machine learning methods applied to cyber security are not included.

For other related curated lists, see:

Table of Contents

Environments

Cyberwheel

Cyberwheel: A Reinforcement Learning Simulation Environment
  • Cyberwheel is a Reinforcement Learning (RL) simulation environment for training and evaluating autonomous cyber defense models on simulated networks. It was designed with modularity in mind, allowing users to build on top of it to fit their needs, and it supports robust configuration files for defining networks, services, host types, defensive agents, and more. Cyberwheel is developed by Oak Ridge National Laboratory (ORNL).
  • Paper: (2024) Towards a High Fidelity Training Environment for Autonomous Cyber Defense Agents
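Cyberwheel's configuration-driven design might look like the following sketch, where a declarative config defines host types and the network is instantiated from it. The schema below is invented for illustration and is not Cyberwheel's actual format:

```python
# Hypothetical sketch of config-driven network construction, in the spirit of
# Cyberwheel's modular design. The schema and field names are invented.
CONFIG = {
    "host_types": {
        "workstation": {"services": ["smb", "rdp"]},
        "server": {"services": ["ssh", "http"]},
    },
    "network": [
        {"name": "ws1", "type": "workstation"},
        {"name": "srv1", "type": "server"},
    ],
}

def build_network(config):
    """Instantiate hosts from the declarative config: each host inherits the
    service list of its host type."""
    return {
        h["name"]: dict(config["host_types"][h["type"]], name=h["name"])
        for h in config["network"]
    }

net = build_network(CONFIG)
```

Keeping topology and host definitions in data rather than code is what lets users swap in their own networks and defensive agents without touching the simulator.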

Pentesting Training Framework for Reinforcement Learning Agents (PenGym)

PenGym: Pentesting Training Framework for Reinforcement Learning Agents
  • PenGym is a framework for creating and managing realistic environments used to train Reinforcement Learning (RL) agents for penetration testing purposes. PenGym uses the same API as the Gymnasium fork of the OpenAI Gym library, making it possible to employ PenGym with any RL agent that follows those specifications. PenGym is being developed by the Japan Advanced Institute of Science and Technology (JAIST) in collaboration with KDDI Research, Inc.
  • Paper: (2024) PenGym: Pentesting Training Framework for Reinforcement Learning Agents
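Because PenGym follows the Gymnasium API, any agent written against `reset()`/`step()` can drive it. The toy environment below is a hypothetical stand-in (not PenGym's real scenario format) that sketches the interface shape such agents expect:

```python
import random

# Minimal sketch of the Gymnasium-style interface: reset() -> (obs, info) and
# step(a) -> (obs, reward, terminated, truncated, info). The hosts, action
# encoding, and reward values are invented for illustration.
class ToyPentestEnv:
    def __init__(self, n_hosts=3, seed=0):
        self.n_hosts = n_hosts
        self.rng = random.Random(seed)
        self.compromised = [False] * n_hosts

    def reset(self):
        self.compromised = [False] * self.n_hosts
        return tuple(self.compromised), {}

    def step(self, action):
        # action = index of the host to exploit; succeeds with fixed probability
        success = self.rng.random() < 0.8
        if success:
            self.compromised[action] = True
        reward = 1.0 if success else -0.1
        terminated = all(self.compromised)  # episode ends when every host is owned
        return tuple(self.compromised), reward, terminated, False, {}

env = ToyPentestEnv(seed=42)
obs, info = env.reset()
total, terminated = 0.0, False
while not terminated:
    action = obs.index(False)          # naive policy: hit the first unowned host
    obs, reward, terminated, _, info = env.step(action)
    total += reward
```

The five-tuple `step` return is what distinguishes the Gymnasium API from classic OpenAI Gym's four-tuple, which is why agents targeting one often need a thin wrapper for the other.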

The ARCD Primary-level AI Training Environment (PrimAITE)

The ARCD Primary-level AI Training Environment (PrimAITE)
  • The ARCD Primary-level AI Training Environment (PrimAITE) provides an effective simulation capability for the purposes of training and evaluating AI in a cyber-defensive role.

CSLE: The Cyber Security Learning Environment

CSLE: The Cyber Security Learning Environment
  • CSLE is a platform for evaluating and developing reinforcement learning agents for control problems in cyber security. It can be considered a cyber range specifically designed for reinforcement learning agents. Everything from network emulation to simulation and implementation of network commands has been co-designed to provide an environment where reinforcement learning agents can be trained and evaluated on practical problems in cyber security.
  • Paper: (2022) Intrusion Prevention Through Optimal Stopping
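The optimal-stopping formulation in the paper can be sketched as a belief-threshold policy: the defender updates its belief that an intrusion is under way from alert observations and "stops" (takes a defensive action) once the belief crosses a threshold. The probabilities and threshold below are illustrative, not CSLE's learned values:

```python
# Hypothetical sketch of intrusion prevention as optimal stopping.
# All probabilities and the threshold are invented for illustration.

def update_belief(belief, alert, p_alert_intrusion=0.7, p_alert_normal=0.2):
    """Bayes update of P(intrusion) given whether an alert fired this step."""
    like_i = p_alert_intrusion if alert else 1 - p_alert_intrusion
    like_n = p_alert_normal if alert else 1 - p_alert_normal
    num = like_i * belief
    return num / (num + like_n * (1 - belief))

def stopping_time(alerts, threshold=0.9, prior=0.1):
    """Return the first step at which the belief crosses the threshold,
    i.e. when the defender 'stops' and triggers the defensive action."""
    belief = prior
    for t, alert in enumerate(alerts):
        belief = update_belief(belief, alert)
        if belief >= threshold:
            return t
    return None  # never confident enough to act
```

A learned optimal-stopping policy generalizes this fixed threshold: the RL agent effectively learns where to place the threshold (or a more complex stopping rule) to trade off early false alarms against late detections.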

AutoPentest-DRL

AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning
  • AutoPentest-DRL is an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. AutoPentest-DRL can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit. This framework is intended for educational purposes, so that users can study penetration testing attack mechanisms. AutoPentest-DRL is being developed by the Cyber Range Organization and Design (CROND) NEC-endowed chair at the Japan Advanced Institute of Science and Technology (JAIST) in Ishikawa, Japan.
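As a rough illustration of learning an attack path with RL (AutoPentest-DRL itself uses deep RL over logical attack paths derived from real network data), tabular Q-learning on a toy attack graph already recovers the shortest exploit chain. The topology, rewards, and hyperparameters below are invented:

```python
import random

# Toy attack graph: states are hosts, edges are exploitable hops.
# The agent must learn the path from "internet" to the "db" goal host.
GRAPH = {
    "internet": ["dmz"],
    "dmz": ["web", "mail"],
    "web": ["app", "mail"],
    "mail": ["dmz"],
    "app": ["db"],
    "db": [],
}
GOAL = "db"

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over the attack graph."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s, nbrs in GRAPH.items() for a in nbrs}
    for _ in range(episodes):
        s = "internet"
        while s != GOAL and GRAPH[s]:
            nbrs = GRAPH[s]
            if rng.random() < eps:
                a = rng.choice(nbrs)                       # explore
            else:
                a = max(nbrs, key=lambda n: Q[(s, n)])     # exploit
            r = 10.0 if a == GOAL else -1.0  # per-hop cost, bonus at the goal
            nxt = max((Q[(a, n)] for n in GRAPH[a]), default=0.0)
            Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])
            s = a
    return Q

def greedy_path(Q):
    """Follow the learned Q-values greedily from the entry point."""
    s, path = "internet", ["internet"]
    while s != GOAL and GRAPH[s] and len(path) <= 10:
        s = max(GRAPH[s], key=lambda n: Q[(s, n)])
        path.append(s)
    return path
```

The per-hop penalty is what steers the agent toward the shortest chain; a deep RL variant replaces the Q-table with a neural network so the policy can scale to networks too large to enumerate.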

NASimEmu

NASimEmu
  • NASimEmu is a framework for training deep RL agents in offensive penetration-testing scenarios. It includes both a simulator and an emulator so that a simulation-trained agent can be seamlessly deployed in emulation. Additionally, it includes a random generator that can create scenario instances varying in network configuration and size while fixing certain features, such as exploits and privilege escalations. Furthermore, agents can be trained and tested in multiple scenarios simultaneously.

    Paper: (2023) NASimEmu: Network Attack Simulator & Emulator for Training Agents Generalizing to Novel Scenarios
    Framework: NASimEmu
    Implemented agents: NASimEmu-agents
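NASimEmu's random scenario generation idea, varying network size and layout while keeping certain features such as the exploit set fixed so agents must generalize, can be sketched as follows (the field names are invented, not NASimEmu's format):

```python
import random

# Hypothetical sketch of random scenario generation: topology and host counts
# vary per seed, while the exploit set is shared across all instances.
FIXED_EXPLOITS = ["e_ssh", "e_http", "e_ftp"]

def generate_scenario(seed):
    rng = random.Random(seed)
    n_subnets = rng.randint(2, 4)  # network size varies between instances
    scenario = {"exploits": FIXED_EXPLOITS, "subnets": []}
    for i in range(n_subnets):
        hosts = [
            {"id": f"{i}.{h}", "service": rng.choice(["ssh", "http", "ftp"])}
            for h in range(rng.randint(1, 3))
        ]
        scenario["subnets"].append(hosts)
    return scenario

a, b = generate_scenario(1), generate_scenario(2)
```

Seeding makes each instance reproducible, so an agent can be trained on one set of seeds and evaluated on held-out seeds to measure generalization to novel scenarios.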

gym-idsgame

gym-idsgame

CyberBattleSim (Microsoft)

CyberBattleSim
  • CyberBattleSim is an experimentation research platform for investigating the interaction of automated agents operating in a simulated abstract enterprise network environment. The simulation provides a high-level abstraction of computer networks and cyber security concepts. Its Python-based OpenAI Gym interface allows automated agents to be trained using reinforcement learning algorithms.
  • Blogpost: (2021) Gamifying machine learning for stronger security and AI models

gym-malware

gym-malware

malware-rl

malware-rl

gym-flipit

gym-flipit

gym-threat-defense

gym-threat-defense

gym-nasim

gym-nasim

gym-optimal-intrusion-response

gym-optimal-intrusion-response

sql_env

sql_env

cage-challenge

cage-challenge-1
  • The first Cyber Autonomous Gym for Experimentation (CAGE) challenge environment, released at the 1st International Workshop on Adaptive Cyber Defense held as part of the 2021 International Joint Conference on Artificial Intelligence (IJCAI).
cage-challenge-2
cage-challenge-3
  • The third Cyber Autonomous Gym for Experimentation (CAGE) challenge environment.
cage-challenge-4
  • The fourth Cyber Autonomous Gym for Experimentation (CAGE) challenge environment.

ATMoS

ATMoS

MAB-Malware

MAB-malware

ASAP

Autonomous Security Analysis and Penetration Testing framework (ASAP)

Yawning Titan

Yawning Titan

CybORG

CybORG
  • CybORG is a gym for autonomous cyber operations research, driven by the need to efficiently support reinforcement learning to train adversarial decision-making models through simulation and emulation. This is a variation of the environments used by the CAGE challenges above.

    Paper: (2021) CybORG: A Gym for the Development of Autonomous Cyber Agents

FARLAND

FARLAND (github repository missing)
  • FARLAND is a framework for advanced reinforcement learning for autonomous network defense. It uniquely enables the design of network environments that gradually increase in complexity, providing a path for autonomous agents to raise their performance from apprentice to superhuman level at the task of reconfiguring networks to mitigate cyberattacks.

    Paper: (2021) Network Environment Design for Autonomous Cyberdefense

SecureAI

SecureAI

CYST

CYST

CLAP

CLAP: Curiosity-Driven Reinforcement Learning Automatic Penetration Testing Agent

CyGIL

CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network Systems
  • CyGIL is an experimental testbed providing an emulated RL training environment for network cyber operations. CyGIL uses a stateless environment architecture and incorporates the MITRE ATT&CK framework to establish a high-fidelity training environment, while presenting a sufficiently abstracted interface to enable RL training. Its comprehensive action space and flexible game design allow agent training to focus on particular advanced persistent threat (APT) profiles and to incorporate a broad range of potential threats and vulnerabilities. By striking a balance between fidelity and simplicity, it aims to leverage state-of-the-art RL algorithms for real-world cyber defence applications.

    Paper: (2021) CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network Systems
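An action space grounded in MITRE ATT&CK, as CyGIL uses, can be sketched as a flat index over (tactic, technique) pairs. The technique IDs below are real ATT&CK IDs, but the selection and encoding are illustrative, not CyGIL's actual action space:

```python
# Hypothetical sketch of an ATT&CK-grounded action space: the RL agent emits a
# flat integer action, which decodes to a (tactic, technique) pair. Technique
# IDs are real ATT&CK identifiers; the list and structure are invented.
ATTACK_ACTIONS = [
    ("discovery", "T1046"),          # Network Service Discovery
    ("lateral-movement", "T1021"),   # Remote Services
    ("credential-access", "T1110"),  # Brute Force
    ("exfiltration", "T1041"),       # Exfiltration Over C2 Channel
]

def action_to_technique(action_id):
    """Decode a flat action index into a (tactic, technique) record."""
    tactic, technique = ATTACK_ACTIONS[action_id]
    return {"tactic": tactic, "technique": technique}
```

Anchoring actions to a shared taxonomy like ATT&CK is what lets trained policies be interpreted by analysts and compared against known APT technique profiles.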

BRAWL

BRAWL
  • BRAWL seeks a middle ground by automatically creating an enterprise network inside a cloud environment. OpenStack is the only environment currently supported, but BRAWL is designed to easily support other cloud environments in the future.

DETERLAB

DeterLab: Cyber-Defense Technology Experimental Research Laboratory
  • Since 2004, the DETER Cybersecurity Testbed Project has worked to create the necessary infrastructure (facilities, tools, and processes) to provide a national resource for experimentation in cyber security. The next generation of DETER envisions several conceptual advances in testbed design and experimental research methodology, targeting improved experimental validity, enhanced usability, and increased size, complexity, and diversity of experiments.

    Paper: (2010) The DETER project: Advancing the science of cyber security experimentation and test

EmuLab

Mininet

Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command.

VINE

VINE: A Cyber Emulation Environment for MTD Experimentation

CRATE

CRATE Exercise Control – A cyber defense exercise management and support tool

GALAXY

Galaxy: A Network Emulation Framework for Cybersecurity

Papers

Surveys

Demonstration papers

Position papers

Regular Papers

PhD Theses

Master Theses

Bachelor Theses

Posters

Books

Blogposts

Talks

Miscellaneous

Contribute

Contributions are very welcome. Please use GitHub issues and pull requests.

List of Contributors

Thanks for all your contributions and for keeping this project up to date.

License

LICENSE

Creative Commons

(C) 2021-2024
