Welcome to the repository for my talk at LeHack 2024! The talk explores the use of Large Language Models (LLMs) for pentesting and cybersecurity training, along with their limitations.
Specifically, we dive into:
- Creating conversational agents to replace quizzes or conduct phishing exercises (see the sketch after this list).
- Developing AI assistants familiar with tool documentation and audit methodologies.
- Using LLMs for code analysis and payload generation.
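To give a flavor of the first item, here is a minimal sketch of a conversational security-awareness quiz agent. It is illustrative rather than taken from the talk's snippets: it assumes the OpenAI Python SDK, an `OPENAI_API_KEY` set in the environment, and a placeholder model name.

```python
# Minimal sketch (illustrative, not the talk's exact code): a conversational
# security-awareness quiz agent. Assumes the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a security-awareness trainer. Ask the user one phishing-related "
    "question at a time, wait for their answer, then give brief feedback."
)

# Running conversation history, so the agent remembers previous exchanges.
history = [{"role": "system", "content": SYSTEM_PROMPT}]

def ask(user_message: str) -> str:
    """Send the user's message with the running history and return the reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(ask("Start the quiz."))
```

The same loop structure works for a phishing-exercise persona: only the system prompt changes.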
This repository includes:
- Code snippets used during the talk.
- Slides for the talk in the Slides directory.
The slides provide a detailed walkthrough of the concepts and examples discussed during the talk.
Here you'll find various code snippets illustrating the use of LLMs in cybersecurity scenarios. These snippets cover:
- Chatbot creation for phishing exercises
- AI assistants for tool documentation and audit methodologies
- Code analysis tools (a minimal sketch follows this list)
- Payload generation tools
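As a taste of the code-analysis category, below is a hedged sketch of asking an LLM to review a source file for vulnerabilities. Again, it assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name and prompt are placeholders and may differ from the snippets actually shown during the talk.

```python
# Minimal sketch (illustrative, not the repo's exact code): ask an LLM to
# review a source file for common vulnerabilities. Assumes the OpenAI Python
# SDK (v1.x) and an OPENAI_API_KEY in the environment.
import sys
from openai import OpenAI

client = OpenAI()

def review(path: str) -> str:
    """Read a source file and ask the model for a short vulnerability review."""
    with open(path, "r", encoding="utf-8") as f:
        source = f.read()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a code auditor. List potential "
                           "vulnerabilities with line references and severity.",
            },
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review(sys.argv[1]))
```

Run it as `python review.py target_file.py`; treat the output as a starting point for manual review, not a verdict.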
Feel free to explore and experiment with the code snippets. If you run into issues, please open an issue on this repository.
Thank you for attending my talk at LeHack 2024! Happy hacking! 🚀