Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini
-
Updated
Feb 22, 2026 - HTML
Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini
Superagent protects your AI applications against prompt injections, data leaks, and harmful outputs. Embed safety directly into your app and prove compliance to your customers.
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
The Security Toolkit for LLM Interactions
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
A playground of highly experimental prompts, Jinja2 templates & scripts for machine intelligence models from OpenAI, Anthropic, DeepSeek, Meta, Mistral, Google, xAI & others. Alex Bilzerian (2022-2025).
LLM Prompt Injection Detector
a security scanner for custom LLM applications
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
💼 another CV template for your job application, yet powered by Typst and more
setup openclaw: https://remoteopenclaw.com/
Every practical and proposed defense against prompt injection.
Secure, kernel-enforced sandbox CLI and SDKs for AI agents. Capability-based isolation with secure key management, atomic rollback, cryptographic immutable audit chain of provenance. Run your agents in a zero-trust environment.
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
Self-hardening firewall for large language models
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."