AI Red Teaming playground labs, including infrastructure, for running AI Red Teaming trainings.
LLM Prompt Injection Detector
Spell Whisperer is a prompt-injection challenge platform built on the Grok API.
Build production-ready apps for GPT using Node.js & TypeScript.
A TypeScript library providing a set of guards for LLM (Large Language Model) applications
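A guard in this sense is a check run over user input before it reaches the model. A minimal sketch of the idea in TypeScript (hypothetical types and patterns, not the library's actual API):

```typescript
// A guard inspects a prompt and either allows it or flags it with a reason.
type GuardResult = { allowed: boolean; reason?: string };
type Guard = (input: string) => GuardResult;

// Example heuristic guard (assumed patterns): block common injection phrasing.
const injectionGuard: Guard = (input) => {
  const patterns = [
    /ignore (all )?previous instructions/i,
    /reveal.*system prompt/i,
  ];
  for (const p of patterns) {
    if (p.test(input)) return { allowed: false, reason: `matched ${p}` };
  }
  return { allowed: true };
};

// Run a chain of guards; the first failure short-circuits.
function runGuards(input: string, guards: Guard[]): GuardResult {
  for (const g of guards) {
    const result = g(input);
    if (!result.allowed) return result;
  }
  return { allowed: true };
}
```

Real guard libraries typically combine many such checks (regex heuristics, classifiers, allow-lists); the chain shape above is just one common composition.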
AI security and prompt injection payload toolkit
🚀 Unofficial Node.js SDK for Prompt Security's Protection API.
Experiment with multifactor analysis of different prompting strategies.
Protect your AI from Prompt Injection
A hands-on AI security workshop that hacks and protects AI agents using MCP servers, featuring real vulnerability demos and prompt injection defense.
Protect AI automations from prompt injection attacks. One API call stops credential leaks, jailbreaks, and system prompt extraction.
History Poison Lab: Vulnerable LLM implementation demonstrating Chat History Poisoning attacks. Learn how attackers manipulate chat context and explore mitigation strategies for secure LLM applications.
Adversarial Vision is a research-backed interactive playground exploring how pixels can become prompt injections. It demonstrates how hidden text, subtle contrast shifts, and adversarial visual cues can manipulate multimodal AI models like ChatGPT, Perplexity, or Gemini when they “see” images.
ChatGPT Adversarial Attack for The Pitt Challenge 2023
🛠️ Craft and organize high-quality prompts easily with PromptCrafter, a TypeScript-first web app for streamlined AI workflows.
Context hygiene & risk adjudication for LLM pipelines: secrets, PII, prompt-injection, policy redaction & tokenization.
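One building block of context hygiene is a redaction pass that swaps likely secrets and PII for placeholder tokens before text enters the pipeline. A minimal sketch with assumed example patterns (not the project's actual rules):

```typescript
// Ordered redaction rules: pattern -> placeholder token.
const RULES: Array<[RegExp, string]> = [
  [/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, "[EMAIL]"],
  [/\bsk-[A-Za-z0-9]{16,}\b/g, "[API_KEY]"], // assumed key shape for illustration
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],
];

// Apply every rule in order; global flags replace all occurrences.
function redact(text: string): string {
  return RULES.reduce((acc, [pattern, token]) => acc.replace(pattern, token), text);
}
```

Production systems usually pair regexes like these with entity recognizers and reversible tokenization (placeholder-to-value maps held outside the model's context), which is what the "tokenization" in the blurb refers to.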