ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
-
Updated
Nov 12, 2025 - HTML
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Vibe Coding free starter kit: https://vibe-codingschool.com/
LLM | Security | Operations in one github repo with good links and pictures.
MER is a software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering development of transparency and safety in AI. It also supports manipulation victims by detecting manipulative patterns in human communication.
Application to make it easy to create my ai prompts. May include some LLM endpoints directly integrated
FinTech Prompt Engineering project (ZeTheta Internship). Uses FPF and Multi-Layer Strategy to generate a secure Robo-Advisory microservices architecture and core MPT Python code with full validation.
A proof-of-concept to see if AI browsers such as Atlas or Comet can be easily exploited.
Technical advisories on security vulnerabilities
Personal Portfolio Website
Official GitHub Repository for the paper "Decoding Latent Attack Surfaces in LLMs: Prompt Injection via HTML in Web Summarization"
PromptGuard · LLM Prompt Risk Analyzer · Project for "Neuere Methoden in der Computerlinguistik "
🚀 Detroit Developer Relations - Enrichment, Inspiration & Security Awareness
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."