A secure local sandbox to run LLM-generated code using Apple containers
-
Updated
Dec 10, 2025 - Python
A secure local sandbox to run LLM-generated code using Apple containers
Security scanner for local LLMs scanning LLM vulnerabilities including jailbreaks, prompt injection, training data leakage, and adversarial abuse
🔍 Enhance local LLM security by testing for vulnerabilities like prompt injection, model inversion, and data leakage with this robust toolkit.
Add a description, image, and links to the llmstudio topic page so that developers can more easily learn about it.
To associate your repository with the llmstudio topic, visit your repo's landing page and select "manage topics."