# unsafe
A Python script-supervisor iteratively executes a script-agent and modifies it via an LLM. The LLM is given a description of how the supervisor works and the final goal; at each iteration it receives the current agent's return value, stdout, and stderr, and is asked to reply with the next agent verbatim. Optional jailbreak attempt. TUI. Unsafe.
Topics: python, agent, rest, feedback, jailbreak, supervisor, goal, tui, ncurses, loop, conversation, cost, openai, gpt, messages, lineage, unsafe, iterative, llm, llama-cpp
Updated Feb 25, 2025 · Python
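The loop the description sketches is simple enough to show in a few lines. Below is a minimal, hypothetical sketch (not the repository's actual code), assuming the `openai` Python package and an `OPENAI_API_KEY` in the environment; the prompt text, model name, iteration count, and seed agent are illustrative assumptions.

```python
# Hypothetical sketch of the supervisor loop described above.
# Assumes the `openai` package (v1 API) and OPENAI_API_KEY in the environment.
import subprocess
import sys

from openai import OpenAI

client = OpenAI()

# System prompt: describes how the supervisor works and states the final goal.
# The goal here is a placeholder, not the project's actual prompt.
SUPERVISOR_PROMPT = (
    "You are driving an iterative script-agent. Each turn you receive the "
    "current agent script plus its return value, stdout, and stderr. Reply "
    "with the full next version of the script, verbatim, nothing else. "
    "Final goal: print the first 10 prime numbers."
)

agent_source = 'print("hello")\n'  # seed agent

for iteration in range(5):
    # Execute the current agent and capture retval, stdout, stderr.
    result = subprocess.run(
        [sys.executable, "-c", agent_source],
        capture_output=True, text=True, timeout=30,
    )
    feedback = (
        f"--- agent source ---\n{agent_source}\n"
        f"--- retval ---\n{result.returncode}\n"
        f"--- stdout ---\n{result.stdout}\n"
        f"--- stderr ---\n{result.stderr}"
    )
    # Ask the LLM for the next agent, verbatim, and adopt it wholesale.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SUPERVISOR_PROMPT},
            {"role": "user", "content": feedback},
        ],
    )
    agent_source = reply.choices[0].message.content
```

Feeding the model's reply straight back into `subprocess.run` is what makes the pattern unsafe: each generated script runs with the supervisor's full privileges, so a real deployment would sandbox the subprocess.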