A personal, curated collection of prompts, modes, and configurations for AI coding assistants.
Available Modes • Available Commands • Tools & Platforms • Architecture • Getting Started
Welcome! This repository is designed for anyone interested in Prompt Engineering, customizing AI Agents, or enhancing their workflow with Large Language Models (LLMs).
This collection contains ready-to-use templates for defining agent personas, delegating tasks, and structuring interactions with various AI tools.
This repository provides the following agent modes, forming the "Pantheon" team:
| Mode | Description |
|---|---|
| 🧠 Maestro | An expert project orchestrator who decomposes complex tasks, delegates them to specialist agents, and manages the overall project plan. |
| 🏛️ Principal Engineer | A top-tier technical leader for deep system analysis, architectural design, and long-term strategic planning. |
| 💻 Lead Implementer | An expert developer who translates architectural plans into clean, efficient, and maintainable application code. |
| 🧪 Test Engineer | A dedicated quality expert who writes clean, fast, and reliable unit and integration tests to ensure code correctness and robustness. |
| 🎭 Advocate | A user experience specialist who designs intuitive, enjoyable, and habit-forming user flows by applying principles of usability and psychology. |
| 🌿 Gardener | A meticulous engineer focused on code quality, fighting entropy by refactoring, updating dependencies, and eliminating technical debt. |
| 👾 Mr. Robot | A cybersecurity expert who performs security audits and finds unconventional, low-cost solutions by reverse-engineering and exploiting external systems. |
| 👁️ Observer | A performance and systems expert who makes applications transparent by instrumenting code with logs, metrics, and traces, and setting up deployment infrastructure. |
| 👺 Annihilator | A cynical but logical agent whose sole purpose is to challenge complexity and ruthlessly identify features, code, or concepts that should be removed to regain focus. |
This repository also provides slash-commands — reusable prompt templates that can be triggered during a conversation:
| Command | Description |
|---|---|
| 📦 GitHub Release | Generates professional release notes by analyzing git commits. Extracts changes between tags and creates a structured changelog with breaking changes, new features, bug fixes, and upgrade instructions. |
| 💬 Question ChatGPT | Prepares a well-structured question with full context for an external LLM. Useful when you need a fresh perspective from a different AI model that doesn't have access to your current conversation. |
| 🔬 Subtask Analysis | Creates a research-focused sub-agent within the current session. The sub-agent reads files and provides detailed analysis without making any code changes. |
| 💻 Subtask Code | Creates a code-writing sub-agent for implementing specific changes. Each sub-agent handles isolated tasks with full context, ensuring clean and focused modifications. |
This repository also provides standalone prompts — reusable prompt templates for specific tasks:
| Prompt | Description |
|---|---|
| 🗜️ Context Compression | Compresses long conversation history into a detailed summary that preserves all important context. Used when conversation exceeds token limits. |
The configurations in this repository are designed to be highly adaptable and can be used with a wide range of AI agent tools and platforms:
| Category | Tools |
|---|---|
| IDE Extensions | Roo Code, Cline, Kilo Code, Cursor, Windsurf, Continue |
| Chat Platforms | LibreChat, Open WebUI |
| APIs & CLIs | Anthropic Claude Code, Google Gemini, OpenAI Codex/ChatGPT, OpenCode |
Note: While the principles are universal, the specific file paths and structures are optimized for Roo Code.
This repository uses a Roles Builder system to manage agent modes. This approach treats prompt engineering as a development process, where configurations are generated from source files.
| Path | Description |
|---|---|
| `/commands` | Contains markdown templates for slash-commands (see Available Commands above). |
| `/prompts` | Contains standalone prompt templates (see Available Prompts above). |
| `/roles_builder` | The source directory for agent modes. |
| ↳ `build.py` | A Python script that assembles all components into the final configuration file. |
| ↳ `manifest.yaml` | Defines which agent modes to include in the build. |
| ↳ `/sources` | Contains the raw materials (metadata and instructions) for each mode. |
| `custom_modes.yaml` | The generated output file for agent modes. This file should not be edited manually, as it is overwritten by the build script. |
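For orientation, here is a minimal sketch of what the generated `custom_modes.yaml` can look like, assuming Roo Code's custom-modes format. The slug, role text, and tool groups below are placeholders; the real file is produced by `build.py` and may contain additional fields.

```yaml
# Illustrative sketch only; the actual file is generated by build.py.
# Field names follow Roo Code's custom-modes format; values are placeholders.
customModes:
  - slug: maestro                # unique identifier for the mode
    name: 🧠 Maestro             # display name shown in the mode picker
    roleDefinition: >-
      You are an expert project orchestrator who decomposes complex tasks
      and delegates them to specialist agents.
    groups:                      # tool permissions granted to the mode
      - read
      - edit
      - command
```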
The setup process involves two distinct types of assets: Custom Modes and Custom Commands.
Custom modes are managed via the Roles Builder workflow.
Step 1: Edit the Sources
All modifications to modes are done in the /roles_builder/sources directory.
- To change a mode's instructions, edit its `prompt.md` file.
- To change a mode's metadata (e.g., name, description), edit its `config.yaml` file.
- To add a new mode, create a new sub-directory in `/sources` with the required files and add its name to `manifest.yaml` (a sketch follows this list).
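To make Step 1 concrete, a mode's `config.yaml` under `/sources` might look something like the sketch below. The field names are assumptions (the real schema is whatever `build.py` and the existing files under `/sources` use); only the mode name and description are taken from the modes table above.

```yaml
# Hypothetical sketch of /roles_builder/sources/gardener/config.yaml.
# Field names are assumptions; check an existing mode under /sources for the real schema.
name: 🌿 Gardener
description: >-
  A meticulous engineer focused on code quality, fighting entropy by refactoring,
  updating dependencies, and eliminating technical debt.

# The mode's instructions live alongside this file in prompt.md, and the mode is
# included in the build by listing its directory name in manifest.yaml, e.g.:
# modes:
#   - gardener
```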
Step 2: Build the Configuration File
Run the build script from the repository's root directory to generate the final `custom_modes.yaml`:

`python roles_builder/build.py`

Step 3: Link the Output File
The generated `custom_modes.yaml` is the file the AI agent consumes. For Roo Code on Windows, place it at:
`%APPDATA%\Code\User\globalStorage\rooveterinaryinc.roo-cline\settings\custom_modes.yaml`
Custom commands are managed by copying the templates directly.
To use the command templates on Windows, copy the files from the `/commands` directory into:
`%USERPROFILE%\.roo\commands\`
This project is licensed under the GNU Affero General Public License v3.0 — see the LICENSE file for details.
Note: Prior commits without a license file were under exclusive copyright (All Rights Reserved).
Jwadow — @Jwadow