Description
Prerequisites
- I will write this issue in English (see our Language Policy)
- I have searched existing issues and discussions
- I have read the documentation or asked an AI coding agent with this project's GitHub URL loaded and couldn't find the answer
- This is a question (not a bug report or feature request)
Question
I'm looking for best practices on configuring agents and categories efficiently in oh-my-opencode.
Currently, I have multiple agents mapped to different models across vendors (Anthropic, OpenAI, Google), and some overlap in purpose (e.g., multiple reasoning tiers, multiple writing-related roles, and a separate ultrabrain category for heavy coding).
My main concerns are:

- How should agents vs. categories be structured conceptually?
  - Should agents represent roles/behaviors and categories represent difficulty tiers?
  - Or should categories reflect task type (e.g., writing, visual-engineering) and agents reflect quality levels?
- Is it better to:
  - keep multiple reasoning tiers (medium/high/xhigh) explicitly defined as separate agents, or
  - standardize into something like fast / standard / premium and rely more on routing logic? (See the sketch at the end of this section.)
- When mixing vendors:
  - Are there recommended patterns for assigning one vendor per task domain (e.g., coding, writing, multimodal)?
  - Or is it better to keep similar tasks within the same vendor for consistency?
- For multimodal analysis (screenshots, diagrams, UI reviews), is it recommended to always use a mid/high-tier model, or is a fast-tier multimodal model typically sufficient?
- Is there a known pattern that minimizes accidental overuse of premium models while keeping output quality high?
I'm trying to balance:

- Cost efficiency
- Latency
- Output quality
- Maintainability of the configuration
Context
```json
{
  "agents": {
    "librarian": { "model": "anthropic/claude-sonnet-4-5" },
    "explore": { "model": "anthropic/claude-haiku-4-5" },
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
    "multimodal-looker": { "model": "google/gemini-3-flash" },
    "sisyphus": { "model": "openai/gpt-5.3-codex", "variant": "xhigh" },
    "oracle": { "model": "openai/gpt-5.2", "variant": "high" },
    "prometheus": { "model": "openai/gpt-5.2", "variant": "xhigh" },
    "metis": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
    "momus": { "model": "openai/gpt-5.2", "variant": "medium" },
    "atlas": { "model": "anthropic/claude-sonnet-4-5" }
  },
  "categories": {
    "visual-engineering": { "model": "google/gemini-3-pro" },
    "ultrabrain": { "model": "openai/gpt-5.3-codex", "variant": "xhigh" },
    "artistry": { "model": "google/gemini-3-pro", "variant": "max" },
    "quick": { "model": "anthropic/claude-haiku-4-5" },
    "unspecified-low": { "model": "anthropic/claude-sonnet-4-5" },
    "unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
    "writing": { "model": "google/gemini-3-flash" }
  }
}
```

There is noticeable overlap:

- Multiple reasoning tiers (momus, oracle, prometheus)
- Separate premium coding (sisyphus) and ultrabrain
- Writing defined both as an agent and a category
- Default tiers (unspecified-low/high) that partially duplicate agent intent
My use case:

- Heavy coding (backend + frontend)
- Architecture discussions
- Documentation writing
- Occasional multimodal UI review

I want to avoid configuration bloat while keeping flexibility. One possible task-type-based mapping is sketched below.
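As an alternative to explicit tiers, categories could reflect task type directly (the other option from my question above). The category names and model choices here are just placeholders for discussion:

```json
{
  "categories": {
    "coding": { "model": "openai/gpt-5.3-codex", "variant": "xhigh" },
    "architecture": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
    "writing": { "model": "google/gemini-3-flash" },
    "multimodal-review": { "model": "google/gemini-3-pro" }
  }
}
```

That covers my four use cases with one category each, but I'm unsure whether it controls premium-model overuse better than explicit fast/standard/premium tiers.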
Doctor Output (Optional)
```
oMoMoMoMo... Doctor
Installation
────────────────────────────────────────
✓ OpenCode Installation → 1.1.56
✓ Plugin Registration → Registered (pinned: latest)
Configuration
────────────────────────────────────────
✓ Configuration Validity → Valid JSON config
✓ Model Resolution → 10 agents, 8 categories (16 overrides), 2521 available
Authentication
────────────────────────────────────────
✓ Anthropic (Claude) Auth → Auth plugin available
✓ OpenAI (ChatGPT) Auth → Auth plugin available
✓ Google (Gemini) Auth → Auth plugin available
Dependencies
────────────────────────────────────────
✓ AST-Grep CLI → installed
⚠ AST-Grep NAPI → Not installed (optional)
⚠ Comment Checker → Not installed (optional)
Tools & Servers
────────────────────────────────────────
✓ GitHub CLI → 2.86.0 - authenticated as cyberprophet
⚠ LSP Servers → No LSP servers detected
✓ Built-in MCP Servers → 2 built-in servers enabled
○ User MCP Configuration → No user MCP configuration found
○ MCP OAuth Tokens → No OAuth tokens configured
Updates
────────────────────────────────────────
⚠ Version Status → Unable to determine current version
Summary
────────────────────────────────────────
10 passed, 0 failed, 4 warnings, 2 skipped
Total: 16 checks in 2123ms
⚠ All systems operational with warnings.
```

Question Category
Best Practices
Additional Information
If there are any example “reference configurations” or community-recommended patterns (especially for multi-vendor setups), pointers to them would be very helpful.