First off, I want to say how much I appreciate the addition of the `clarify` and `analyze` commands - they’re a major step forward for spec-kit and have already improved the quality of my planning and outputs.
What I’ve started doing manually is asking other LLMs (Gemini, Codex, GitHub Copilot, etc.) to review, critique, and suggest simplifications or mitigations for the implementation plans that Claude (my primary LLM) generates. The process looks roughly like this:

1. Ask Gemini and Codex to review the generated outputs and produce review files like `@review-{yyyymmddhhss}-{model}.md`.
2. Ask Gemini to then review those independently generated reviews, cross-reference them, and verify the specs, tasks, and plans.

I do this by asking Claude to shell out and run the CLIs directly.
This approach has hugely improved the quality and robustness of my plans.
Why this matters
Each base LLM has its own strengths and weaknesses. By combining them in a structured way, I’ve been able to get more reliable, nuanced, and practical plans than by relying on a single model. In my case:

- Gemini CLI provides a generous free allowance for Gmail users.
- Codex CLI offers good paid capacity at ~$20/month.
It feels like a waste not to leverage this compute that’s already available to many users.
Proposed Feature
It would be amazing if `clarify` and `analyze` could support a more systematic way of incorporating multi-model reviews, either natively or via configuration.
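To make the configuration route concrete, here is a purely hypothetical sketch of what such a config might look like - none of these keys exist in spec-kit today:

```yaml
# Hypothetical spec-kit configuration -- every key here is illustrative,
# not part of the current tool.
review:
  models:
    - cli: gemini
    - cli: codex
  output: "review-{timestamp}-{model}.md"
  cross_reference: true   # have one model verify the others' reviews
```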
This would give users the benefit of multi-model collaboration in a clean, integrated workflow, while still preserving spec-kit’s core strengths.