An overview of my opinionated AI-assisted coding workflow and the supporting tools.
- The agent harness largely shouldn’t matter. The process should work with all of them.
- Most AI-assisted coding processes are too complex. They clutter the context window with unnecessary MCP tools, skills, or content from AGENTS.md.
- A small tightly defined and focused context window produces the best results.
- LLMs do not reason, they do not think, they are not intelligent. They are simple text prediction engines. Treat them that way.
- LLMs are non-deterministic. That does not matter as long as the process provides deterministic feedback: compiler warnings as errors, linting, testing, and verifiable acceptance criteria.
- Don't get attached to the code. Be prepared to revert changes and retry with refinements to the context.
- Fast feedback helps. Provide a way for an LLM to get feedback on its work.
- Coding standards and conventions remain useful. LLMs have been trained on code that follows common conventions and to copy examples in their context. When your code aligns with those patterns, you get better results.
- Work on small defined tasks.
- Work with small batch sizes.
- Do the simplest possible thing that meets the requirements.
- Make small atomic commits.
- Work iteratively.
- Refactor when needed.
- Integrate continuously.
- Trust, but verify.
- Leverage tools.
- Don't get attached to the code.
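A quick way to apply the feedback principles above is to give the agent a single entry point that runs every deterministic check. A minimal sketch as a Makefile target (the commands shown are for a hypothetical Rust project; substitute your own formatter, linter, and test runner):

```make
# `make check`: one command the agent runs after every change.
check:
	cargo fmt --check            # formatting
	cargo clippy -- -D warnings  # lint, with warnings treated as errors
	cargo test                   # tests encode the acceptance criteria
```

Telling the agent to run `make check` after each edit gives it the same deterministic pass/fail signal a CI pipeline would, but within seconds.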
- skills/ – contains the skills I install in an agent.
- tools/ – command-line tools I use to set up a project.
- It's all about the context! To get the best results from the coding agent, manage the context: start every task with a clear context and work on small, tightly defined tasks, keeping the context small and tight. Stay out of the "dumb zone", i.e. keep the context window less than 50% full.
- Provide examples. In use, LLMs cannot learn, but they can follow patterns (sometimes referred to as in-context learning). Provide examples of the patterns you want them to follow.
- Provide positive reinforcement; don't tell it off, and don't tell it what not to do. Every instruction or input you provide goes into the context, and LLMs predict based on the context: if you provide examples of what not to do, it predicts based on those examples. As an illustration, consider the instruction "don't think of an elephant". What do you think of? If we want someone not to think of an elephant, a better approach is to instruct them to "think of a dog".
- When writing prompts be specific, clear and precise. Avoid unnecessary words or information that may distract from the specific task.
Prefer AGENTS.md over tool-specific files. AGENTS.md has first-class support across Cursor, GitHub Copilot, Gemini CLI, Windsurf, Aider, Zed, Warp, RooCode, Amp, and a growing list of others.
For these tool-specific files:

- CLAUDE.md — Claude Code (Anthropic)
- JULES.md — Google Jules

or these AI IDE/editor-specific files:

- .cursorrules — Cursor (older format)
- .cursor/rules/ — Cursor (newer format, directory-based)
- .github/copilot-instructions.md — GitHub Copilot
- .windsurf/rules — Windsurf
Write the AGENTS.md and either symlink CLAUDE.md to it or make a single-line CLAUDE.md containing only:
@AGENTS.md
Don't include anything Claude-specific. A simple instruction will achieve the same result:

Refer to AGENTS.md for all project rules.
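Either option can be set up from the shell in the repository root (a sketch; pick one):

```shell
# Option 1: make CLAUDE.md a symlink to AGENTS.md
ln -s AGENTS.md CLAUDE.md

# Option 2 (instead of the symlink): a one-line CLAUDE.md that imports AGENTS.md
# printf '@AGENTS.md\n' > CLAUDE.md
```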
- Greenfield Development – building new software, no legacy users to satisfy.
- Spike – Quick throwaway code to test feasibility.
- Prototyping/MVP – a minimal version of a solution to test a hypothesis.
- Maintenance – Keeping existing systems running.
- Refactoring – Restructuring without changing behaviour.
- Performance Optimisation – Improving speed, memory usage, or efficiency.
- Rewriting – Replacing old code with new.
- Porting – Moving code to new platforms/languages.
- Debugging – Tracing issues through existing code.
- Bug Fixing – Diagnosing and solving defects.
- Testing – Writing unit tests, integration tests, or fixing test suites.
- Code Review – Evaluating others' code for quality, bugs, and standards.
- Documentation – Writing/updating technical docs, comments, READMEs.
- Infrastructure – Managing deployment and infrastructure as code.
- Legacy Code Comprehension – Understanding unfamiliar or old codebases.
- Learning – Learning a new programming language, technology or technique.