A comprehensive, language-agnostic guide for developing software with AI coding assistants.
This repository provides principles, practices, and patterns for developing high-quality software with AI assistance. Whether you're using Claude Code, GitHub Copilot, Cursor, or any other AI coding tool, these guidelines will help you:
- Maintain code quality while leveraging AI's speed
- Ensure correctness through empirical validation
- Manage complexity as AI generates code quickly
- Balance human judgment with AI capabilities
- Deploy safely with proper testing and rollback strategies
- Developers using AI coding assistants in their daily work
- Code reviewers evaluating AI-generated code
- Team leads establishing AI development practices
- AI tools (like Claude Code) that can read and apply these guidelines
Start here:
- AI Development Guide - Comprehensive principles and practices
- Getting Started Guide - (Coming in Phase 2) Orientation and onboarding
- For AI Tools - (Coming in Phase 2) How AI assistants should use this repo
All core principles are covered in the AI Development Guide:
- Empiricism - Test everything, trust evidence over assumptions
- Incrementalism - Small, atomic, reversible changes
- Testing - Test-driven development with AI
- Human-AI Collaboration - Roles and responsibilities
Standalone focused guides coming in Phase 3
Currently Available:
- See AI Development Guide for code review checklists, deployment strategies, and testing practices
Coming in Phase 4:
- Code Review - Detailed review process for AI-generated code
- Deployment - Feature flags, rollback, monitoring
- Testing Strategy - TDD workflow with AI
- Incident Response - Learning from production issues
Stack-specific implementation guidance:
- Python/FastAPI - Python conventions, testing, deployment
More stacks coming soon: Node.js, Rust, Go, Java, and others
Currently Available:
- See AI Development Guide - Anti-Patterns section for common pitfalls and red flags
Coming in Phase 5:
- Comprehensive anti-patterns guide
- Detection guide for code review
Coming in Phase 5:
- PR Template - Structure for AI-assisted pull requests
- Review Checklist - Comprehensive code review guide
- Commit Messages - How to document AI contributions
- Retrospective Template - Blameless incident reviews
In the meantime, see examples in the AI Development Guide
```
ai-development-guide/
├── README.md                   # You are here
├── AI_DEVELOPMENT_GUIDE.md     # Comprehensive guide (language-agnostic)
├── docs/
│   └── stacks/                 # Language/framework specific guides
│       └── python-fastapi/     # Python/FastAPI implementation
│           ├── README.md       # Stack guide
│           └── CLAUDE.md       # Quick reference
└── PROJECT_PLAN.md             # Development roadmap
```
Coming soon: getting-started/, core-principles/, guides/, anti-patterns/, templates/
AI coding assistants are powerful tools that can dramatically increase development velocity, but they require discipline:
- AI generates - Humans verify, validate, and take accountability
- AI suggests - Humans decide based on domain expertise and context
- AI scaffolds - Humans refine, test, and ensure correctness
- Empiricism - All engineering decisions grounded in observable evidence
- Incrementalism - Small, reversible changes over large batch updates
- Human Oversight - Critical review and domain expertise remain non-negotiable
- Continuous Testing - Automated validation catches AI hallucinations and errors
- Transparency - Document what's AI-generated and track decision rationale
AI-assisted development optimizes for learning velocity, not just code generation speed:
Hypothesis → Generate → Test → Measure → Learn → Iterate
The bottleneck shifts from "writing code" to "validating correctness."
- Human defines requirements and success criteria
- AI generates test cases (including edge cases)
- Human reviews tests for completeness
- AI generates implementation to pass tests
- Run tests, iterate until all pass
- Human reviews for security, clarity, maintainability
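As a concrete sketch of this workflow, the tests come first and the implementation follows. The example below is illustrative, not taken from the guide: `slugify` is a hypothetical helper, and the comments map each part to the steps above.

```python
# Steps 1-3: the human defines the behavior; the AI drafts tests
# (including edge cases); the human reviews them for completeness.
import re
import unicodedata


def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("") == ""                            # empty input
    assert slugify("  Trim   Me  ") == "trim-me"        # repeated whitespace
    assert slugify("Crème brûlée!") == "creme-brulee"   # accents and punctuation


# Steps 4-5: the AI generates an implementation and iterates until the
# tests pass. Step 6 is a human review of the final code for clarity.
def slugify(text: str) -> str:
    ascii_text = (
        unicodedata.normalize("NFKD", text)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    return re.sub(r"[^a-z0-9]+", "-", ascii_text.lower()).strip("-")
```

The key point is the ordering: the tests encode the success criteria before any implementation exists, so a plausible-but-wrong AI implementation fails fast.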
Before approving AI-generated code, verify:
- ✅ Correctness - Does it solve the problem? Are edge cases handled?
- ✅ Security - No hardcoded secrets, proper input validation, auth checks
- ✅ Maintainability - Clear naming, follows conventions, appropriate complexity
- ✅ Architecture - Respects boundaries, doesn't introduce coupling
- ✅ Documentation - Docstrings, comments explaining "why", API docs updated
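To make the security items concrete, here is a small Python sketch of the pattern a reviewer should look for. The helpers (`get_api_key`, `set_page_size`) and their limits are hypothetical, chosen only to illustrate the checks:

```python
import os

# Review red flag: API_KEY = "sk-live-abc123"  (hardcoded secret)
# Preferred: read secrets from the environment and fail loudly if missing.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if key is None:
        raise RuntimeError("API_KEY environment variable is not set")
    return key


def set_page_size(raw: str) -> int:
    # Review check: validate untrusted input rather than trusting it.
    size = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= size <= 100:
        raise ValueError("page size must be between 1 and 100")
    return size
```

AI-generated code frequently compiles and "looks right" while skipping exactly these guards, which is why the checklist calls them out explicitly.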
The core principles, philosophy, and practices in this guide apply to any programming language. Concepts like:
- Empirical validation
- Incrementalism and atomic changes
- Test-driven development
- Code review standards
- Deployment practices
...work whether you're writing Python, JavaScript, Rust, Go, or any other language.
For practical implementation, see docs/stacks/ for language and framework-specific guidance:
- Conventions and patterns
- Testing frameworks and examples
- Deployment strategies
- Quick reference for AI tools
Each stack guide follows a consistent structure, making it easy to apply learnings across different technologies.
This guide works with any AI coding tool:
- GitHub Copilot
- Cursor
- Claude Code
- Tabnine
- Amazon CodeWhisperer
- And others
See For AI Tools for general guidance on how AI assistants should read and apply these docs.
Claude Code users get additional guidance on:
- File reference syntax ([filename.py:42](path/to/file.py#L42))
- Tool usage patterns (Read, Edit, Bash, etc.)
- Workflow optimization
Claude-specific guidance coming in Phase 2.
- Read AI_DEVELOPMENT_GUIDE.md - Start with core principles
- Check your stack guide - See Python/FastAPI or your language
- Apply incrementally - Start with code review checklist from the guide
- Test everything - Follow TDD practices outlined in the guide
- Review PROJECT_PLAN.md to see what's coming next
If you're an AI tool reading this repository:
- Read AI_DEVELOPMENT_GUIDE.md first for principles
- Apply core principles when generating code (empiricism, incrementalism, testing)
- Follow stack conventions from docs/stacks/python-fastapi/ or relevant guide
- Mark your contributions with `[ai-assisted]` or `[ai-generated]` tags
- Reference the code review checklist before suggesting large changes
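For example, a commit message for an AI-assisted change might be tagged like this (a suggested format, not a fixed standard; the change described is hypothetical):

```
Add input validation to the signup endpoint [ai-assisted]

Implementation drafted with an AI assistant; tests written and
reviewed by a human before merge.
```

Tagging at the commit level keeps the provenance of AI contributions visible in history and in PR review.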
- Start with AI_DEVELOPMENT_GUIDE.md - Establish shared understanding
- Adopt the code review checklist - Use it for all AI-generated PRs
- Customize for your stack - Create or extend stack guides in docs/stacks/
- Track progress - See PROJECT_PLAN.md for roadmap
- Contribute back - Share improvements and new stack guides
We welcome contributions! Whether you want to:
- Add a new stack guide (Node.js, Rust, Go, etc.)
- Improve existing documentation
- Share code examples or case studies
- Fix typos or broken links
(Contribution guidelines coming soon - see PROJECT_PLAN.md for roadmap)
This project is licensed under the MIT License - see LICENSE file for details.
You are free to:
- Use this guide in your projects
- Modify and adapt for your needs
- Share with your team
- Contribute improvements back to the community
AI introduces unique challenges:
- Speed vs Quality - AI generates code faster than humans can review it
- Plausibility vs Correctness - AI code often looks right but may be subtly wrong
- Hidden Assumptions - AI training data may not match your domain
- Coupling Risk - AI may create dependencies without understanding implications
This guide addresses these AI-specific challenges while reinforcing timeless engineering principles.
Start with the Core Requirements:
- Empirical validation - Always test AI-generated code
- Human review - Never merge without domain expert approval
- Incrementalism - Keep changes small and atomic
- Testing - Require tests for all significant code
Other guidelines are best practices to adopt over time.
Yes! This guide complements standard practices:
- Works with Git workflows (feature branches, PRs)
- Integrates with CI/CD pipelines
- Compatible with Agile/Scrum processes
- Enhances existing code review practices
The core principles are language-agnostic. Start with:
- AI Development Guide for universal concepts
- Core Principles for fundamental practices
- Create a stack guide following the Python/FastAPI example
- Contribute back so others can benefit
- Software Engineering Best Practices
- Test-Driven Development
- Code Review Guidelines
- Deployment Strategies
This guide synthesizes best practices from:
- Software engineering principles (modularity, testing, incrementalism)
- AI safety and human-in-the-loop systems
- Production deployment practices (feature flags, monitoring, rollback)
- Real-world experience with AI coding assistants
- Issues: Open an issue on GitHub
- Discussions: Start a discussion for questions or ideas
- Pull Requests: Contribute improvements
Last Updated: 2025-11-10