A comprehensive collection of patterns, drawn from my experience building software with AI assistance, organized by implementation maturity and development lifecycle phase. These patterns are subject to change as the field evolves.
This repository provides a structured approach to AI-assisted development through four pattern categories:
- Foundation Patterns - Essential patterns for team readiness and basic AI integration
- Development Patterns - Daily practice patterns for AI-assisted coding workflows
- Operations Patterns - CI/CD, security, and production management with AI
- Experimental Patterns - Advanced patterns under active development or evaluation
Important: These phases represent a learning progression for teams new to AI development, not a waterfall approach. Teams with existing DevOps/security expertise should implement patterns continuously across all phases from day one, following a "continuous everything" model.
graph TD
subgraph "Phase 1: Foundation (Weeks 1-2)"
A[AI Readiness Assessment] --> B[Rules as Code]
B --> C[AI Security Sandbox]
C --> D[AI Developer Lifecycle]
A --> E[AI Issue Generation]
D --> F[AI Tool Integration]
end
subgraph "Phase 2: Development (Weeks 3-4)"
D --> G[Specification Driven Development]
D --> H[AI Plan-First Development]
G --> I[Progressive AI Enhancement]
H --> I
I --> J[AI Choice Generation]
G --> K[Atomic Task Decomposition]
K --> L[Parallelized AI Coding Agents]
end
subgraph "Phase 3: Operations (Weeks 5-6)"
C --> M[Policy-as-Code Generation]
M --> N[Security Scanning Orchestration]
L --> O[Performance Baseline Management]
D --> P[AI-Driven Traceability]
end
Continuous Implementation Note: Security patterns (AI Security Sandbox, AI Security & Compliance) and deployment patterns should be implemented continuously throughout development, not delayed until specific phases. The dependencies shown represent learning prerequisites, not deployment gates.
Pattern | Maturity | Type | Description | Dependencies |
---|---|---|---|---|
AI Readiness Assessment | Beginner | Foundation | Systematic evaluation of codebase and team readiness for AI integration | None |
Rules as Code | Beginner | Foundation | Version and maintain AI coding standards as explicit configuration files | AI Readiness Assessment |
AI Security Sandbox | Beginner | Foundation | Run AI tools in isolated environments without access to secrets or sensitive data | Rules as Code |
AI Developer Lifecycle | Intermediate | Workflow | Structured 9-stage process from problem definition through deployment with AI assistance | Rules as Code, AI Security Sandbox |
AI Tool Integration | Intermediate | Foundation | Connect AI systems to external data sources, APIs, and tools for enhanced capabilities beyond prompt-only interactions | AI Security Sandbox, AI Developer Lifecycle |
AI Issue Generation | Intermediate | Foundation | Generate Kanban-optimized work items (4-8 hours max) from requirements using AI to ensure continuous flow with clear acceptance criteria and dependencies | AI Readiness Assessment |
Specification Driven Development | Intermediate | Development | Use executable specifications to guide AI code generation with clear acceptance criteria before implementation | AI Developer Lifecycle |
AI Plan-First Development | Beginner | Development | Generate explicit implementation plans before writing code to improve quality, reduce iterations, and enable better collaboration | None |
Progressive AI Enhancement | Beginner | Development | Build complex features through small, deployable iterations rather than big-bang generation | None |
AI Choice Generation | Intermediate | Development | Generate multiple implementation options for exploration and comparison rather than accepting first AI solution | Progressive AI Enhancement |
Atomic Task Decomposition | Intermediate | Development | Break complex features into atomic, independently implementable tasks for parallel AI agent execution | Progressive AI Enhancement |
Parallelized AI Coding Agents | Advanced | Development | Run multiple AI agents concurrently on isolated tasks or environments to maximize development speed and exploration | Atomic Task Decomposition |
AI Context Persistence | Intermediate | Development | Manage AI context as a finite resource through structured memory schemas, prompt pattern capture, and session continuity protocols | Rules as Code |
Constraint-Based AI Development | Beginner | Development | Give AI specific constraints to prevent over-engineering and ensure focused solutions | None |
Observable AI Development | Intermediate | Development | Strategic logging and debugging that makes system behavior visible to AI | AI Developer Lifecycle |
AI-Driven Refactoring | Intermediate | Development | Systematic code improvement using AI to detect and resolve code smells with measurable quality metrics | Rules as Code |
AI-Driven Architecture Design | Intermediate | Development | Apply architectural frameworks (DDD, Well-Architected, 12-Factor) using AI to ensure sound system design | AI Developer Lifecycle, Rules as Code |
AI-Driven Traceability | Intermediate | Development | Maintain automated links between requirements, specifications, tests, implementation, and documentation using AI | AI Developer Lifecycle |
Security & Compliance | | Operations | Category containing security and compliance patterns | |
Policy-as-Code Generation | Advanced | Operations | Transform compliance requirements into executable Cedar/OPA policy files with AI assistance | AI Security Sandbox |
Security Scanning Orchestration | Intermediate | Workflow | Aggregate multiple security tools and use AI to summarize findings for actionable insights | AI Security Sandbox |
Deployment Automation | | Operations | Category containing deployment and pipeline patterns | |
Monitoring & Maintenance | | Operations | Category containing monitoring and maintenance patterns | |
Performance Baseline Management | Advanced | Operations | Establish intelligent performance baselines and configure monitoring thresholds automatically | AI Tool Integration |
Patterns are classified by implementation complexity and prerequisite knowledge:
Beginner: Basic AI tool usage with minimal setup required
- Prerequisites: Basic programming skills, access to AI tools
- Complexity: Single tool usage, straightforward prompts
- Examples: Simple code generation, basic constraint setting
Intermediate: Multi-tool coordination and process integration
- Prerequisites: Development workflow experience, team coordination
- Complexity: Multiple tools, orchestration patterns, quality gates
- Examples: Testing strategies, parallel workflows, choice generation
Advanced: Complex systems with enterprise concerns
- Prerequisites: Architecture experience, security/compliance knowledge
- Complexity: Multi-agent systems, advanced safety, compliance automation
- Examples: Enterprise security, compliance automation, chaos engineering
The patterns use different task sizing approaches based on their purpose and context:
graph TD
A[Feature Request] --> B[AI Issue Generation]
B --> C[4-8 Hour Work Items]
C --> D{Parallel Implementation?}
D -->|Yes| E[Atomic Task Decomposition]
D -->|No| F[Progressive Enhancement]
E --> G[1-2 Hour Atomic Tasks]
F --> H[Daily Deployment Cycles]
G --> I[Parallel Agent Execution]
H --> J[Sequential Enhancement]
C --> K[Standard Kanban Flow]
Task Sizing Hierarchy:
- AI Issue Generation (4-8 hours): Standard Kanban work items for continuous flow and rapid feedback
- Atomic Task Decomposition (1-2 hours): Ultra-small tasks for parallel agent execution without conflicts
- Progressive AI Enhancement (Daily cycles): Deployment-focused iterations that may contain multiple work items
When to Use Each Approach:
- Use AI Issue Generation for standard team development with human developers
- Use Atomic Task Decomposition when implementing with parallel AI agents
- Use Progressive Enhancement when prioritizing rapid market feedback over task granularity
Pattern Differentiation:
- AI Issue Generation: Creates Kanban work items (4-8 hours) for human team workflows
- Atomic Task Decomposition: Creates ultra-small tasks (1-2 hours) for parallel AI agents
- Progressive AI Enhancement: Creates deployment cycles (daily) focused on user feedback
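To make the sizing rules above concrete, here is a minimal Python sketch (the helper name and thresholds simply restate the hierarchy above; it is illustrative, not part of any pattern's tooling) that routes a proposed piece of work to the appropriate approach:
def choose_sizing_approach(estimated_hours: float,
                           parallel_agents: bool,
                           rapid_feedback_priority: bool = False) -> str:
    """Route a work item to the sizing pattern suggested by the hierarchy above."""
    if parallel_agents:
        if estimated_hours > 2:
            return "Split further: Atomic Task Decomposition targets 1-2 hour tasks"
        return "Atomic Task Decomposition: 1-2 hour task for parallel agents"
    if rapid_feedback_priority:
        return "Progressive AI Enhancement: fold into a daily deployment cycle"
    if estimated_hours > 8:
        return "Split further: AI Issue Generation targets 4-8 hour work items"
    return "AI Issue Generation: standard Kanban work item (4-8 hours)"
# Example usage
print(choose_sizing_approach(6, parallel_agents=False))
print(choose_sizing_approach(1.5, parallel_agents=True))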
Choose the right patterns based on your team's context, project requirements, and AI development maturity:
graph TD
A[Starting AI Development] --> B{Team AI Experience?}
B -->|New to AI| C[Start with Foundation Patterns]
B -->|Some Experience| D[Focus on Development Patterns]
B -->|Advanced| E[Implement Operations Patterns]
C --> F[AI Readiness Assessment]
F --> G[Rules as Code]
G --> H[AI Security Sandbox]
H --> I{Need Structured Development?}
I -->|Yes| J[AI Developer Lifecycle]
I -->|No| K[AI Plan-First Development]
K --> L[Progressive AI Enhancement]
D --> M{Multiple Developers/Agents?}
M -->|Yes| N[Parallelized AI Coding Agents]
M -->|No| O[Specification Driven Development]
N --> P[Atomic Task Decomposition]
O --> Q[AI-Driven Traceability]
E --> R{Enterprise Requirements?}
R -->|Compliance| S[Policy-as-Code Generation]
R -->|Scale| T[Performance Baseline Management]
R -->|Quality| U[Technical Debt Forecasting]
For New Teams (First 2 weeks):
- AI Readiness Assessment - Evaluate current state
- Rules as Code - Establish consistent standards
- AI Security Sandbox - Ensure safe experimentation
- AI Plan-First Development - Learn structured planning approaches
- Progressive AI Enhancement - Start with simple iterations
For Development Teams (Weeks 3-8):
- AI Developer Lifecycle - Structured development process
- Specification Driven Development - Quality-focused development
- AI Issue Generation - Organized work breakdown
- Comprehensive AI Testing Strategy - Quality assurance
For Parallel Implementation:
- Atomic Task Decomposition - Ultra-small independent tasks
- AI Workflow Orchestration - Agent coordination
- AI Review Automation - Automated integration
- AI Security Sandbox - Enhanced with parallel safety
For Enterprise/Production (Month 2+):
- Policy-as-Code Generation - Compliance automation
- Security Scanning Orchestration - Integrated security
- Performance Baseline Management - Production monitoring
- Technical Debt Forecasting - Proactive maintenance
MVP/Startup Projects:
- Primary: Progressive AI Enhancement, AI Choice Generation
- Secondary: AI Security Sandbox, Constraint-Based AI Development
- Avoid: Complex orchestration patterns until scale demands
Enterprise Applications:
- Primary: AI Developer Lifecycle, Policy-as-Code Generation
- Secondary: AI-Driven Traceability, Security Scanning Orchestration
- Essential: All foundation patterns before development patterns
Research/Experimental Projects:
- Primary: AI Choice Generation, Observable AI Development
- Secondary: AI Context Persistence, Context Window Optimization
- Focus: Learning and exploration over production readiness
High-Scale Production:
- Primary: Parallelized AI Coding Agents, Performance Baseline Management
- Secondary: Chaos Engineering Scenarios, Incident Response Automation
- Critical: All security and monitoring patterns
Solo Teams:
- Focus on Progressive AI Enhancement and AI Choice Generation
- Add Observable AI Development for debugging
- Skip parallel orchestration patterns
Two-Pizza Teams (small, autonomous teams):
- Implement AI Issue Generation for coordination
- Use Specification Driven Development for quality
- Consider AI Tool Integration for role clarity
- Full AI Developer Lifecycle implementation
- Parallelized AI Coding Agents for complex features
- AI-Driven Traceability for quality gates
Multi Two-Pizza Team Organizations:
- Atomic Task Decomposition for parallel work across teams
- AI-Driven Traceability for coordination at scale
- All Operations Patterns for organizational management
Cloud-Native Applications:
- Emphasize Policy-as-Code Generation and Compliance Evidence Automation
- Implement Drift Detection & Remediation for infrastructure
- Use AI-Guided Blue-Green Deployment for safe releases
On-Premise Systems:
- Focus on AI Security Sandbox with network isolation
- Implement AI Context Persistence for institutional knowledge
- Use Technical Debt Forecasting for maintenance planning
Microservices Architecture:
- Parallelized AI Coding Agents for service coordination
- Observable AI Development across service boundaries
- Performance Baseline Management for distributed monitoring
Monolithic Applications:
- Progressive AI Enhancement for gradual modernization
- AI-Driven Refactoring for code quality improvement
- Constraint-Based AI Development to prevent over-engineering
Foundation patterns establish the essential infrastructure and team readiness required for successful AI-assisted development. These patterns must be implemented first as they enable all subsequent patterns.
Maturity: Beginner
Description: Systematic evaluation of codebase and team readiness for AI-assisted development before implementing AI patterns.
Related Patterns: Rules as Code, AI Issue Generation
Assessment Framework
graph TD
A[Codebase Assessment] --> B[Team Assessment]
B --> C[Infrastructure Assessment]
C --> D[Readiness Score]
D --> E[Implementation Plan]
Codebase Readiness Checklist
## Code Quality Prerequisites
□ Consistent code formatting and style guide
□ Comprehensive test coverage (>80% for critical paths)
□ Clear separation of concerns and modular architecture
□ Documented APIs and interfaces
□ Version-controlled configuration and secrets management
## Documentation Standards
□ README with setup and development instructions
□ API documentation (OpenAPI/Swagger)
□ Architecture decision records (ADRs)
□ Coding standards and conventions documented
□ Deployment and operational procedures
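The checklist can be scored mechanically to feed the readiness score shown in the assessment framework; a minimal Python sketch, assuming simple pass/fail answers per item (the thresholds and recommendation text are illustrative, not part of the pattern):
def readiness_score(answers: dict) -> tuple:
    """Turn pass/fail checklist answers into a readiness percentage and next step."""
    score = 100 * sum(answers.values()) / len(answers)
    if score >= 80:
        plan = "Ready: begin with Rules as Code and AI Security Sandbox"
    elif score >= 50:
        plan = "Partial: close documentation and test-coverage gaps while piloting low-risk patterns"
    else:
        plan = "Not ready: address code quality prerequisites before broad AI adoption"
    return score, plan
# Example usage with a subset of the checklist above
answers = {
    "consistent_formatting": True,
    "critical_path_test_coverage": False,
    "documented_apis": True,
    "readme_with_setup": True,
    "adrs_present": False,
}
print(readiness_score(answers))  # (60.0, 'Partial: ...')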
Anti-pattern: Rushing Into AI Starting AI adoption without proper assessment leads to inconsistent practices, security vulnerabilities, and team frustration.
Maturity: Beginner
Description: Version and maintain AI coding standards as explicit configuration files that persist across sessions and team members.
Related Patterns: AI Developer Lifecycle, AI Context Persistence
Standardized Project Structure
project/
├── .ai/ # AI configuration directory
│ ├── rules/ # Modular rule sets
│ │ ├── security.md # Security standards
│ │ ├── testing.md # Testing requirements
│ │ ├── style.md # Code style guide
│ │ └── architecture.md # Architectural patterns
│ ├── prompts/ # Reusable prompt templates
│ │ ├── implementation.md # Implementation prompts
│ │ ├── review.md # Code review prompts
│ │ └── testing.md # Test generation prompts
│ └── knowledge/ # Captured patterns and gotchas
│ ├── successful.md # Proven successful patterns
│ └── failures.md # Known failure patterns
├── .cursorrules # Cursor IDE configuration
├── CLAUDE.md # Claude Code session context
└── .windsurf/ # Windsurf configuration
└── rules.md
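One way to consume this layout is to concatenate the versioned rule files into a single prompt preamble at the start of every AI session; a minimal sketch assuming the `.ai/rules/` directory above (the `load_rules` helper is illustrative, not a tool shipped with this repository):
from pathlib import Path
def load_rules(rules_dir: str = ".ai/rules") -> str:
    """Concatenate the versioned rule files into one prompt preamble."""
    sections = []
    for rule_file in sorted(Path(rules_dir).glob("*.md")):
        sections.append(f"## {rule_file.stem}\n{rule_file.read_text()}")
    return "\n\n".join(sections)
# Example usage: prepend shared rules to any implementation prompt
system_prompt = load_rules() + "\n\nFollow these rules for all generated code."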
Complete Example: See examples/rules-as-code/ for:
- Comprehensive development workflow rules and standards
- Pipeline automation and CI/CD rules
- Code quality standards and enforcement guidelines
- Claude Code configuration for rules-as-code implementation
Anti-pattern: Context Drift Each developer maintains their own prompts and preferences, leading to inconsistent code across the team.
Maturity: Beginner
Description: Run AI tools in isolated environments without access to secrets or sensitive data to prevent credential leaks and maintain security compliance.
Related Patterns: Security & Compliance Patterns, Rules as Code
Core Security Implementation
# Basic AI isolation with complete network isolation
services:
ai-development:
network_mode: none # Zero network access
cap_drop: [ALL] # No system privileges
volumes:
- ./src:/workspace/src:ro # Read-only source code
# DO NOT mount ~/.aws, .env, secrets/, etc.
Complete Example: See examples/ai-security-sandbox/ for:
- Complete Docker isolation configurations for single and multi-agent setups
- Resource locking and emergency shutdown procedures
- Security monitoring and violation detection
- Multi-agent coordination with conflict resolution
Production Implementation: Amazon Bedrock AgentCore provides enterprise-grade implementation of these security controls:
- Isolated runtimes: microVM session isolation for complete workload separation
- Identity layer: IAM integration for fine-grained access control
- Secure tool gateway: MCP-compatible interface with controlled tool access
- Code execution sandbox: Safe environment for AI-generated code execution
- Controlled web browsing: Network access limited to approved domains
- Observability & guardrails: CloudWatch/CloudTrail logging with Bedrock Guardrails integration
Anti-pattern: Unrestricted Access Allowing AI tools full system access risks credential leaks, data breaches, and security compliance violations.
Anti-pattern: Shared Agent Workspaces Allowing multiple parallel agents to write to the same directories creates race conditions, file conflicts, and unpredictable behavior that can corrupt the development environment.
Maturity: Intermediate
Description: Structured 9-stage process from problem definition through deployment with AI assistance.
Related Patterns: Rules as Code, Specification Driven Development, AI Plan-First Development, Atomic Task Decomposition, Observable AI Development
Workflow Interaction Sequence
sequenceDiagram
participant D as Developer
participant AI as AI Assistant
participant S as System/CI
participant T as Tests
participant M as Monitoring
Note over D,M: Stage 1-3: Problem → Plan → Requirements
D->>AI: Problem Definition (e.g., JWT Authentication)
AI->>D: Technical Architecture Plan
D->>AI: Requirements Clarification
AI->>D: API Specs + Kanban Tasks + Security Requirements
Note over D,M: Stage 4-5: Issues → Specifications
D->>AI: Generate Executable Tests
AI->>T: Gherkin Scenarios + API Tests + Security Tests
T->>D: Test Suite Ready (Performance Criteria: <200ms)
Note over D,M: Stage 6: Implementation
D->>AI: Implement Following Specifications
AI->>S: Code + Tests + Error Handling + Logging
S->>D: Implementation Results
Note over D,M: Stage 7-9: Testing → Deployment → Monitoring
D->>S: Run All Tests
S->>D: Test Results + Security Scan + Performance Benchmark
alt Tests Pass
S->>S: Deploy to Production
S->>M: Setup Monitoring Alerts
M->>D: Deployment Complete + Monitoring Active
else Tests Fail
S->>D: Failure Report
D->>AI: Fix Issues
AI->>S: Updated Implementation
end
Note over D,M: Continuous Monitoring
M->>D: Performance Alerts + Security Events
Core Workflow Implementation
# Stage 1-3: Problem → Plan → Requirements
ai "Analyze request → Generate architecture, tasks, API specs"
# Stage 4-5: Issues → Specifications
ai "Generate executable tests → Gherkin scenarios, API tests, security tests"
# Stage 6: Implementation
ai "Implement following specifications → Use tests as guide, security best practices"
# Stage 7-9: Testing → Deployment → Monitoring
ai "Complete QA → Run tests, security scan, deploy, monitor"
Complete Implementation: See examples/ai-development-lifecycle/ for full 9-stage workflow scripts, detailed prompts for each stage, enhanced implementation techniques (Five-Try Rule, markdown iteration, function decomposition), and integration with CI/CD pipelines.
Anti-pattern: Ad-Hoc AI Development Jumping straight to coding with AI without proper planning, requirements, or testing strategy. Also avoid continuing with the same AI approach after 3-4 failures without decomposing the problem or changing strategy.
Maturity: Intermediate
Description: Connect AI systems to external data sources, APIs, and tools for enhanced capabilities beyond prompt-only interactions.
Related Patterns: AI Security Sandbox, AI Developer Lifecycle, Observable AI Development
Core Concept
Modern AI development requires more than chat-based interactions. AI systems become significantly more capable when connected to real-world data sources and tools. This pattern demonstrates the architectural shift from isolated prompt-only AI to tool-augmented AI systems.
Implementation Overview
# Core tool-augmented AI system with security controls
class ToolAugmentedAI:
def __init__(self, config_path: str = ".ai/tools.json"):
self.available_tools = {
"database_query": self._query_database, # Read-only SQL queries
"file_operations": self._file_operations, # Controlled file access
"api_requests": self._api_requests, # Allowlisted HTTP requests
"system_info": self._system_info # Safe system information
}
def execute_with_tools(self, ai_request: str, tool_calls: list) -> dict:
"""Execute AI request with secure tool access"""
# Process tool calls with security validation
# Return structured results with error handling
Tool Categories & Security
- Database Access: Read-only queries with operation whitelisting (`SELECT`, `WITH` only)
- File Operations: Path-restricted read/write within configured directories
- API Integration: HTTP requests limited to allowlisted domains with timeouts
- System Information: Safe environment data without sensitive details
Configuration Example
{
"allowed_apis": ["api.github.com", "api.openweathermap.org"],
"file_access_paths": ["./data/", "./logs/", "./generated/"],
"max_query_results": 100,
"security": {
"read_only_database": true,
"api_rate_limits": true,
"file_size_limits": "10MB"
}
}
Model Context Protocol (MCP) Integration
This pattern can be implemented using Anthropic's Model Context Protocol (MCP) for standardized tool integration across AI systems:
{
"mcp_servers": {
"filesystem": {
"command": "npx",
"args": ["@modelcontextprotocol/server-filesystem", "./data"]
},
"sqlite": {
"command": "npx",
"args": ["@modelcontextprotocol/server-sqlite", "app_data.db"]
}
}
}
What Tool Integration Enables
- Real-time data access: AI queries current database state, not training data
- File system interaction: Read logs, write generated code, manage project files
- API integration: Fetch live data from external services and APIs
- System awareness: Access to current environment state and configuration
- Enhanced context: AI decisions based on actual system state, not assumptions
Complete Implementation
See examples/ai-tool-integration/ for:
- Full Python implementation with security controls
- Configuration examples and MCP integration
- Usage patterns and deployment guidelines
- Integration with AI Security Sandbox
Anti-pattern: Prompt-Only AI Development Attempting to solve complex data analysis, system integration, or real-time problems using only natural language prompts without providing AI access to actual data sources, APIs, or system tools. This leads to hallucinated responses, outdated information, and inability to interact with real systems.
Maturity: Intermediate
Description: Generate Kanban-optimized work items (4-8 hours max) from requirements using AI to ensure continuous flow with clear acceptance criteria and dependencies.
Related Patterns: AI Readiness Assessment, Specification Driven Development
Issue Generation Framework
graph TD
A[Requirements Document] --> B[AI Feature Analysis]
B --> C[Work Item Splitting]
C --> D{<8 hours?}
D -->|No| E[Split Further]
E --> C
D -->|Yes| F[Story Generation]
F --> G[Acceptance Criteria]
G --> H[Cycle Time Target]
H --> I[Dependency Mapping]
I --> J[Kanban Card Creation]
Core Principles
- Kanban Optimization: Each work item sized for <8 hours to ensure continuous flow
- AI-Assisted Decomposition: Use AI to break down requirements into implementable tasks
- Traceability Integration: Connect issues to implementation files and CI workflows
- Dependency Mapping: Establish clear relationships between work items and epics
- Acceptance-Driven: Each task includes specific, testable acceptance criteria
Work Item Attributes
Generated issues must include:
- Title: Specific, actionable description of the work
- Cycle Time Target: Estimated completion time (4-8 hours)
- Acceptance Criteria: Testable conditions for completion
- File Scope: Which files will be added, updated, or removed
- CI Requirements: Test coverage, pipeline steps, quality gates
- Dependencies: Blocking and enabling relationships with other issues
Epic Relationship Management
- Bidirectional Linking: Parent-child references maintained automatically
- Progress Tracking: Epic completion calculated from subissue status
- Dependency Validation: Automated checking for circular dependencies (see the sketch below)
- Status Propagation: Subissue changes update epic progress
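The circular-dependency check mentioned above can be a small graph traversal over the generated issues; a minimal Python sketch, assuming each issue ID maps to the IDs it depends on (the data shape and issue IDs are illustrative):
def find_circular_dependencies(issues: dict) -> list:
    """Depth-first search for dependency cycles among generated work items."""
    visiting, visited, cycles = set(), set(), []
    def visit(issue_id, path):
        if issue_id in visiting:
            cycles.append(" -> ".join(path + [issue_id]))
            return
        if issue_id in visited:
            return
        visiting.add(issue_id)
        for dep in issues.get(issue_id, []):
            visit(dep, path + [issue_id])
        visiting.discard(issue_id)
        visited.add(issue_id)
    for issue in issues:
        visit(issue, [])
    return cycles
# Example: AUTH-003 depends on AUTH-001, which (incorrectly) depends back on AUTH-003
issues = {"AUTH-001": ["AUTH-003"], "AUTH-002": [], "AUTH-003": ["AUTH-001", "AUTH-002"]}
print(find_circular_dependencies(issues))  # ['AUTH-001 -> AUTH-003 -> AUTH-001']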
Implementation Examples: See examples/ai-issue-generation/ for detailed AI prompts, epic breakdown workflows, CI integration patterns, and traceability implementations.
"If a task takes more than one day, split it." – Kanban Guide, Lean Kanban University
"Small, frequent deliveries expose issues early and keep teams aligned." – Agile Alliance, Kanban Glossary
Anti-pattern: Vague Issue Generation Creating generic tasks without specific acceptance criteria, proper sizing, or clear dependencies leads to scope creep and estimation errors.
Anti-pattern: Missing CI Integration Creating issues without CI workflow integration, file tracking, or traceability requirements leads to disconnected development cycles and poor visibility into implementation progress.
Anti-pattern Examples:
❌ "Fix the login page"
❌ "Make the dashboard better"
❌ "Add some tests"
❌ "AUTH-002: Implement password validation" (no file tracking or CI requirements)
âś… "Add OAuth 2.0 token validation endpoint (8 hours)"
âś… "Implement dashboard metric WebSocket connection (6 hours)"
âś… "Write unit tests for user service login method (4 hours)"
âś… "AUTH-002: Password validation service with CI integration"
- Files: src/auth/validators.py, tests/test_validators.py
- Coverage: 95%, unit + integration tests
- CI: lint, test, security-scan must pass
Development patterns provide tactical approaches for day-to-day AI-assisted coding workflows, focusing on quality, maintainability, and team collaboration.
Maturity: Intermediate
Description: Use executable specifications to guide AI code generation with clear acceptance criteria before implementation.
Core Principle: Precision Enables Productivity
SpecDriven AI combines three key elements:
- Machine-readable specifications with unique identifiers and authority levels
- Rigorous Test-Driven Development with coverage tracking and automated validation
- AI-powered implementation with persistent context through structured specifications
Key Innovation: Authority Level System
Specifications use authority levels to resolve conflicts and establish precedence:
- `authority=system`: Core business logic and security requirements (highest precedence)
- `authority=platform`: Infrastructure and technical architecture decisions
- `authority=feature`: User interface and experience requirements (lowest precedence)
When requirements conflict, higher authority levels take precedence, enabling clear decision-making for AI implementation.
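A minimal Python sketch of how a validator might apply this precedence when two requirements conflict (the `ux_defaults` anchor and the helper are illustrative, not the pattern's spec_validator.py):
AUTHORITY_PRECEDENCE = {"system": 3, "platform": 2, "feature": 1}
def resolve_conflict(requirements: list) -> dict:
    """Pick the requirement with the highest authority level when specs disagree."""
    return max(requirements, key=lambda r: AUTHORITY_PRECEDENCE[r["authority"]])
# Example: a feature-level convenience default conflicts with a platform-level rule
conflict = [
    {"anchor": "input_validation", "authority": "platform",
     "text": "Reject invalid AWS service names with clear error messages"},
    {"anchor": "ux_defaults", "authority": "feature",
     "text": "Accept any service name and warn instead of failing"},
]
print(resolve_conflict(conflict)["anchor"])  # input_validation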
Related Patterns: AI Developer Lifecycle, AI Tool Integration, Comprehensive AI Testing Strategy, Observable AI Development
SpecDriven AI Workflow
graph TD
A[Machine-Readable Specifications<br/>with Authority Levels] --> B[Coverage Tracking<br/>& Validation]
B --> C[AI Implementation<br/>with Ephemeral Prompts]
C --> D[Automated Testing<br/>& Compliance Check]
D --> E{Specs Pass?}
E -->|No| F[Refine Prompts<br/>Not Specs]
F --> C
E -->|Yes| G[Coverage Report<br/>& Deployment]
G --> H[Specification Persistence<br/>for Regression]
style A fill:#e1f5e1
style B fill:#e1f5e1
style H fill:#e1f5e1
style C fill:#ffe6e6
style F fill:#ffe6e6
Core Implementation
Machine-Readable Specification with Authority Levels
# IAM Policy Generator Specification {#iam_policy_gen}
## CLI Requirements {#cli_requirements authority=system}
The system MUST provide a command-line interface that:
- Accepts policy type via `--policy-type` flag
- Validates input parameters against AWS IAM constraints
- Generates syntactically correct IAM policy JSON [^test_iam_syntax]
- Returns exit code 0 for success, 1 for validation errors
## Input Validation {#input_validation authority=platform}
The system MUST:
- Reject invalid AWS service names with clear error messages
- Validate resource ARN format before policy generation
- Implement rate limiting for API calls [^test_rate_limit]
[^test_iam_syntax]: tests/test_iam_policy_syntax.py
[^test_rate_limit]: tests/test_rate_limiting.py
Automated Coverage Tracking
# Run specification compliance validation
pytest --cov=src --cov-report=html --spec-coverage
python spec_validator.py --check-coverage --authority-conflicts
# Output shows specification coverage
# Specification Coverage Report:
# ✅ cli_requirements: 100% (3/3 tests linked)
# ✅ input_validation: 85% (6/7 tests linked)
# ⚠️ Missing test: [^test_malformed_arn] in line 23
Tooling Integration
# Pre-commit hook validates specification compliance
# .pre-commit-config.yaml
repos:
- repo: local
hooks:
- id: spec-coverage
name: Specification Coverage Check
entry: python spec_validator.py --check-coverage
language: python
pass_filenames: false
# Git workflow with specification traceability
git commit -m "feat: implement rate limiting [spec:rl2c]
Implements rate limiting requirement from input_validation
section. Closes specification anchor #input_validation.
Coverage: 94% (31/33 spec requirements covered)"
Key Benefits
- Authority-based conflict resolution prevents requirement ambiguity
- Automated coverage tracking ensures no specifications are missed
- AI tool independence through persistent, machine-readable requirements
- Precise traceability from specification anchors to test implementations
- Living documentation that evolves with the system
Complete Implementation
See examples/specification-driven-development/ for:
- Complete IAM Policy Generator implementation
- spec_validator.py tool for automated compliance checking
- Pre-commit hooks and Git workflow integration
- Full specification examples with authority levels
- Coverage tracking and reporting tools
Anti-pattern: Implementation-First AI Writing code with AI first, then trying to retrofit tests, resulting in tests that mirror implementation rather than specify behavior.
Anti-pattern: Prompt Hoarding Saving collections of prompts as if they were specifications. Prompts are implementation details; specifications are behavioral contracts.
Maturity: Beginner
Description: Generate explicit implementation plans before writing code to improve quality, reduce iterations, and enable better collaboration.
Related Patterns: AI Developer Lifecycle, Specification Driven Development, Progressive AI Enhancement, AI Choice Generation
Core Principle: Think Before You Code
Modern AI coding tools provide planning capabilities that allow developers to iterate on implementation approaches before writing any code. This pattern leverages these planning features to:
- Reduce implementation iterations by validating approach upfront
- Improve code quality through structured thinking
- Enable better collaboration via shareable plans
- Minimize context switching between planning and execution
Planning Workflow
graph TD
A[Problem Statement] --> B[Generate Initial Plan]
B --> C[Review & Refine Plan]
C --> D{Plan Approved?}
D -->|No| E[Iterate on Plan]
E --> C
D -->|Yes| F[Execute Implementation]
F --> G[Validate Against Plan]
G --> H{Meets Plan?}
H -->|No| I[Adjust Implementation]
I --> F
H -->|Yes| J[Complete]
style A fill:#e1f5e1
style C fill:#e1f5e1
style F fill:#ffe6e6
style J fill:#e1f5e1
Core Implementation
1. Plan Generation Phase
# Example planning prompt structure
CONTEXT: "Building user authentication for SaaS application"
REQUIREMENTS: "JWT tokens, password reset, rate limiting"
CONSTRAINTS: "Must integrate with existing user table, 2-hour time limit"
REQUEST: "Generate step-by-step implementation plan with:
- Database changes needed
- API endpoints to create/modify
- Security considerations
- Testing approach
- Rollback strategy"
2. Plan Review and Iteration
# Generated Plan Review Checklist
### Technical Approach
- [ ] Database schema changes are backwards compatible
- [ ] API design follows existing conventions
- [ ] Security measures address OWASP top 10
- [ ] Performance impact is minimal
### Implementation Strategy
- [ ] Tasks are broken into deployable increments
- [ ] Dependencies are clearly identified
- [ ] Rollback plan is feasible
- [ ] Testing strategy covers edge cases
### Resource Requirements
- [ ] Time estimate is realistic
- [ ] Required permissions are available
- [ ] External dependencies are identified
3. Execution with Plan Validation
# During implementation, validate against plan
echo "âś“ Step 1: Created user_sessions table (matches plan)"
echo "âś“ Step 2: Added JWT service (matches plan)"
echo "âš Step 3: Rate limiting - using Redis instead of in-memory (plan deviation documented)"
Tool-Agnostic Planning Approach
Planning Session Structure
## 1. Problem Definition (2-3 sentences)
Clear statement of what needs to be built and why
## 2. Constraints & Requirements
- Technical constraints (existing systems, performance, security)
- Business requirements (timeline, user experience, compliance)
- Resource constraints (team size, expertise, budget)
## 3. Implementation Options
- Option A: [Brief description, pros/cons, time estimate]
- Option B: [Brief description, pros/cons, time estimate]
- Recommended: [Choice with justification]
## 4. Detailed Plan
- [ ] Step 1: [Concrete action with acceptance criteria]
- [ ] Step 2: [Concrete action with acceptance criteria]
- [ ] Step 3: [Concrete action with acceptance criteria]
## 5. Validation Strategy
How to verify each step works and overall solution meets requirements
When to Use Plan-First Development
- Complex Features: Multi-step implementations requiring coordination
- Unknown Domains: Working in unfamiliar technologies or business areas
- Team Collaboration: When multiple developers need to understand the approach
- High-Stakes Changes: Security, performance, or business-critical modifications
- Learning Contexts: When using AI to explore new implementation approaches
Complete Implementation
See examples/ai-plan-first-development/ for:
- Tool-specific planning examples (Claude Code, Cursor)
- Planning templates and checklists
- Markdown iteration techniques and stakeholder review cycles
- Integration with existing development workflows
- Plan validation and iteration strategies
Anti-pattern: Blind Code Generation Immediately jumping to code generation without understanding the problem scope, constraints, or implementation options, leading to over-engineered or incorrect solutions.
Anti-pattern: Analysis Paralysis Spending excessive time refining plans without moving to implementation, missing opportunities for rapid feedback and iterative improvement.
Maturity: Beginner
Description: Build complex features through small, deployable iterations rather than big-bang generation.
Related Patterns: AI Plan-First Development, AI Developer Lifecycle, Constraint-Based AI Development, AI Choice Generation, AI-Driven Architecture Design
Examples Building authentication progressively:
# Day 1: Minimal login
"Create POST /login that returns 200 for admin/admin, 401 otherwise"
→ Deploy
# Day 2: Real password check
"Modify login to check passwords against users table. Keep existing API."
→ Deploy
# Day 3: Add security
"Add bcrypt hashing to login. Support both hashed and plain passwords temporarily."
→ Deploy
# Day 4: Modern tokens
"Replace session with JWT. Keep session endpoint for backward compatibility."
→ Deploy
Developer Review Required: Each iteration requires developer review and testing of AI-generated code before deployment.
When to Use Progressive AI Enhancement
- MVP Development: When you need to get to market quickly with minimal features
- Uncertain Requirements: When requirements are likely to change based on user feedback
- Risk Mitigation: When you want to reduce the risk of large, complex implementations
- Continuous Delivery: When you have automated deployment and want rapid iterations
- Learning Projects: When the team is learning new technologies or domains
Anti-pattern: Big Bang Generation Asking AI to "create a complete user management system" results in 5000 lines of coupled, untested code that takes days to review and debug.
Maturity: Intermediate
Description: Generate multiple implementation options for exploration and comparison rather than accepting the first AI solution.
Related Patterns: AI Plan-First Development, Progressive AI Enhancement, Context Window Optimization
Multi-Option Implementation Comparison
# Generate and compare multiple implementation approaches
ai "Generate 3 different authentication approaches for user management:
Option 1 (Performance-focused):
- In-memory JWT with Redis caching
- Sub-10ms token validation, horizontal scaling ready
- Trade-off: Memory intensive, Redis dependency
- Best for: High-traffic APIs (>10k req/sec)
Option 2 (Security-focused):
- Database-backed sessions with audit trail
- Immediate revocation, multi-factor authentication
- Trade-off: Higher latency, complex session management
- Best for: Banking, healthcare, government
Option 3 (Simplicity-focused):
- Standard JWT with established libraries
- Well-documented patterns, minimal custom code
- Trade-off: Less optimization opportunities
- Best for: Startups, MVPs, small teams
For each option provide:
- 30-minute working prototype
- Performance benchmarks (response time, memory usage)
- Implementation complexity assessment (LOC, dependencies)
- Specific trade-offs and when to choose this approach
Recommend best option based on project constraints and team experience."
Anti-pattern: Analysis Paralysis Generating too many choices or spending more time evaluating options than implementing them.
Maturity: Advanced
Description: Run multiple AI agents concurrently on isolated tasks or environments to maximize development speed and exploration.
Related Patterns: AI Workflow Orchestration, Atomic Task Decomposition, AI Security Sandbox
Agent Coordination Lifecycle
sequenceDiagram
participant M as Manager
participant A1 as Auth Agent
participant A2 as API Agent
participant A3 as Test Agent
participant SM as Shared Memory
participant CS as Conflict Scanner
M->>A1: Start (OAuth2 Task)
M->>A2: Start (REST API Task)
M->>A3: Start (Test Suite Task)
par Parallel Development
A1->>A1: Implement OAuth2 Flow
A1->>SM: Record Learning
and
A2->>A2: Implement REST Endpoints
A2->>SM: Record API Patterns
and
A3->>A3: Generate Integration Tests
A3->>SM: Record Test Patterns
end
SM->>CS: Trigger Conflict Analysis
CS->>M: Report Conflicts/All Clear
M->>M: Merge Components & Cleanup
Core Implementation Approaches
# Container-based isolation
# docker-compose.parallel-agents.yml
services:
agent-auth:
image: ai-dev-environment:latest
volumes:
- ./feature-auth:/workspace:rw
- shared-memory:/shared:ro
environment:
- AGENT_ID=auth-feature
- TASK_ID=implement-oauth2
networks:
- agent-network
agent-api:
image: ai-dev-environment:latest
volumes:
- ./feature-api:/workspace:rw
- shared-memory:/shared:ro
environment:
- AGENT_ID=api-feature
- TASK_ID=implement-rest-endpoints
volumes:
shared-memory:
driver: local
networks:
agent-network:
driver: bridge
internal: true
Git Worktree Parallelization
# Create isolated branches for parallel work
git worktree add -b agent/auth ../agent-auth
git worktree add -b agent/api ../agent-api
git worktree add -b agent/tests ../agent-tests
# Launch agents in parallel
parallel --jobs 3 << EOF
cd ../agent-auth && ai-agent implement-oauth2
cd ../agent-api && ai-agent implement-rest-endpoints
cd ../agent-tests && ai-agent generate-integration-tests
EOF
# Automated conflict detection and merge
for branch in $(git branch --list 'agent/*' --format='%(refname:short)'); do
git checkout -b temp-merge main
if git merge --no-commit --no-ff "$branch"; then
echo "✓ No conflicts in $branch"
git merge --abort
else
echo "⚠ Conflicts detected - using AI resolution"
ai-agent resolve-conflicts --branch "$branch"
fi
git checkout main && git branch -D temp-merge
done
# Cleanup
git worktree remove ../agent-auth ../agent-api ../agent-tests
Shared Memory & Coordination
# Agent coordination with shared knowledge
import fcntl
class AgentMemory:
    def record_learning(self, agent_id, key, value):
        """Thread-safe learning capture with file locking"""
        fcntl.flock(self.lock_file, fcntl.LOCK_EX)  # flock() is not a context manager
        try:
            self.memory.setdefault(agent_id, {})[key] = value
        finally:
            fcntl.flock(self.lock_file, fcntl.LOCK_UN)
    def get_shared_knowledge(self):
        """Consolidated knowledge from all agents"""
        return self.consolidated_memory
# Task definition
tasks = {
"auth-service": {
"agent_count": 1,
"isolation": "container",
"dependencies": [],
"instructions": "Implement OAuth2 with JWT tokens"
},
"api-endpoints": {
"agent_count": 2,
"isolation": "worktree",
"dependencies": ["auth-service"],
"instructions": "REST endpoints: user mgmt + CRUD"
}
}
Complete Implementation: See examples/parallelized-ai-agents/ for:
- Full Docker isolation and coordination setup
- Git worktree management and conflict resolution
- Shared memory system with file locking
- Emergency shutdown and safety monitoring
- Task distribution and dependency management
When to Use Parallel Agents
- Complex features requiring multiple specialized implementations
- Time-critical projects where speed trumps coordination overhead
- Exploration phases testing multiple approaches simultaneously
- Large teams with strong DevOps and coordination processes
Source: AI Native Dev - How to Parallelize AI Coding Agents
Anti-pattern: Uncoordinated Parallel Execution Running multiple agents without isolation, shared memory, or conflict resolution leads to race conditions, lost work, and system instability.
Maturity: Intermediate
Description: Manage AI context as a finite resource through structured memory schemas, prompt pattern capture, and session continuity protocols for efficient multi-session development.
Related Patterns: Rules as Code, AI-Driven Traceability, Parallelized AI Coding Agents
Core Principles
AI context is a finite resource with diminishing returns. Effective context engineering requires:
- Minimal High-Signal Tokens: Find the smallest set of information that maximizes outcomes
- Just-in-Time Retrieval: Load context dynamically rather than pre-loading everything
- Progressive Disclosure: Explore and discover information as needed, not upfront
Structured Memory Schemas
Persist information outside the context window using standardized memory formats:
# TODO.md - Task tracking across sessions
- [ ] Implement JWT middleware (blocked: key rotation design)
- [x] Add bcrypt password hashing (2024-01-15)
- [ ] Rate limiting (next: research token bucket vs sliding window)
# DECISIONS.log - Architectural decisions with timestamp
2024-01-15 10:30: Use RS256 for JWT (not HS256)
Rationale: Asymmetric keys enable better key rotation
Alternatives: HS256 (simpler but less flexible)
Impact: auth-service, api-gateway
# NOTES.md - Session continuity and discoveries
Session 2024-01-15:
Context: Implementing authentication system
Discoveries: bcrypt has performance issues >100 req/s
Blockers: Need to decide on refresh token storage
Next: Benchmark argon2 as bcrypt alternative
# scratchpad.md - Working memory (cleared after task)
Exploring JWT refresh token flow...
- httpOnly cookies prevent XSS
- Need CSRF protection for cookie-based auth
Prompt Pattern Library
Capture successful prompts and failures with success rates for reuse:
# Initialize knowledge structure
./knowledge-capture.sh --init
# Capture successful pattern
./knowledge-capture.sh --success \
--domain "auth" \
--pattern "JWT Auth" \
--prompt "JWT with RS256, 15min access, httpOnly cookie" \
--success-rate "95%"
# Document failure to avoid repeating
./knowledge-capture.sh --failure \
--domain "auth" \
--bad-prompt "Make auth secure" \
--problem "Too vague → AI over-engineers" \
--solution "Specify exact JWT requirements"
Context Window Management
Compaction Strategy - When context approaches limits:
- Distill critical decisions to `DECISIONS.log`
- Summarize key discoveries in `NOTES.md`
- Update `TODO.md` with current state and blockers
- Create "Previously on..." recap for session continuity
Session Continuity Protocol - Resume work across sessions:
- Read `NOTES.md` for previous session context
- Review `TODO.md` for current tasks and blockers
- Check `DECISIONS.log` for recent architectural choices
- Scan `scratchpad.md` for active explorations
# Compact context when nearing limits
./context-compact.sh --summarize
# Resume from previous session
./session-resume.sh # Displays TODO + recent decisions + notes recap
Complete Implementation: See examples/ai-context-persistence/ for:
- Memory schema templates (TODO.md, DECISIONS.log, NOTES.md, scratchpad.md)
- Context compaction and session resume automation scripts
- Prompt pattern capture and maintenance tools
- Working examples of memory schemas in use
Anti-pattern: Knowledge Hoarding
Creating extensive knowledge bases that become maintenance burdens instead of accelerating development through selective, actionable knowledge capture.
Why it's problematic:
- Knowledge bases become outdated and misleading
- Developers spend more time documenting than developing
- Overly detailed entries are ignored in favor of quick experimentation
- Knowledge becomes siloed and not easily discoverable
Instead, focus on:
- Capture only high-impact patterns (>80% success rate)
- Document failures that wasted significant time (>30 minutes)
- Keep entries concise and immediately actionable
- Review and prune knowledge quarterly
Anti-pattern: Context Bloat
Loading entire codebases, documentation, or conversation history into context rather than using structured memory and just-in-time retrieval.
Why it's problematic:
- Wastes tokens on low-signal information
- Degrades AI performance due to information overload
- Slows interaction latency and increases costs
- Misses the forest for the trees
Instead:
- Use lightweight identifiers (file paths, links) rather than full content
- Load context progressively as needed
- Externalize detailed information to memory schemas
- Prefer 3-5 high-quality examples over exhaustive documentation
Maturity: Beginner
Description: Give AI specific constraints to prevent over-engineering and ensure focused solutions.
Related Patterns: Progressive AI Enhancement, Human-AI Handoff Protocol, AI Choice Generation
Examples
Bad: "Create user service"
Good: "Create user service: <100 lines, 3 methods max, only bcrypt dependency"
Bad: "Add caching"
Good: "Add caching using Map, max 1000 entries, LRU eviction"
Bad: "Improve performance"
Good: "Reduce p99 latency to <50ms without new dependencies"
Anti-pattern: Unconstrained Generation Giving AI vague instructions like "make it better" or "add features" leads to over-engineered solutions that are hard to maintain and review.
Anti-pattern: Constraint Overload
Adding too many constraints ("use exactly 50 lines, 2 methods, no dependencies, 100% test coverage, sub-10ms response time") paralyzes AI decision-making and produces suboptimal solutions.
Maturity: Intermediate
Description: Break complex features into atomic, independently implementable tasks for parallel AI agent execution.
Related Patterns: AI Developer Lifecycle, AI Workflow Orchestration, Progressive AI Enhancement, AI Issue Generation, Parallelized AI Coding Agents
Atomic Task Criteria
graph TD
A[Feature Requirement] --> B[Task Analysis]
B --> C{Atomic Task Check}
C -->|✓ Independent| D[Can run in parallel]
C -->|✓ <2 hours| E[Rapid feedback cycle]
C -->|✓ Clear I/O| F[Testable interface]
C -->|✓ No shared state| G[Conflict-free]
C -->|✗ Fails check| H[Split Further]
H --> B
D --> I[Ready for Agent]
E --> I
F --> I
G --> I
Core Decomposition Process
# Feature: User Authentication System
# Bad: Monolithic task
❌ "Implement complete user authentication with JWT, password hashing, rate limiting, and email verification"
# Good: Atomic breakdown with AI validation
ai_decompose "Break down user authentication into atomic tasks:
Task 1: Password validation service (1.5h)
- Input: plain text password, validation rules
- Output: validation result object
- Dependencies: None (pure function)
Task 2: JWT token generation service (1h)
- Input: user ID, role, expiration config
- Output: signed JWT token
- Dependencies: None (crypto operations only)
Task 3: Rate limiting middleware (2h)
- Input: request metadata, rate limit config
- Output: allow/deny decision
- Dependencies: None (stateless logic)
Task 4: Login endpoint integration (1h)
- Input: credentials, services from tasks 1-3
- Output: HTTP response with token/error
- Dependencies: Tasks 1-3 (integration only)"
# Validate atomicity
ai_task_validator "Check each task for:
1. <2 hour completion time
2. No shared mutable state
3. Clear input/output contracts
4. Testable in isolation
5. No circular dependencies"
Agent Assignment & Coordination
# .ai/task-assignment.yml
authentication_feature:
parallel_tasks:
- id: "auth-001" # Password validation
agent: "backend-specialist-1"
estimated_hours: 1.5
dependencies: []
- id: "auth-002" # JWT generation
agent: "security-specialist"
estimated_hours: 1
dependencies: []
- id: "auth-003" # Rate limiting
agent: "backend-specialist-2"
estimated_hours: 2
dependencies: []
integration_tasks:
- id: "auth-004" # Login endpoint
agent: "integration-specialist"
estimated_hours: 1
dependencies: ["auth-001", "auth-002", "auth-003"]
Task Contract Validation
# Ensure tasks meet atomic criteria
from typing import List
class PasswordRules:  # placeholder for the password-policy type used below
    pass
class TaskContract:
    def __init__(self, task_id: str, estimated_hours: float = 0.0):
        self.task_id = task_id
        self.estimated_hours = estimated_hours
        self.side_effects: list = []
        self.inputs: dict = {}
        self.outputs: dict = {}
    def has_clear_io_contract(self) -> bool:
        return bool(self.inputs) and bool(self.outputs)
    def validate_atomic(self) -> bool:
        return all([
            len(self.side_effects) == 0,  # No side effects
            self.estimated_hours <= 2,    # Rapid completion
            self.has_clear_io_contract()  # Testable interface
        ])
# Example validation
task = TaskContract("auth-001", estimated_hours=1.5)
task.inputs = {"password": str, "rules": PasswordRules}
task.outputs = {"is_valid": bool, "errors": List[str]}
assert task.validate_atomic()  # ✓ Passes atomic criteria
Complete Implementation: See examples/atomic-task-decomposition/ for:
- Contract validation system with automated checking
- Function-level decomposition techniques and trigger indicators
- Task dependency resolution and scheduling
- Parallel execution coordination and monitoring
- Agent assignment and resource management
When to Use Atomic Decomposition
- Parallel Agent Implementation: Multiple AI agents working simultaneously
- Complex Feature Development: Large features benefiting from parallel work
- Time-Critical Projects: Speed through parallelization essential
- Risk Mitigation: Reduce blast radius of individual task failures
Anti-pattern: Pseudo-Atomic Tasks Creating tasks that appear independent but secretly share state, require specific execution order, or have hidden dependencies on other concurrent work.
Anti-pattern: Over-Decomposition
Breaking tasks so small that coordination overhead exceeds the benefits of parallelization, leading to more complexity than value.
Maturity: Intermediate
Description: Design systems with comprehensive logging, tracing, and debugging capabilities that enable AI to understand system behavior and diagnose issues effectively.
Related Patterns: AI Developer Lifecycle, AI Tool Integration, Comprehensive AI Testing Strategy, AI-Driven Traceability
Core Implementation
# AI-friendly structured logging
import json
import logging
from datetime import datetime
def log_operation(operation, **context):
    logging.info(json.dumps({
        "timestamp": datetime.utcnow().isoformat(),
        "operation": operation,
        "context": context
    }))
# Observable business logic with comprehensive context
def process_order(order):
log_operation("order_start", order_id=order.id, total=order.total)
try:
validate_order(order)
log_operation("validation_success")
result = charge_payment(order)
log_operation("payment_success", transaction_id=result.id)
return result
except PaymentError as e:
log_operation("payment_error", error=str(e), code=e.code)
raise
Complete Implementation: See examples/observable-ai-development/ for:
- Full structured logging framework with correlation IDs
- Performance monitoring decorators and utilities
- AI-friendly debug tools and log analysis scripts
- Integration examples for e-commerce and authentication systems
Anti-pattern: Black Box Development
Building systems with minimal observability that provide insufficient context for AI to understand system behavior, diagnose issues, or suggest improvements.
Why it's problematic: AI cannot debug systems with generic logs like "Payment failed" or "Something went wrong" - it needs specific context, timing, and error details.
# Bad: Black box logging
def process_payment(amount):
try:
result = payment_service.charge(amount)
logger.info("Payment processed")
return result
except Exception:
logger.error("Payment failed")
raise
# Good: Observable implementation
def process_payment(amount):
log_operation("payment_start", amount=amount)
try:
result = payment_service.charge(amount)
log_operation("payment_success", transaction_id=result.id)
return result
except Exception as e:
log_operation("payment_error", error=str(e), amount=amount)
raise
Maturity: Intermediate
Description: Systematic code improvement using AI to detect and resolve code smells with measurable quality metrics, following established refactoring rules and maintaining test coverage throughout the process.
Related Patterns: Rules as Code, Comprehensive AI Testing Strategy, Technical Debt Forecasting
Code Smell Detection Framework
graph TD
A[Code Analysis] --> B[Smell Detection]
B --> C[Refactoring Strategy]
C --> D[AI Implementation]
D --> E[Test Validation]
E --> F[Quality Metrics]
F --> G{Improvement?}
G -->|Yes| H[Commit Changes]
G -->|No| I[Revert & Retry]
H --> J[Update Knowledge Base]
I --> C
Core Workflow
# 1. Define refactoring rules
cat > .ai/rules/refactoring.md << 'EOF'
## Long Method Smell
- Max lines: 20 (excluding docstrings)
- Max cyclomatic complexity: 10
- Detection: flake8 C901, pylint R0915
## Large Class Smell
- Max class lines: 250, Max methods: 20
- Detection: pylint R0902, R0904
EOF
# 2. Detect code smells with AI
flake8 --select=C901 --max-complexity=10 src/ > smells.txt
pylint src/ --disable=all --enable=R0915,R0902,R0904 >> smells.txt
ai "Analyze smells.txt using .ai/rules/refactoring.md:
1. Prioritize by impact and complexity
2. Suggest specific refactoring strategy for each smell
3. Generate implementation plan with risk assessment"
# 3. Apply refactoring with test preservation
pytest --cov=src tests/ # Baseline coverage
ai "Refactor process_user_data() method (35 lines, exceeds threshold):
- Apply Extract Method pattern for validation, database, notifications
- Maintain test coverage >90% and API contract
- Create atomic commits for each extracted method"
# 4. Validate and track improvements
pytest --cov=src tests/
flake8 src/ && pylint src/
ai "Generate refactoring impact report:
Before: complexity=12, length=35 lines, coverage=85%
After: complexity=4+2+2, length=8+6+7 lines, coverage=92%
Document lessons learned in .ai/knowledge/refactoring.md"
Common Refactoring Patterns
- Extract Method: Break down long methods (>20 lines)
- Extract Class: Split large classes (>250 lines, >20 methods)
- Replace Primitive: Convert strings/dicts to value objects
- Consolidate Duplicates: Merge similar code patterns
Complete Implementation: See examples/ai-driven-refactoring/ for:
- Automated refactoring pipeline with CI integration
- Quality metrics tracking and reporting
- Risk assessment guidelines and rollback procedures
- Knowledge base templates for refactoring outcomes
Anti-pattern: Shotgun Surgery Making widespread changes without systematic analysis leads to introduced bugs and degraded code quality.
Anti-pattern: Speculative Refactoring Refactoring code for hypothetical future requirements rather than addressing current code smells and quality issues.
Maturity: Intermediate
Description: Apply architectural frameworks (DDD, Well-Architected, 12-Factor) using AI to ensure sound system design and maintainable code structure.
Related Patterns: AI Developer Lifecycle, Rules as Code, AI-Driven Refactoring
Example Frameworks
- Domain-Driven Design (DDD): Bounded contexts, entities, value objects
- AWS Well-Architected: 6 pillars compliance assessment
- 12-Factor App: Cloud-native application principles
- Event-Driven Architecture: Event sourcing and saga patterns
- ADRs: Architecture Decision Records generation
Core Implementation: Domain-Driven Design Analysis
# Create DDD analysis prompt
cat > .ai/prompts/ddd-analysis.md << 'EOF'
# Domain-Driven Design Analysis
Analyze user stories and generate:
1. Bounded context boundaries
2. Core entities and value objects
3. Domain services and repositories
4. Integration patterns between contexts
Return bounded context map and suggested code structure.
EOF
# Run domain analysis
ai-assistant analyze-domain \
--input requirements/user-stories.md \
--framework ddd \
--output architecture/domain-model.md
Example Output: E-commerce Domain Model
Bounded Contexts:
- Order Management: Order, OrderItem, OrderStatus
- Payment: Payment, PaymentMethod, Transaction
- Inventory: Product, Stock, Warehouse
- Customer: Customer, Address, Preferences
Integration:
- Order → Payment (via PaymentRequested event)
- Order → Inventory (via StockReservation command)
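To show how a generated context map might translate into code, here is a minimal Python sketch of the Order Management context and its integration event toward Payment. Only the class and event names come from the example output above; the fields are assumptions for illustration.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class OrderStatus(Enum):
    PENDING = "pending"
    PAID = "paid"
    SHIPPED = "shipped"

@dataclass(frozen=True)
class OrderItem:  # value object: immutable, compared by value
    product_id: str
    quantity: int
    unit_price_cents: int

@dataclass
class Order:  # entity: identified by order_id
    order_id: str
    items: List[OrderItem] = field(default_factory=list)
    status: OrderStatus = OrderStatus.PENDING

    def total_cents(self) -> int:
        return sum(i.quantity * i.unit_price_cents for i in self.items)

@dataclass(frozen=True)
class PaymentRequested:  # integration event published to the Payment context
    order_id: str
    amount_cents: int

def request_payment(order: Order) -> PaymentRequested:
    # Order -> Payment integration happens via an event, not a direct call
    return PaymentRequested(order_id=order.order_id, amount_cents=order.total_cents())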
Anti-pattern: Architecture Astronaut AI
Letting AI generate over-engineered solutions with complex patterns and frameworks without considering business constraints, team capabilities, or actual requirements.
Why it's problematic: AI proposes overly complex solutions (microservices + CQRS + event sourcing) when a simple CRUD application would suffice.
# Good: Constrained architecture
ai-assistant design-architecture \
--requirements requirements/user-stories.md \
--constraints "team_size=5,experience=intermediate,timeline=3months"
# Bad: Unconstrained
ai-assistant design-architecture \
--requirements requirements/user-stories.md \
--generate "enterprise_patterns,microservices,event_sourcing,cqrs"
Maturity: Intermediate
Description: Maintain automated links between requirements, specifications, tests, implementation, and documentation using AI.
Related Patterns: AI Developer Lifecycle, Specification Driven Development, Comprehensive AI Testing Strategy
Core Implementation
# Automated traceability maintenance
./maintain_traceability.sh
# Check new code for requirement links and validate existing ones
git diff --name-only HEAD~1 | while read file; do
ai "Analyze $file and suggest requirement traceability links"
done
# Generate impact analysis for recent changes
ai "Map recent changes to affected requirements and tests"
Complete Implementation: See examples/ai-driven-traceability/ for:
- Complete traceability maintenance automation system
- Link validation and gap analysis tools
- Impact analysis and reporting scripts
- Integration with project management tools (GitHub, JIRA)
Anti-pattern: Manual Traceability Management Maintaining requirement links in spreadsheets or manual documentation that becomes stale and inaccurate over time.
Operations patterns focus on CI/CD, security, compliance, and production management with AI assistance, building on the foundation and development patterns.
Maturity: Advanced
Description: Transform compliance requirements into executable Cedar/OPA policy files with AI assistance, ensuring regulatory requirements become testable code.
Related Patterns: AI Security Sandbox, Rules as Code
# Transform compliance requirements into executable policies
ai "Convert compliance requirements into Cedar policy code:
SOC 2: Data at rest must be AES-256 encrypted" > encryption.cedar
# Test generated policies
opa test encryption.cedar
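Because the policy is now code, it can also be exercised from an ordinary unit test. A minimal Python sketch that shells out to opa eval; the package and rule names are assumptions about what the AI generated:
import json
import subprocess

def opa_eval(policy_file: str, input_doc: dict, query: str):
    """Evaluate a Rego query against an input document using the opa CLI."""
    result = subprocess.run(
        ["opa", "eval", "--format", "json", "--data", policy_file,
         "--stdin-input", query],
        input=json.dumps(input_doc), capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["result"][0]["expressions"][0]["value"]

def test_unencrypted_storage_is_denied():
    violations = opa_eval("encryption.rego",
                          {"resource": {"type": "s3_bucket", "encryption": "none"}},
                          "data.compliance.encryption.deny")
    assert violations, "expected at least one violation for unencrypted storage"
Wiring this into the test suite means a compliance regression fails the build like any other test.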
Complete Implementation: See examples/policy-as-code-generation/ for:
- Complete policy generation pipeline with AI assistance
- Cedar/OPA policy templates and compliance mapping
- Policy testing and validation frameworks
- CI/CD integration examples
Anti-pattern: Manual Policy Translation Hand-coding policies from written requirements introduces inconsistencies and interpretation errors.
Maturity: Intermediate
Description: Aggregate multiple security tools and use AI to summarize findings for actionable insights, reducing alert fatigue while maintaining security rigor.
Related Patterns: Policy-as-Code Generation
# Orchestrate multiple security tools
snyk test --json > snyk.json
bandit -r src -f json > bandit.json
trivy fs --format json . > trivy.json
# AI-powered summarization for actionable insights
ai "Summarize security findings; focus on CRITICAL issues" > pr-comment.txt
gh pr comment --body-file pr-comment.txt
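To keep the summarization prompt focused and avoid drowning it in low-severity noise, the raw reports can be pre-filtered first. A minimal Python sketch, assuming the default JSON layouts of snyk, bandit, and trivy (field names vary between tool versions, so treat them as assumptions):
import json

def load(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def critical_findings():
    findings = []
    # Snyk: vulnerabilities carry a "severity" field
    for vuln in load("snyk.json").get("vulnerabilities", []):
        if vuln.get("severity") == "critical":
            findings.append(("snyk", vuln.get("title"), vuln.get("packageName")))
    # Bandit: results carry "issue_severity"
    for issue in load("bandit.json").get("results", []):
        if issue.get("issue_severity") == "HIGH":
            findings.append(("bandit", issue.get("test_name"), issue.get("filename")))
    # Trivy: findings are nested under Results[].Vulnerabilities[]
    for result in load("trivy.json").get("Results", []):
        for vuln in result.get("Vulnerabilities", []) or []:
            if vuln.get("Severity") == "CRITICAL":
                findings.append(("trivy", vuln.get("VulnerabilityID"), vuln.get("PkgName")))
    return findings

if __name__ == "__main__":
    for tool, what, where in critical_findings():
        print(f"{tool}: {what} ({where})")
The filtered list, not the full reports, then becomes the input to the ai summarization step above.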
Complete Implementation: See examples/security-scanning-orchestration/ for:
- Complete security scanning pipeline with tool orchestration
- AI-powered report summarization and prioritization
- CI/CD integration and automated PR commenting
- Custom security tool configurations and reporting
Anti-pattern: Alert Fatigue Posting every low-severity finding buries real issues and frustrates developers.
Maturity: Advanced
Description: Establish intelligent performance baselines and configure monitoring thresholds automatically, minimizing false positives while catching real performance issues.
Related Patterns: Observable AI Development
# Collect performance metrics and generate intelligent baselines
aws cloudwatch get-metric-statistics --period 86400 > perf.csv
ai "From perf.csv, recommend latency thresholds and autoscale policies" > perf-policy.json
deploy-tool apply perf-policy.json
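Before asking the AI to recommend thresholds, the raw data can be reduced to a statistical baseline it then refines. A minimal Python sketch, assuming perf.csv holds one latency sample in milliseconds per line (the headroom multipliers are illustrative assumptions):
import csv
import json
import statistics

def build_baseline(path="perf.csv"):
    """Derive alert thresholds from historical latency samples."""
    with open(path, newline="", encoding="utf-8") as f:
        samples = sorted(float(row[0]) for row in csv.reader(f) if row)
    p95 = samples[int(0.95 * (len(samples) - 1))]
    p99 = samples[int(0.99 * (len(samples) - 1))]
    return {
        "latency_p95_ms": round(p95, 1),
        "warn_threshold_ms": round(p95 * 1.2, 1),   # 20% headroom above p95
        "page_threshold_ms": round(p99 * 1.5, 1),   # page only on sustained outliers
        "stddev_ms": round(statistics.pstdev(samples), 1),
    }

if __name__ == "__main__":
    print(json.dumps(build_baseline(), indent=2))
The resulting JSON can feed the same perf-policy.json generation step shown above.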
Complete Implementation: See examples/performance-baseline-management/ for:
- Complete performance monitoring setup with baseline establishment
- AI-powered threshold calculation and alert configuration
- Autoscaling policy generation and deployment automation
- Integration with multiple monitoring platforms (CloudWatch, Prometheus, etc.)
Anti-pattern: One-Off Alerts Manual thresholds quickly become stale, causing alert storms or blind spots.
- Rushing Into AI: Starting AI adoption without proper assessment
- Context Drift: Inconsistent AI rules across team members
- Unrestricted Access: Allowing AI tools access to sensitive data
- Ad-Hoc Development: Skipping structured development lifecycle
- Implementation-First AI: Writing code before defining acceptance criteria
- Test Generation Without Strategy: Creating tests without coherent quality goals
- Big Bang Generation: Attempting complex features in single AI interaction
- Uncoordinated Multi-Tool Usage: Using multiple AI tools without orchestration
- Black Box Systems: Insufficient logging for AI debugging
- Unclear Boundaries: Ambiguous human-AI handoff points
- Fragmented Security: Isolated security tools without unified framework
- Alert Fatigue: Overwhelming developers with low-priority findings
- Static Deployment: Fixed scripts without AI adaptation
- Trusting AI Blue-Green Generation: Accepting AI-generated blue-green deployment configurations without validation
- Reactive Maintenance: Firefighting instead of proactive AI-assisted management
- Blind Chaos Testing: Random fault injection without understanding dependencies
- AI Readiness Assessment - Evaluate team and codebase readiness
- Rules as Code - Establish consistent AI coding standards
- AI Security Sandbox - Implement secure AI tool isolation
- AI Developer Lifecycle - Define structured development process
- AI Issue Generation - Generate structured work items from requirements
- Specification Driven Development - Implement specification-first approach
- Progressive AI Enhancement - Practice iterative development
- AI Choice Generation - Generate multiple implementation options
- Atomic Task Decomposition - Break down complex features
- Security & Compliance Patterns - Implement unified security framework
- Deployment Automation Patterns - Establish AI-powered CI/CD
- Monitoring & Maintenance Patterns - Deploy proactive system management
Note: For teams practicing continuous delivery, implement security (AI Security Sandbox, AI Security & Compliance) and deployment patterns (Deployment Automation) from week 1 alongside foundation patterns. The phases represent learning dependencies, not deployment sequences.
- Team readiness score improvement
- Consistent AI rule adherence across projects
- Zero credential leaks in AI-generated code
- Reduced onboarding time for new developers
- Test coverage maintenance (>90% for AI-generated code)
- Reduced code review cycles
- Faster feature delivery with maintained quality
- Decreased debugging time for AI-generated issues
- Automated policy compliance verification
- Reduced deployment failures
- Faster incident response with AI-generated runbooks
- Proactive technical debt management
Have a pattern that's working well for your team? Open an issue or PR to share your experience. The AI development landscape is evolving rapidly, and we're all learning together.
- Follow the established pattern template (Maturity, Description, Related Patterns, Examples, Anti-patterns)
- Include practical, runnable examples
- Specify clear success criteria and anti-patterns
- Reference related patterns appropriately
- Test patterns with multiple AI tools when applicable
MIT License - See LICENSE file for details.