
Robert Allen edited this page Dec 15, 2025 · 1 revision

AI Features

git-adr includes optional AI-powered features to help you draft, improve, and query your Architecture Decision Records. These features require installing the AI extras and configuring an AI provider.


Installation

Install git-adr with AI support:

pip install "git-adr[ai]"

# Or with all features
pip install "git-adr[all]"

Configuration

Supported Providers

Provider       Models                         API Key Variable
OpenAI         GPT-4, GPT-4 Turbo, GPT-3.5    OPENAI_API_KEY
Anthropic      Claude 3 Opus, Sonnet, Haiku   ANTHROPIC_API_KEY
Google         Gemini Pro, Gemini 1.5         GOOGLE_API_KEY
Azure OpenAI   GPT-4 (Azure-hosted)           AZURE_OPENAI_API_KEY
AWS Bedrock    Claude, Titan, etc.            AWS credentials
Ollama         Llama 2, Mistral, CodeLlama    (none - local)
OpenRouter     Multiple providers             OPENROUTER_API_KEY
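
The provider-to-key mapping in the table can be sketched as a simple lookup. This is an illustrative Python snippet, not git-adr's internal code; the dictionary and function names are hypothetical.

```python
import os

# Provider -> API key environment variable, per the table above.
# Ollama runs locally and needs no key; AWS Bedrock uses standard
# AWS credentials rather than a single variable, so it is omitted here.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "azure": "AZURE_OPENAI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "ollama": None,
}

def check_provider_key(provider: str) -> bool:
    """Return True if the provider's API key is set (or not required)."""
    if provider not in PROVIDER_ENV_VARS:
        raise ValueError(f"unknown provider: {provider}")
    var = PROVIDER_ENV_VARS[provider]
    if var is None:
        return True  # local provider, no key needed
    return bool(os.environ.get(var))
```

A check like this fails fast with a clear answer before any API call is attempted.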

Basic Setup

# Set provider
git adr config set adr.ai.provider openai

# Set API key (environment variable)
export OPENAI_API_KEY="sk-..."

Provider-Specific Setup

OpenAI

git adr config set adr.ai.provider openai
git adr config set adr.ai.model gpt-4-turbo
export OPENAI_API_KEY="sk-..."

Anthropic Claude

git adr config set adr.ai.provider anthropic
git adr config set adr.ai.model claude-3-opus-20240229
export ANTHROPIC_API_KEY="sk-ant-..."

Google Gemini

git adr config set adr.ai.provider google
git adr config set adr.ai.model gemini-pro
export GOOGLE_API_KEY="..."

Local Ollama (No API Key Required)

# Start Ollama server
ollama serve

# Configure git-adr
git adr config set adr.ai.provider ollama
git adr config set adr.ai.model mistral

Ollama is ideal for:

  • Privacy-sensitive environments
  • Offline usage
  • Avoiding API costs
  • Experimentation
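
Before pointing git-adr at Ollama, you can verify the local server is reachable. The `/api/tags` endpoint is the same one the `curl` check in Troubleshooting uses; the function below is an illustrative sketch, not part of git-adr.

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        # /api/tags lists installed models; any 200 means the server is up.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```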

AI Commands

git adr ai draft

AI-guided ADR creation with interactive elicitation.

git adr ai draft "Choose a message queue"

How it works:

By default, the AI uses interactive elicitation mode - asking you questions to understand your decision:

  1. "What problem are you solving?"
  2. "What options have you considered?"
  3. "What's driving this decision?"
  4. "What are the trade-offs/consequences?"

The AI then synthesizes your answers into a complete ADR.
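
The elicitation-and-synthesis flow can be sketched as: map each question to an ADR section, then assemble the answers into a document. This is a minimal illustration of the idea, not git-adr's actual prompt pipeline; section names are assumptions.

```python
# The four elicitation questions from the list above, keyed by the
# (assumed) ADR section each answer feeds into.
QUESTIONS = {
    "Context": "What problem are you solving?",
    "Options Considered": "What options have you considered?",
    "Decision Drivers": "What's driving this decision?",
    "Consequences": "What are the trade-offs/consequences?",
}

def synthesize_adr(title: str, answers: dict) -> str:
    """Combine elicited answers into a simple markdown ADR."""
    lines = [f"# {title}", ""]
    for section, question in QUESTIONS.items():
        lines.append(f"## {section}")
        lines.append(answers.get(question, "(no answer given)"))
        lines.append("")
    return "\n".join(lines)
```

In practice the AI rewrites and expands the answers rather than pasting them verbatim, but the structure is the same.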

Options:

Option               Description
--batch              Generate in one shot without interaction
--template <format>  Specify template format
--context <file>     Provide additional context from a file

Examples:

# Interactive mode (default)
git adr ai draft "Use Kubernetes for container orchestration"

# Batch mode for scripting
git adr ai draft "Use Kubernetes" --batch

# With specific template
git adr ai draft "Use Kubernetes" --template business

git adr ai suggest

Get AI suggestions to improve an existing ADR.

git adr ai suggest <adr-id>

What it analyzes:

  • Missing sections or thin content
  • Unclear consequences
  • Alternatives that should be considered
  • Potential risks not mentioned
  • Links to related decisions
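
A crude heuristic version of the first two checks (missing sections, thin content) can be written in a few lines. This is an illustrative approximation; the actual command uses an LLM, and the expected section names here are assumptions.

```python
# Sections a complete ADR is assumed to have.
EXPECTED_SECTIONS = ["Context", "Decision", "Consequences", "Alternatives"]

def find_gaps(adr_text: str, min_words: int = 10) -> list:
    """Flag sections that are missing or suspiciously short."""
    suggestions = []
    for section in EXPECTED_SECTIONS:
        header = f"## {section}"
        if header not in adr_text:
            suggestions.append(f"MISSING SECTION: {section}")
            continue
        # Take the text between this header and the next one.
        body = adr_text.split(header, 1)[1].split("##", 1)[0]
        if len(body.split()) < min_words:
            suggestions.append(f"THIN CONTENT: {section}")
    return suggestions
```

The AI-based check goes further - it can judge whether consequences are unclear or alternatives are absent even when the headings exist.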

Example:

git adr ai suggest 20250115-use-postgresql

# Output:
# Suggestions for ADR: Use PostgreSQL for primary database
#
# 1. MISSING CONTENT: Consider adding backup/recovery strategy to consequences
# 2. ALTERNATIVES: MongoDB and CockroachDB are notable alternatives not discussed
# 3. RISKS: No mention of scaling limitations for write-heavy workloads
# 4. LINKS: Consider linking to ADR-005 (caching strategy) which relates to this

git adr ai summarize

Generate natural language summaries of your decisions.

git adr ai summarize [options]

Options:

Option             Description
--format <format>  Output format (text, slack, email, bullets)
--status <status>  Only summarize ADRs with specific status
--since <date>     Only summarize ADRs since date
--count <n>        Limit to n most recent ADRs
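
The filtering these options describe composes naturally: status and date narrow the set, then the count cap applies to the most recent survivors. A sketch, using plain dicts in place of git-adr's real record type:

```python
from datetime import date

def select_adrs(adrs, status=None, since=None, count=None):
    """Filter ADR dicts the way --status/--since/--count describe."""
    selected = [a for a in adrs
                if (status is None or a["status"] == status)
                and (since is None or a["date"] >= since)]
    selected.sort(key=lambda a: a["date"], reverse=True)  # most recent first
    return selected[:count] if count else selected
```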

Examples:

# Summarize all accepted decisions
git adr ai summarize --status accepted

# Summary for Slack
git adr ai summarize --format slack --since 2025-01-01

# Executive summary of last 5 decisions
git adr ai summarize --count 5 --format bullets

Sample Output (Slack format):

*Architecture Decisions Summary (Jan 2025)*

:white_check_mark: *PostgreSQL for primary database* - Chose PostgreSQL for ACID compliance and team expertise. Trade-off: more complex scaling.

:white_check_mark: *Redis for session caching* - Selected Redis for sub-ms latency. Trade-off: additional infrastructure to manage.

:thinking_face: *GraphQL API layer* - Proposed but not yet accepted. Evaluating against REST continuation.

_3 decisions, 2 accepted, 1 proposed_

git adr ai ask

Ask questions about your ADRs in natural language.

git adr ai ask "<question>"

Example queries:

# Factual questions
git adr ai ask "Why did we choose PostgreSQL?"

# Comparative questions
git adr ai ask "What databases have we considered?"

# Timeline questions
git adr ai ask "What decisions were made in the last month?"

# Impact questions
git adr ai ask "What are the consequences of our caching strategy?"

# Relationship questions
git adr ai ask "Which decisions affect the authentication system?"

How it works:

The AI searches across all your ADRs, understands the context and relationships, and provides a synthesized answer with references to specific ADRs.
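
The retrieval half of this can be approximated by keyword overlap: score each ADR by how many query words appear in its text, then hand the top matches to the model as context. This naive sketch stands in for whatever retrieval git-adr actually uses (likely embeddings or something smarter).

```python
def retrieve(question: str, adrs: dict, top_k: int = 3) -> list:
    """adrs maps ADR id -> full text; returns ids ranked by word overlap."""
    words = {w.lower().strip("?.,") for w in question.split()}
    scores = {
        adr_id: sum(1 for w in words if w in text.lower())
        for adr_id, text in adrs.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [adr_id for adr_id in ranked if scores[adr_id] > 0][:top_k]
```

The retrieved ADRs become the references cited in the synthesized answer.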


AI Settings

Temperature

Controls randomness of AI responses.

git adr config set adr.ai.temperature 0.7  # Balanced (default)
git adr config set adr.ai.temperature 0.3  # More focused
git adr config set adr.ai.temperature 0.9  # More creative

Temperature  Best For
0.0 - 0.3    Factual summaries, consistent output
0.5 - 0.7    General drafting, balanced creativity
0.8 - 1.0    Brainstorming alternatives, creative exploration
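
If you script git-adr across different tasks, you can encode the guidance above as presets. The task names and values here are illustrative, taken from the table, not a git-adr API:

```python
# Temperature presets matching the guidance above.
TEMPERATURE_PRESETS = {
    "summarize": 0.2,   # factual, consistent output
    "draft": 0.7,       # balanced creativity (the default)
    "brainstorm": 0.9,  # creative exploration
}

def temperature_for(task: str) -> float:
    """Look up a preset, falling back to the balanced default."""
    return TEMPERATURE_PRESETS.get(task, 0.7)
```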

Model Selection

Different models have different capabilities and costs:

Model            Speed      Quality    Cost
GPT-3.5 Turbo    Fast       Good       Low
GPT-4            Medium     Excellent  High
GPT-4 Turbo      Fast       Excellent  Medium
Claude 3 Haiku   Very fast  Good       Low
Claude 3 Sonnet  Fast       Very good  Medium
Claude 3 Opus    Medium     Excellent  High
Ollama (local)   Varies     Good       Free
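
One way to act on the table: fix a cost ceiling and take the best-quality model under it. The tiers and quality ranks below are a simplified encoding of the table, purely for illustration:

```python
# (name, cost_tier, quality_rank) -- higher rank = better quality,
# per a simplified reading of the table above.
MODELS = [
    ("gpt-3.5-turbo", "low", 1),
    ("claude-3-haiku", "low", 1),
    ("gpt-4-turbo", "medium", 3),
    ("claude-3-sonnet", "medium", 2),
    ("gpt-4", "high", 3),
    ("claude-3-opus", "high", 3),
]

def best_model(max_cost: str) -> str:
    """Best-quality model whose cost tier fits under the ceiling."""
    allowed = {"low": ["low"],
               "medium": ["low", "medium"],
               "high": ["low", "medium", "high"]}[max_cost]
    candidates = [m for m in MODELS if m[1] in allowed]
    return max(candidates, key=lambda m: m[2])[0]
```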

Best Practices

For AI Drafting

  1. Provide clear titles - The title guides the AI's understanding
  2. Use interactive mode - Answering questions produces better ADRs
  3. Review and edit - AI drafts are starting points, not final documents
  4. Add specifics - AI doesn't know your exact constraints; add them

For AI Suggestions

  1. Run on drafts - Get suggestions before finalizing
  2. Consider all suggestions - Even if you don't implement them
  3. Update the ADR - Address valid concerns in the document

For Summaries

  1. Audience matters - Use appropriate format (slack vs email vs bullets)
  2. Focus on recent - Old decisions may have outdated context
  3. Include status - Makes it clear what's decided vs proposed

For Questions

  1. Be specific - "Why PostgreSQL?" is better than "Tell me about databases"
  2. Follow up - Ask clarifying questions based on answers
  3. Cross-reference - Check the cited ADRs for full context

Privacy Considerations

Cloud Providers (OpenAI, Anthropic, Google)

  • Your ADR content is sent to external APIs
  • Review provider privacy policies
  • Consider sensitivity of decision content
  • API providers may log requests (check terms)

Local Ollama

  • All processing stays on your machine
  • No data leaves your system
  • Ideal for sensitive projects
  • May have lower quality than cloud models

Hybrid Approach

Use Ollama for sensitive drafts, cloud providers for public projects:

# Sensitive project
git adr config set adr.ai.provider ollama

# Public project
git adr config set adr.ai.provider openai

Troubleshooting

"AI provider not configured"

# Set a provider
git adr config set adr.ai.provider openai

# Verify
git adr config get adr.ai.provider

"API key not found"

# Check environment variable
echo $OPENAI_API_KEY

# Set if missing
export OPENAI_API_KEY="sk-..."

# Add to shell config for persistence
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
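
In scripts that wrap git-adr, the same check can be done up front so a missing key fails with a clear message instead of a cryptic provider error. An illustrative helper, not part of git-adr:

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fail early with a clear message if the key is missing or empty."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it or add it to your shell config"
        )
    return key
```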

"Model not available"

# Check available models with your provider
# For Ollama:
ollama list

# Update model config
git adr config set adr.ai.model gpt-4-turbo

Ollama Connection Failed

# Start Ollama server
ollama serve

# Check it's running
curl http://localhost:11434/api/tags

# Pull a model if needed
ollama pull mistral

Rate Limiting

If you hit API rate limits:

  1. Wait and retry
  2. Switch to a model with higher rate limits
  3. Batch operations instead of many small requests
  4. Consider Ollama for development/testing
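
"Wait and retry" usually means exponential backoff: double the delay after each failure. A generic sketch (the `RuntimeError` stands in for whatever rate-limit exception your provider's client raises):

```python
import time

def with_backoff(call, retries: int = 4, base_delay: float = 1.0):
    """Retry a rate-limited call with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for the provider's rate-limit error
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```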
