Conversation

@jeremyeder
Contributor

Summary

Implements a community-driven leaderboard where users can submit their AgentReady assessment results via CLI. Submissions are validated through GitHub Actions and displayed on the existing GitHub Pages site.

Features

Self-Service Submission: agentready submit command creates PR automatically
Anti-Gaming Validation: Ownership verification + re-assessment
Multiple Views: Overall, by-language, by-size, most-improved
Historical Tracking: Multiple submissions per repo show improvement over time
Integrated with Docs: Leaderboard pages added to existing Jekyll site

Components

  1. CLI Submit Command (src/agentready/cli/submit.py)

    • Uses PyGithub to create PR automatically
    • Verifies user has commit access to submitted repo
    • Generates unique timestamp-based filenames
  2. Validation Workflow (.github/workflows/validate-leaderboard-submission.yml)

    • Re-runs assessment on submitted repository
    • Compares claimed vs actual score (±2 point tolerance)
    • Checks repository is public and submitter has access
    • Comments on PR with validation results
  3. Aggregation Script (scripts/generate-leaderboard-data.py)

    • Scans submissions/ directory
    • Generates docs/_data/leaderboard.json
    • Calculates rankings by score, language, size
    • Identifies "most improved" repositories
  4. Leaderboard Pages (docs/leaderboard/)

    • Main leaderboard with top 10 cards + full table
    • By-language rankings
    • Most-improved tracking
    • Tier-based color coding (Platinum/Gold/Silver/Bronze)
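As a rough illustration of the aggregation step (component 3), here is a minimal sketch. It assumes each submission has already been parsed into a dict with `org`, `repo`, `score`, and `timestamp` fields; the real script's field names and output layout may differ:

```python
from collections import defaultdict


def build_leaderboard(submissions: list[dict]) -> dict:
    """Group submissions by repo, rank by latest score, find most improved."""
    by_repo: dict[str, list[dict]] = defaultdict(list)
    for sub in submissions:
        by_repo[f"{sub['org']}/{sub['repo']}"].append(sub)

    overall, improved = [], []
    for repo, subs in by_repo.items():
        subs.sort(key=lambda s: s["timestamp"])  # ISO 8601 strings sort chronologically
        latest = subs[-1]["score"]
        overall.append({"repo": repo, "score": latest})
        if len(subs) > 1:  # historical tracking: multiple submissions per repo
            improved.append({"repo": repo, "delta": latest - subs[0]["score"]})

    overall.sort(key=lambda e: e["score"], reverse=True)
    improved.sort(key=lambda e: e["delta"], reverse=True)
    return {"overall": overall, "most_improved": improved}
```

The same grouped data can feed the by-language and by-size views by carrying extra fields through each submission dict.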

Implementation Plan

See complete specification: specs/leaderboard-feature-spec.md

Phase 1: CLI Submit Command (foundation)
Phase 2: Validation Workflow (anti-gaming)
Phase 3: Aggregation Script (data pipeline)
Phase 4: Leaderboard Pages (UI)

Example Usage

# Developer workflow
cd ~/my-awesome-project

# 1. Run assessment
agentready assess .

# 2. Submit to leaderboard
export GITHUB_TOKEN=ghp_xxxxx
agentready submit

# 3. Wait for validation → merge → leaderboard updates

Design Decisions

  • ISO 8601 Timestamps: 2025-12-03T14-30-45-assessment.json (colons replaced with dashes for filesystem safety; unique and sortable)
  • Submission Path: submissions/{org}/{repo}/{timestamp}-assessment.json
  • Score Tolerance: ±2 points (accounts for minor variations)
  • Rate Limiting: 1 submission per repo per 24 hours
  • Static Site Generation: Leverages existing Jekyll/GitHub Pages
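The decisions above can be sketched as a few small helpers. These are illustrative (function names are hypothetical, not the tool's API), but they encode the stated rules exactly: dash-separated ISO 8601 filenames, the `submissions/{org}/{repo}/` path layout, the ±2 point tolerance, and the 24-hour rate limit:

```python
from datetime import datetime, timedelta

TOLERANCE = 2.0                 # ±2 point score tolerance
RATE_LIMIT = timedelta(hours=24)  # 1 submission per repo per 24 hours


def submission_path(org: str, repo: str, when: datetime) -> str:
    """Filesystem-safe ISO 8601 timestamp: colons become dashes."""
    stamp = when.strftime("%Y-%m-%dT%H-%M-%S")
    return f"submissions/{org}/{repo}/{stamp}-assessment.json"


def score_matches(claimed: float, actual: float) -> bool:
    """Validation passes when the re-assessed score is within tolerance."""
    return abs(claimed - actual) <= TOLERANCE


def rate_limited(last: datetime, now: datetime) -> bool:
    """True when a repo submitted less than 24 hours ago."""
    return now - last < RATE_LIMIT
```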

Security & Anti-Gaming

  • Ownership verification (submitter must have commit access)
  • Re-assessment on validation (never trust submitted scores)
  • Public repo requirement (transparent)
  • Sandboxed assessment runs (isolated /tmp directory)
  • Rate limiting per repository
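The sandboxing point can be sketched as follows. This shows only the isolation pattern (run the assessment in a throwaway temp directory with a timeout), not the actual workflow code; in the real Action the submitted repo would be cloned into the sandbox first:

```python
import subprocess
import sys
import tempfile


def run_sandboxed(command: list[str]) -> str:
    """Run a command inside a throwaway temp directory (auto-deleted on exit)."""
    with tempfile.TemporaryDirectory() as sandbox:
        result = subprocess.run(
            command,
            cwd=sandbox,          # isolate from the validator's own checkout
            capture_output=True,
            text=True,
            timeout=600,          # don't let a hostile repo hang CI
            check=True,
        )
        return result.stdout


# Any process started this way sees only the empty sandbox directory
out = run_sandboxed([sys.executable, "-c", "import os; print(os.listdir('.'))"])
```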

Future Enhancements (Out of Scope)

  • Badges for README ([![AgentReady Score](https://img.shields.io/badge/...)](...))
  • API endpoint for programmatic access
  • Historical charts (score trends over time)
  • Filters/search on leaderboard UI

Ready for Review: The spec is complete and implementation-ready. This PR will be updated with actual implementation commits following the 4-phase plan.

🤖 Generated with Claude Code

Complete cold-start implementation guide for community leaderboard:
- CLI submit command (agentready submit)
- GitHub Action validation workflow
- Leaderboard aggregation script
- Jekyll pages integrated with existing docs

Enables self-service repository submissions with anti-gaming measures.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: c443dea5
Assessed: December 03, 2025 at 9:44 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 137 files
  • Markdown: 99 files
  • YAML: 21 files
  • JSON: 9 files
  • Shell: 6 files

Repository Stats

  • Total Files: 317
  • Total Lines: 175,959

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

### API Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| OpenAPI/Swagger Specifications | T3 | ⊘ not_applicable | — |

### Build & Development

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| One-Command Build/Setup | T2 | ✅ pass | 100 |
| Container/Virtualization Setup | T4 | ⊘ not_applicable | — |

### Code Organization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Separation of Concerns | T2 | ✅ pass | 98 |

### Code Quality

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Type Annotations | T1 | ❌ fail | 41 |
| Cyclomatic Complexity Thresholds | T3 | ✅ pass | 100 |
| Semantic Naming | T3 | ✅ pass | 100 |
| Structured Logging | T3 | ❌ fail | 0 |
| Code Smell Elimination | T4 | ⊘ not_applicable | — |

#### ❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 449/1369
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

#### ❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

### Context Window Optimization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| CLAUDE.md Configuration Files | T1 | ✅ pass | 100 |
| File Size Limits | T2 | ❌ fail | 55 |

#### ❌ File Size Limits

Measured: 2 huge, 8 large out of 137 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.5% of 137 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

### Dependency Management

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Lock Files for Reproducibility | T1 | ✅ pass | 100 |
| Dependency Freshness & Security | T2 | ⊘ not_applicable | — |

### Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Concise Documentation | T2 | ❌ fail | 70 |
| Inline Documentation | T2 | ✅ pass | 100 |

#### ❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md
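The heading-density figure reported above can be reproduced with a short calculation. This is a sketch of the metric as described (headings per 100 lines, target 3-5), not the tool's exact implementation:

```python
def heading_density(markdown: str) -> float:
    """Markdown headings per 100 lines of the document."""
    lines = markdown.splitlines()
    if not lines:
        return 0.0
    headings = sum(1 for line in lines if line.lstrip().startswith("#"))
    return round(100 * headings / len(lines), 1)
```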

**Examples**:

````markdown
# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .
```

## Features

- Fast repository scanning
- HTML and Markdown reports
- 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.
````

````markdown
# Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]
````

</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples**:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>

### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

🎯 Next Steps

Priority Improvements (highest impact first):

  1. Type Annotations (Tier 1) - +10.0 points potential
    • Add type annotations to function signatures
  2. Conventional Commit Messages (Tier 2) - +3.0 points potential
    • Configure conventional commits with commitlint
  3. File Size Limits (Tier 2) - +3.0 points potential
    • Refactor large files into smaller, focused modules
  4. Concise Documentation (Tier 2) - +3.0 points potential
    • Make documentation more concise and structured
  5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
    • Create Architecture Decision Records (ADRs) directory and document key decisions

📝 Assessment Metadata

  • Tool Version: AgentReady v1.0.0
  • Research Report: Bundled version
  • Repository Snapshot: c443dea
  • Assessment Duration: 1.4s

🤖 Generated with Claude Code

jeremyeder and others added 2 commits December 3, 2025 16:48
- Add PyGithub>=2.1.1 dependency for GitHub API integration
- Create submit.py with full PR creation workflow
- Validates GitHub token, assessment file, and repository access
- Generates unique ISO 8601 timestamp filenames
- Verifies submitter has commit access to repository
- Creates fork, branch, commits assessment, and opens PR
- Includes dry-run mode for testing
- Comprehensive error handling and user guidance

Submission workflow:
1. User runs: agentready submit
2. Command validates token and assessment
3. Verifies user has commit access to repo
4. Creates PR to agentready/agentready automatically
5. Validation workflow will run (next phase)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
**Phase 2**: Validation GitHub Action
- validate-leaderboard-submission.yml workflow
- Validates JSON schema, repo access, score accuracy (±2 tolerance)
- Re-runs assessment for verification
- Posts validation results as PR comments
- Secure: all user input via environment variables

**Phase 3**: Aggregation Script & Workflow
- scripts/generate-leaderboard-data.py for data generation
- Scans submissions/ directory, groups by repository
- Generates docs/_data/leaderboard.json for Jekyll
- Calculates overall, by-language, by-size, most-improved rankings
- update-leaderboard.yml workflow triggers on submissions merge

**Phase 4**: Jekyll Leaderboard Pages
- docs/leaderboard/index.md with top 10 cards + full table
- Tier-based color coding (Platinum/Gold/Silver/Bronze)
- Responsive CSS styling in docs/assets/css/leaderboard.css
- Added to navigation in _config.yml
- Empty leaderboard.json for initial build

**Code Quality**:
- All files formatted with black
- Imports sorted with isort
- Ruff linting passed
- Security: no command injection vulnerabilities in workflows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

⚠️ Broken links found in documentation. See workflow logs for details.

@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report — commit e2505626, assessed December 03, 2025 at 9:53 PM (AgentReady 2.8.1): Overall Score 80.9/100, Certification Level Gold, 20/30 attributes assessed, 328 files, 178,255 total lines. Detailed findings are identical to the report above, except File Size Limits scored 56 (2 files >1000 lines, 1.4% of 139 Python files).

- Replace template literals with string concatenation in github-script
- Add missing newlines at end of workflow files
- Fix ruff E402 error in regenerate_heatmap.py (import after sys.path)
- Ensures pre-commit check-yaml passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@github-actions
Copy link
Contributor

github-actions bot commented Dec 3, 2025

⚠️ Broken links found in documentation. See workflow logs for details.

@github-actions
Copy link
Contributor

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 5f5d218a
Assessed: December 03, 2025 at 9:56 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

Metric Value
Overall Score 80.9/100
Certification Level Gold
Attributes Assessed 20/30
Attributes Not Assessed 10
Assessment Duration 1.4s

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,282

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

#### ❌ Concise Documentation

**Measured**: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

**Evidence**:
- README length: 276 lines (excellent)
- Heading density: 14.5 per 100 lines (target: 3-5)
- 1 paragraph exceeds 10 lines (wall of text)

<details><summary><strong>📝 Remediation Steps</strong></summary>

Make documentation more concise and structured

1. Break long README into multiple documents (docs/ directory)
2. Add clear Markdown headings (##, ###) for structure
3. Convert prose paragraphs to bullet points where possible
4. Add table of contents for documents >100 lines
5. Use code blocks instead of describing commands in prose
6. Move detailed content to wiki or docs/, keep README focused

**Commands**:

```bash
# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md
```

**Examples**:

Good: concise with structure

```markdown
## Quick Start

    pip install -e .
    agentready assess .

## Features

- Fast repository scanning
- HTML and Markdown reports
- 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.
```

Bad: verbose prose

```markdown
This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]
```

</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

```

**Examples**:

Example ADR structure:

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>
### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

**Priority Improvements** (highest impact first):

1. **Type Annotations** (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. **Conventional Commit Messages** (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. **File Size Limits** (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. **Concise Documentation** (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. **Architecture Decision Records (ADRs)** (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions
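Item 2 in the list above calls for conventional commits; the format commitlint enforces has roughly this shape. A minimal sketch of the check, not commitlint itself:

```python
# Minimal sketch of a Conventional Commits check; the type list follows the
# Conventional Commits / Angular convention that commitlint's default
# config-conventional enforces.
import re

COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([\w.-]+\))?(!)?: .+"
)


def is_conventional(message: str) -> bool:
    """Check the first line of a commit message against the pattern."""
    return bool(COMMIT_RE.match(message.splitlines()[0]))
```

In practice the real check runs in a husky `commit-msg` hook via commitlint; this only illustrates the accepted syntax (`type(scope)!: subject`).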

## 📝 Assessment Metadata

- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version
- Repository Snapshot: 5f5d218
- Assessment Duration: 1.4s

🤖 Generated with Claude Code

jeremyeder and others added 2 commits December 3, 2025 16:57
Add regex pattern to ignore {{ variable }} syntax in markdown-link-check.
This prevents false positives on Jekyll/Liquid template variables like
{{ entry.url }} in leaderboard pages.

Fixes docs-lint workflow failure on PR #146.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Validates links in docs/ markdown files locally before push
- Uses same .markdown-link-check.json config as CI workflow
- Prevents broken link issues from reaching CI

Now runs on every commit that modifies docs/*.md files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: ea9d4504
Assessed: December 03, 2025 at 9:58 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


## 📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,285

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)
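The ladder above is a simple score-to-level mapping; as a sketch, with boundaries taken from the list:

```python
# Score-to-level mapping from the certification ladder above.
def certification_level(score: float) -> str:
    if score >= 90:
        return "Platinum"
    if score >= 75:
        return "Gold"
    if score >= 60:
        return "Silver"
    if score >= 40:
        return "Bronze"
    return "Needs Improvement"
```

For this report, `certification_level(80.9)` lands in the Gold band, matching the summary table.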

## 📋 Detailed Findings

### API Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| OpenAPI/Swagger Specifications | T3 | ⊘ not_applicable | — |

### Build & Development

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| One-Command Build/Setup | T2 | ✅ pass | 100 |
| Container/Virtualization Setup | T4 | ⊘ not_applicable | — |

### Code Organization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Separation of Concerns | T2 | ✅ pass | 98 |

### Code Quality

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Type Annotations | T1 | ❌ fail | 41 |
| Cyclomatic Complexity Thresholds | T3 | ✅ pass | 100 |
| Semantic Naming | T3 | ✅ pass | 100 |
| Structured Logging | T3 | ❌ fail | 0 |
| Code Smell Elimination | T4 | ⊘ not_applicable | — |

#### ❌ Type Annotations

**Measured**: 32.8% (Threshold: ≥80%)

**Evidence**:
- Typed functions: 451/1373
- Coverage: 32.8%

<details><summary><strong>📝 Remediation Steps</strong></summary>

Add type annotations to function signatures

1. For Python: Add type hints to function parameters and return types
2. For TypeScript: Enable strict mode in tsconfig.json
3. Use mypy or pyright for Python type checking
4. Use tsc --strict for TypeScript
5. Add type annotations gradually to existing code

**Commands**:

```bash
# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json
```

**Examples**:

```python
# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y
```

```jsonc
// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}
```

</details>
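The "Typed functions: 451/1373" evidence suggests a per-function count of annotated signatures. A rough sketch of that kind of measurement, assuming this counting rule; it is not AgentReady's actual implementation:

```python
# Rough sketch: count a function as "typed" if its signature carries any
# annotation (return type or positional-argument hint). Assumption for
# illustration only; AgentReady's real rule may differ.
import ast


def annotation_coverage(source: str) -> float:
    """Percentage of function definitions with at least one annotation."""
    tree = ast.parse(source)
    funcs = [
        node for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
    if not funcs:
        return 100.0
    typed = sum(
        1 for f in funcs
        if f.returns is not None or any(a.annotation for a in f.args.args)
    )
    return 100 * typed / len(funcs)
```

Under this rule, 451 typed out of 1373 functions yields the reported 32.8%.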

#### ❌ Structured Logging

**Measured**: not configured (Threshold: structured logging library)

**Evidence**:
- No structured logging library found
- Checked files: pyproject.toml
- Using built-in logging module (unstructured)

<details><summary><strong>📝 Remediation Steps</strong></summary>

Add structured logging library for machine-parseable logs

1. Choose structured logging library (structlog for Python, winston for Node.js)
2. Install library and configure JSON formatter
3. Add standard fields: timestamp, level, message, context
4. Include request context: request_id, user_id, session_id
5. Use consistent field naming (snake_case for Python)
6. Never log sensitive data (passwords, tokens, PII)
7. Configure different formats for dev (pretty) and prod (JSON)

**Commands**:

```bash
# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration
```

**Examples**:

```python
# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")
```

</details>

### Context Window Optimization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| CLAUDE.md Configuration Files | T1 | ✅ pass | 100 |
| File Size Limits | T2 | ❌ fail | 56 |

#### ❌ File Size Limits

**Measured**: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

**Evidence**:
- Found 2 files >1000 lines (1.4% of 139 files)
- Largest: tests/unit/test_models.py (1184 lines)

<details><summary><strong>📝 Remediation Steps</strong></summary>

Refactor large files into smaller, focused modules

1. Identify files >1000 lines
2. Split into logical submodules
3. Extract classes/functions into separate files
4. Maintain single responsibility principle

**Examples**:

```python
# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py
```

</details>

### Dependency Management

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Lock Files for Reproducibility | T1 | ✅ pass | 100 |
| Dependency Freshness & Security | T2 | ⊘ not_applicable | — |

### Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Concise Documentation | T2 | ❌ fail | 70 |
| Inline Documentation | T2 | ✅ pass | 100 |

#### ❌ Concise Documentation

**Measured**: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

**Evidence**:
- README length: 276 lines (excellent)
- Heading density: 14.5 per 100 lines (target: 3-5)
- 1 paragraph exceeds 10 lines (wall of text)

<details><summary><strong>📝 Remediation Steps</strong></summary>

Make documentation more concise and structured

1. Break long README into multiple documents (docs/ directory)
2. Add clear Markdown headings (##, ###) for structure
3. Convert prose paragraphs to bullet points where possible
4. Add table of contents for documents >100 lines
5. Use code blocks instead of describing commands in prose
6. Move detailed content to wiki or docs/, keep README focused

**Commands**:

```bash
# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md
```

**Examples**:

Good: concise with structure

```markdown
## Quick Start

    pip install -e .
    agentready assess .

## Features

- Fast repository scanning
- HTML and Markdown reports
- 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.
```

Bad: verbose prose

```markdown
This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]
```

</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

```

**Examples**:

Example ADR structure:

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>
### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

**Priority Improvements** (highest impact first):

1. **Type Annotations** (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. **Conventional Commit Messages** (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. **File Size Limits** (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. **Concise Documentation** (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. **Architecture Decision Records (ADRs)** (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version
- Repository Snapshot: ea9d450
- Assessment Duration: 1.4s

🤖 Generated with Claude Code

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 4791b200
Assessed: December 03, 2025 at 9:59 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


## 📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,291

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)


## 📝 Assessment Metadata

- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version
- Repository Snapshot: 4791b20
- Assessment Duration: 1.4s

🤖 Generated with Claude Code

Track which ruleset version was used for each assessment to ensure
fair comparisons. Scores are only directly comparable when assessed
with the same research version, as attributes and weights may change.

**Changes**:

1. **Assessment Model**:
   - Add `research_version` field to AssessmentMetadata
   - Scanner loads version from ResearchLoader during assessment
   - Captured in every assessment JSON file

2. **Leaderboard Data**:
   - Aggregation script extracts `research_version` from submissions
   - Includes version in history for tracking changes over time
   - Added to leaderboard.json for Jekyll display

3. **Leaderboard Display**:
   - New "Ruleset" column in leaderboard table
   - Shows research version for each submission
   - Helps users understand scoring context

4. **Validation Workflow**:
   - Extracts research version from submitted assessment
   - Compares claimed vs actual research version
   - Warns if versions differ (scores not directly comparable)
   - PR comment includes version info and mismatch warnings

**Why This Matters**:
- Research versions may add/remove/reweight attributes
- Comparing scores across versions can be misleading
- Users can now see exactly which ruleset was used
- Historical tracking shows how repositories improve

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
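The claimed-vs-actual comparison described in the validation workflow changes above can be sketched as follows; `check_research_version` is a hypothetical helper for illustration, not the workflow's actual code:

```python
# Hypothetical helper (not the actual workflow code): compare the
# research_version claimed in a submission against the one produced by
# re-assessment, warning when scores are not directly comparable.
def check_research_version(claimed: str, actual: str) -> str:
    if claimed == actual:
        return f"OK: both assessed with research version {claimed}"
    return (
        f"WARNING: submission used research version {claimed}, "
        f"re-assessment used {actual}; scores are not directly comparable"
    )
```

The workflow would post this message in its PR comment alongside the score-tolerance check.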
github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 9342896d
Assessed: December 03, 2025 at 10:05 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


## 📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,346

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

#### ❌ Concise Documentation

**Measured**: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

**Evidence**:
- README length: 276 lines (excellent)
- Heading density: 14.5 per 100 lines (target: 3-5)
- 1 paragraph exceeds 10 lines (wall of text)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Make documentation more concise and structured

1. Break long README into multiple documents (docs/ directory)
2. Add clear Markdown headings (##, ###) for structure
3. Convert prose paragraphs to bullet points where possible
4. Add table of contents for documents >100 lines
5. Use code blocks instead of describing commands in prose
6. Move detailed content to wiki or docs/, keep README focused

**Commands**:

```bash
# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md
```

**Examples**:

````markdown
# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .
```

## Features

- Fast repository scanning
- HTML and Markdown reports
- 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.
````

```markdown
# Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]
```

</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples** (example ADR structure):

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>
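Completing the two steps above requires a commitlint config file and a husky commit-msg hook. A typical layout is sketched below; exact husky wiring differs between husky v8 and v9, so treat this as a starting point rather than this project's verified setup:

```js
// commitlint.config.js
module.exports = { extends: ["@commitlint/config-conventional"] };
```

The `.husky/commit-msg` hook then runs `npx --no -- commitlint --edit "$1"` so every commit message is checked against the conventional-commit rules before it is recorded.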
### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

**Priority Improvements** (highest impact first):

1. **Type Annotations** (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. **Conventional Commit Messages** (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. **File Size Limits** (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. **Concise Documentation** (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. **Architecture Decision Records (ADRs)** (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version
- Repository Snapshot: 9342896
- Assessment Duration: 1.4s

🤖 Generated with Claude Code

Both HTML and Markdown reporters were displaying hardcoded 'v1.0.0'
instead of actual AgentReady and research versions from metadata.

**Changes**:
- Markdown footer: Use metadata.agentready_version and metadata.research_version
- HTML footer: Same + add assessed_by and assessment_timestamp_human
- Now shows accurate version info for reproducibility

**Before**:
- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version

**After**:
- AgentReady Version: v2.8.1
- Research Version: v1.2.0
- Assessed By: jeder@hostname
- Assessment Date: December 3, 2025 at 2:30 PM

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

## 🤖 AgentReady Assessment Report

- **Repository**: agentready
- **Path**: /home/runner/work/agentready/agentready
- **Branch**: HEAD | **Commit**: 1a4db542
- **Assessed**: December 03, 2025 at 10:07 PM
- **AgentReady Version**: 2.8.1
- **Run by**: runner@runnervmoqczp


## 📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.7s |

### Languages Detected

- Python: 139 files
- Markdown: 103 files
- YAML: 23 files
- JSON: 10 files
- Shell: 6 files

### Repository Stats

- Total Files: 328
- Total Lines: 178,351

## 🎖️ Certification Ladder

- 💎 Platinum (90-100)
- 🥇 Gold (75-89) → **YOUR LEVEL** ←
- 🥈 Silver (60-74)
- 🥉 Bronze (40-59)
- ⚠️ Needs Improvement (0-39)
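The ladder above is a straight mapping from overall score to certification level; a minimal sketch of that mapping, with the band boundaries taken from the list (the function name is an assumption of this example, not AgentReady's API):

```python
def certification_level(score: float) -> str:
    """Map an overall score (0-100) to the certification ladder above."""
    if score >= 90:
        return "Platinum"
    if score >= 75:
        return "Gold"
    if score >= 60:
        return "Silver"
    if score >= 40:
        return "Bronze"
    return "Needs Improvement"
```

This report's 80.9 lands in the Gold band (75-89).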

## 📋 Detailed Findings

### API Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| OpenAPI/Swagger Specifications | T3 | ⊘ not_applicable | — |

### Build & Development

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| One-Command Build/Setup | T2 | ✅ pass | 100 |
| Container/Virtualization Setup | T4 | ⊘ not_applicable | — |

### Code Organization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Separation of Concerns | T2 | ✅ pass | 98 |

### Code Quality

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Type Annotations | T1 | ❌ fail | 41 |
| Cyclomatic Complexity Thresholds | T3 | ✅ pass | 100 |
| Semantic Naming | T3 | ✅ pass | 100 |
| Structured Logging | T3 | ❌ fail | 0 |
| Code Smell Elimination | T4 | ⊘ not_applicable | — |

#### ❌ Type Annotations

**Measured**: 32.8% (Threshold: ≥80%)

**Evidence**:
- Typed functions: 451/1373
- Coverage: 32.8%
<details><summary><strong>📝 Remediation Steps</strong></summary>

Add type annotations to function signatures

1. For Python: Add type hints to function parameters and return types
2. For TypeScript: Enable strict mode in tsconfig.json
3. Use mypy or pyright for Python type checking
4. Use tsc --strict for TypeScript
5. Add type annotations gradually to existing code

**Commands**:

```bash
# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json
```

**Examples**:

```python
# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y
```

```
// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}
```

</details>
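Step 5 (adding annotations gradually) is often enforced by tightening mypy one package at a time, so new code must be typed while legacy code is temporarily exempt. A sketch of such a per-module configuration — the module names are placeholders, not this repository's actual packages:

```ini
# mypy.ini: strict for new code, lenient for legacy code
[mypy]
python_version = 3.11
warn_unused_ignores = True

# New or refactored packages must be fully annotated
[mypy-yourpkg.newcode.*]
disallow_untyped_defs = True

# Legacy packages are exempt until migrated
[mypy-yourpkg.legacy.*]
ignore_errors = True
```

Shrinking the exempt list over time ratchets coverage toward the ≥80% threshold.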

#### ❌ Structured Logging

**Measured**: not configured (Threshold: structured logging library)

**Evidence**:
- No structured logging library found
- Checked files: pyproject.toml
- Using built-in logging module (unstructured)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Add structured logging library for machine-parseable logs

1. Choose structured logging library (structlog for Python, winston for Node.js)
2. Install library and configure JSON formatter
3. Add standard fields: timestamp, level, message, context
4. Include request context: request_id, user_id, session_id
5. Use consistent field naming (snake_case for Python)
6. Never log sensitive data (passwords, tokens, PII)
7. Configure different formats for dev (pretty) and prod (JSON)

**Commands**:

```bash
# Install structlog
pip install structlog

# Configure structlog: see the examples below
```

**Examples**:

```python
# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")
```

</details>

### Context Window Optimization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| CLAUDE.md Configuration Files | T1 | ✅ pass | 100 |
| File Size Limits | T2 | ❌ fail | 56 |

#### ❌ File Size Limits

**Measured**: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

**Evidence**:
- Found 2 files >1000 lines (1.4% of 139 files)
- Largest: tests/unit/test_models.py (1184 lines)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Refactor large files into smaller, focused modules

1. Identify files >1000 lines
2. Split into logical submodules
3. Extract classes/functions into separate files
4. Maintain single responsibility principle

**Examples**:

```
# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py
```

</details>

### Dependency Management

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Lock Files for Reproducibility | T1 | ✅ pass | 100 |
| Dependency Freshness & Security | T2 | ⊘ not_applicable | — |

### Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Concise Documentation | T2 | ❌ fail | 70 |
| Inline Documentation | T2 | ✅ pass | 100 |

#### ❌ Concise Documentation

**Measured**: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

**Evidence**:
- README length: 276 lines (excellent)
- Heading density: 14.5 per 100 lines (target: 3-5)
- 1 paragraph exceeds 10 lines (wall of text)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Make documentation more concise and structured

1. Break long README into multiple documents (docs/ directory)
2. Add clear Markdown headings (##, ###) for structure
3. Convert prose paragraphs to bullet points where possible
4. Add table of contents for documents >100 lines
5. Use code blocks instead of describing commands in prose
6. Move detailed content to wiki or docs/, keep README focused

**Commands**:

```bash
# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md
```

**Examples**:

````markdown
# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .
```

## Features

- Fast repository scanning
- HTML and Markdown reports
- 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.
````

```markdown
# Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]
```

</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples** (example ADR structure):

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>
### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

**Priority Improvements** (highest impact first):

1. **Type Annotations** (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. **Conventional Commit Messages** (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. **File Size Limits** (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. **Concise Documentation** (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. **Architecture Decision Records (ADRs)** (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- AgentReady Version: v2.8.1
- Research Version: v1.0.0
- Repository Snapshot: 1a4db54
- Assessment Duration: 1.7s
- Assessed By: runner@runnervmoqczp
- Assessment Date: December 03, 2025 at 10:07 PM

🤖 Generated with Claude Code

- Add research_version parameter to all AssessmentMetadata.create() calls in tests
- Add graceful fallback for None metadata in Markdown reporter footer
- Add conditional check for None metadata in HTML template footer
- Fixes test failures from metadata signature change
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

## 🤖 AgentReady Assessment Report

- **Repository**: agentready
- **Path**: /home/runner/work/agentready/agentready
- **Branch**: HEAD | **Commit**: 364b5e2c
- **Assessed**: December 03, 2025 at 10:18 PM
- **AgentReady Version**: 2.8.1
- **Run by**: runner@runnervmoqczp


## 📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.3s |

### Languages Detected

- Python: 139 files
- Markdown: 103 files
- YAML: 23 files
- JSON: 10 files
- Shell: 6 files

### Repository Stats

- Total Files: 328
- Total Lines: 178,372

## 🎖️ Certification Ladder

- 💎 Platinum (90-100)
- 🥇 Gold (75-89) → **YOUR LEVEL** ←
- 🥈 Silver (60-74)
- 🥉 Bronze (40-59)
- ⚠️ Needs Improvement (0-39)

## 📋 Detailed Findings

### API Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| OpenAPI/Swagger Specifications | T3 | ⊘ not_applicable | — |

### Build & Development

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| One-Command Build/Setup | T2 | ✅ pass | 100 |
| Container/Virtualization Setup | T4 | ⊘ not_applicable | — |

### Code Organization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Separation of Concerns | T2 | ✅ pass | 98 |

### Code Quality

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Type Annotations | T1 | ❌ fail | 41 |
| Cyclomatic Complexity Thresholds | T3 | ✅ pass | 100 |
| Semantic Naming | T3 | ✅ pass | 100 |
| Structured Logging | T3 | ❌ fail | 0 |
| Code Smell Elimination | T4 | ⊘ not_applicable | — |

#### ❌ Type Annotations

**Measured**: 32.8% (Threshold: ≥80%)

**Evidence**:
- Typed functions: 451/1373
- Coverage: 32.8%
<details><summary><strong>📝 Remediation Steps</strong></summary>

Add type annotations to function signatures

1. For Python: Add type hints to function parameters and return types
2. For TypeScript: Enable strict mode in tsconfig.json
3. Use mypy or pyright for Python type checking
4. Use tsc --strict for TypeScript
5. Add type annotations gradually to existing code

**Commands**:

```bash
# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json
```

**Examples**:

```python
# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y
```

```
// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}
```

</details>

#### ❌ Structured Logging

**Measured**: not configured (Threshold: structured logging library)

**Evidence**:
- No structured logging library found
- Checked files: pyproject.toml
- Using built-in logging module (unstructured)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Add structured logging library for machine-parseable logs

1. Choose structured logging library (structlog for Python, winston for Node.js)
2. Install library and configure JSON formatter
3. Add standard fields: timestamp, level, message, context
4. Include request context: request_id, user_id, session_id
5. Use consistent field naming (snake_case for Python)
6. Never log sensitive data (passwords, tokens, PII)
7. Configure different formats for dev (pretty) and prod (JSON)

**Commands**:

```bash
# Install structlog
pip install structlog

# Configure structlog: see the examples below
```

**Examples**:

```python
# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")
```

</details>

### Context Window Optimization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| CLAUDE.md Configuration Files | T1 | ✅ pass | 100 |
| File Size Limits | T2 | ❌ fail | 56 |

#### ❌ File Size Limits

**Measured**: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

**Evidence**:
- Found 2 files >1000 lines (1.4% of 139 files)
- Largest: tests/unit/test_models.py (1192 lines)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Refactor large files into smaller, focused modules

1. Identify files >1000 lines
2. Split into logical submodules
3. Extract classes/functions into separate files
4. Maintain single responsibility principle

**Examples**:

```
# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py
```

</details>

### Dependency Management

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Lock Files for Reproducibility | T1 | ✅ pass | 100 |
| Dependency Freshness & Security | T2 | ⊘ not_applicable | — |

### Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Concise Documentation | T2 | ❌ fail | 70 |
| Inline Documentation | T2 | ✅ pass | 100 |

#### ❌ Concise Documentation

**Measured**: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

**Evidence**:
- README length: 276 lines (excellent)
- Heading density: 14.5 per 100 lines (target: 3-5)
- 1 paragraph exceeds 10 lines (wall of text)
<details><summary><strong>📝 Remediation Steps</strong></summary>

Make documentation more concise and structured

1. Break long README into multiple documents (docs/ directory)
2. Add clear Markdown headings (##, ###) for structure
3. Convert prose paragraphs to bullet points where possible
4. Add table of contents for documents >100 lines
5. Use code blocks instead of describing commands in prose
6. Move detailed content to wiki or docs/, keep README focused

**Commands**:

```bash
# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md
```

**Examples**:

````markdown
# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .
```

## Features

- Fast repository scanning
- HTML and Markdown reports
- 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.
````

```markdown
# Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]
```

</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples** (example ADR structure):

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>
### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

**Priority Improvements** (highest impact first):

1. **Type Annotations** (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. **Conventional Commit Messages** (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. **File Size Limits** (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. **Concise Documentation** (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. **Architecture Decision Records (ADRs)** (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- AgentReady Version: v2.8.1
- Research Version: v1.0.0
- Repository Snapshot: 364b5e2
- Assessment Duration: 1.3s
- Assessed By: runner@runnervmoqczp
- Assessment Date: December 03, 2025 at 10:18 PM

🤖 Generated with Claude Code

@jeremyeder jeremyeder merged commit fea0b3e into main Dec 3, 2025
9 of 11 checks passed
github-actions bot pushed a commit that referenced this pull request Dec 3, 2025
# [2.9.0](v2.8.1...v2.9.0) (2025-12-03)

### Features

* Community Leaderboard for AgentReady Scores ([#146](#146)) ([fea0b3e](fea0b3e))
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🎉 This PR is included in version 2.9.0 🎉

The release is available as a GitHub release

Your semantic-release bot 📦🚀

github-actions bot pushed a commit to chambridge/agentready that referenced this pull request Jan 14, 2026
# 1.0.0 (2026-01-14)

### Bug Fixes

* add bounded retry logic for LLM rate limit handling ([ambient-code#205](https://github.com/chambridge/agentready/issues/205)) ([6ecb786](6ecb786)), closes [ambient-code#104](https://github.com/chambridge/agentready/issues/104)
* Add comprehensive subprocess security guardrails (fixes [ambient-code#57](https://github.com/chambridge/agentready/issues/57)) ([ambient-code#66](https://github.com/chambridge/agentready/issues/66)) ([454b80e](454b80e))
* Add comprehensive YAML validation to prevent attacks (fixes [ambient-code#56](https://github.com/chambridge/agentready/issues/56)) ([ambient-code#63](https://github.com/chambridge/agentready/issues/63)) ([31ecb3a](31ecb3a))
* add repository checkout step to Claude Code Action workflow ([17aa0cf](17aa0cf))
* add uv.lock to recognized lockfiles ([ambient-code#143](https://github.com/chambridge/agentready/issues/143)) ([a98dc87](a98dc87)), closes [ambient-code#137](https://github.com/chambridge/agentready/issues/137)
* address P1 code quality issues from code review ([ambient-code#36](https://github.com/chambridge/agentready/issues/36)) ([5976332](5976332))
* address P1 code quality issues from code review ([ambient-code#37](https://github.com/chambridge/agentready/issues/37)) ([4be1d5e](4be1d5e))
* address P1 code quality issues from code review ([ambient-code#38](https://github.com/chambridge/agentready/issues/38)) ([77f2300](77f2300))
* **assessors:** search recursively for OpenAPI specification files ([ambient-code#127](https://github.com/chambridge/agentready/issues/127)) ([e2a5778](e2a5778))
* correct Assessment field name in demo command ([ambient-code#41](https://github.com/chambridge/agentready/issues/41)) ([b48622d](b48622d)), closes [ambient-code#12](https://github.com/chambridge/agentready/issues/12)
* Correct datetime import pattern in RepomixService ([ambient-code#65](https://github.com/chambridge/agentready/issues/65)) ([517aa6e](517aa6e))
* correct GitHub repository link in site navigation ([5492278](5492278))
* correct Liquid syntax in developer-guide (elif -> elsif) ([75f3b1d](75f3b1d))
* Create shared test fixtures and fix Assessment schema issues ([ambient-code#114](https://github.com/chambridge/agentready/issues/114)) ([46baa13](46baa13))
* disable attestations for Test PyPI to avoid conflict ([ambient-code#155](https://github.com/chambridge/agentready/issues/155)) ([a33e3cd](a33e3cd)), closes [pypa/#action-pypi-publish](https://github.com/chambridge/agentready/issues/action-pypi-publish)
* downgrade docker/metadata-action to v5 and fix shellcheck warnings ([12f5509](12f5509))
* enable Harbor task filtering for smoketest support ([ambient-code#222](https://github.com/chambridge/agentready/issues/222)) ([f780188](f780188))
* exclude DEPLOYMENT.md and SETUP_SUMMARY.md from Jekyll build ([9611207](9611207))
* Improve report metadata display with clean table format ([ca361a4](ca361a4))
* leaderboard workflow and SSH URL support ([ambient-code#147](https://github.com/chambridge/agentready/issues/147)) ([de28cd0](de28cd0))
* make E2E test timeouts configurable and add sensitive directory test ([ambient-code#206](https://github.com/chambridge/agentready/issues/206)) ([27e87e5](27e87e5)), closes [ambient-code#104](https://github.com/chambridge/agentready/issues/104) [ambient-code#192](https://github.com/chambridge/agentready/issues/192)
* P0 security and logic bugs from code review ([2af2346](2af2346))
* Prevent API key exposure in environment and logs (fixes [ambient-code#55](https://github.com/chambridge/agentready/issues/55)) ([ambient-code#64](https://github.com/chambridge/agentready/issues/64)) ([4d1d001](4d1d001))
* Prevent command injection in CommandFix.apply() (fixes [ambient-code#52](https://github.com/chambridge/agentready/issues/52)) ([ambient-code#60](https://github.com/chambridge/agentready/issues/60)) ([49be28e](49be28e))
* Prevent path traversal in LLM cache (fixes [ambient-code#53](https://github.com/chambridge/agentready/issues/53)) ([ambient-code#61](https://github.com/chambridge/agentready/issues/61)) ([2bf052d](2bf052d))
* Prevent XSS in HTML reports (fixes [ambient-code#54](https://github.com/chambridge/agentready/issues/54)) ([ambient-code#62](https://github.com/chambridge/agentready/issues/62)) ([7c60c69](7c60c69))
* rename research report in data directory ([b8ddfdc](b8ddfdc))
* replace all remaining elif with elsif in developer-guide ([73f16fc](73f16fc))
* Resolve 35 pytest failures through model validation and path sanitization improvements ([ambient-code#115](https://github.com/chambridge/agentready/issues/115)) ([4fbfee0](4fbfee0))
* resolve all test suite failures - achieve zero failures ([ambient-code#180](https://github.com/chambridge/agentready/issues/180)) ([990fa2d](990fa2d)), closes [ambient-code#148](https://github.com/chambridge/agentready/issues/148) [ambient-code#147](https://github.com/chambridge/agentready/issues/147) [ambient-code#145](https://github.com/chambridge/agentready/issues/145)
* resolve broken links and workflow failures ([ambient-code#160](https://github.com/chambridge/agentready/issues/160)) ([fbf5cf7](fbf5cf7))
* Resolve merge conflicts in CLI main module ([ambient-code#59](https://github.com/chambridge/agentready/issues/59)) ([9e0bf2d](9e0bf2d))
* resolve YAML syntax error in continuous-learning workflow ([ambient-code#172](https://github.com/chambridge/agentready/issues/172)) ([3d40fcc](3d40fcc))
* resolve YAML syntax error in update-docs workflow and add actionlint ([ambient-code#173](https://github.com/chambridge/agentready/issues/173)) ([97b06af](97b06af))
* Sanitize sensitive data in HTML reports (fixes [ambient-code#58](https://github.com/chambridge/agentready/issues/58)) ([ambient-code#67](https://github.com/chambridge/agentready/issues/67)) ([6fbac76](6fbac76))
* set correct baseurl for GitHub Pages subdirectory deployment ([c4db765](c4db765))
* skip PR comments for external forks to prevent permission errors ([ambient-code#163](https://github.com/chambridge/agentready/issues/163)) ([2a29fb8](2a29fb8))
* update --version flag to show correct version and research report date ([ambient-code#221](https://github.com/chambridge/agentready/issues/221)) ([5a85abb](5a85abb))
* Update Claude workflow to trigger on [@claude](https://github.com/claude) mentions ([ambient-code#35](https://github.com/chambridge/agentready/issues/35)) ([a8a3fab](a8a3fab))
* **workflows:** ensure post-comment step runs after Claude Code Action ([b087e5c](b087e5c))
* **workflows:** handle all event types in agentready-dev workflow ([9b942bf](9b942bf))
* **workflows:** improve error handling and logging for comment posting ([9ea1e6b](9ea1e6b))
* **workflows:** improve issue number extraction and add debug step ([ecd896b](ecd896b))
* **workflows:** remove if:always() to test step execution ([ff0bb12](ff0bb12))
* **workflows:** simplify post-comment step condition ([1bbf40a](1bbf40a))

### Features

* add agentready-dev Claude agent specification ([ambient-code#44](https://github.com/chambridge/agentready/issues/44)) ([0f61f5c](0f61f5c))
* add ambient-code/agentready to leaderboard ([ambient-code#148](https://github.com/chambridge/agentready/issues/148)) ([621152e](621152e))
* Add automated demo command for AgentReady ([ambient-code#24](https://github.com/chambridge/agentready/issues/24)) ([f4e89d9](f4e89d9)), closes [ambient-code#1](https://github.com/chambridge/agentready/issues/1) [ambient-code#25](https://github.com/chambridge/agentready/issues/25)
* add Claude Code GitHub Action for [@claude](https://github.com/claude) mentions ([3e7224d](3e7224d))
* Add comprehensive unit tests for utility modules (privacy.py and subprocess_utils.py) ([ambient-code#111](https://github.com/chambridge/agentready/issues/111)) ([9d3dece](9d3dece))
* Add customizable HTML report themes with runtime switching ([ambient-code#46](https://github.com/chambridge/agentready/issues/46)) ([7eeaf84](7eeaf84)), closes [ambient-code#10](https://github.com/chambridge/agentready/issues/10)
* Add Doubleagent - specialized AgentReady development agent ([ambient-code#30](https://github.com/chambridge/agentready/issues/30)) ([0ab54cb](0ab54cb))
* add GitHub organization scanning to assess-batch command ([ambient-code#118](https://github.com/chambridge/agentready/issues/118)) ([e306314](e306314))
* add Harbor Terminal-Bench comparison for agent effectiveness ([ambient-code#199](https://github.com/chambridge/agentready/issues/199)) ([a56e318](a56e318))
* Add Interactive Dashboard backlog item ([adfc4c8](adfc4c8))
* add interactive heatmap visualization for batch assessments ([ambient-code#136](https://github.com/chambridge/agentready/issues/136)) ([4d44fc3](4d44fc3))
* Add interactive HTML report generation ([18664ea](18664ea))
* add Memory MCP server allow list to repository settings ([ambient-code#203](https://github.com/chambridge/agentready/issues/203)) ([41d87bb](41d87bb))
* add quay/quay to leaderboard ([ambient-code#162](https://github.com/chambridge/agentready/issues/162)) ([d6e8df0](d6e8df0))
* add release pipeline coldstart prompt ([ambient-code#19](https://github.com/chambridge/agentready/issues/19)) ([9a3880c](9a3880c)), closes [ambient-code#18](https://github.com/chambridge/agentready/issues/18)
* Add Repomix integration for AI-friendly repository context generation ([ambient-code#29](https://github.com/chambridge/agentready/issues/29)) ([92bdde1](92bdde1)), closes [ambient-code#24](https://github.com/chambridge/agentready/issues/24) [ambient-code#1](https://github.com/chambridge/agentready/issues/1) [ambient-code#25](https://github.com/chambridge/agentready/issues/25)
* add report header with repository metadata ([ambient-code#28](https://github.com/chambridge/agentready/issues/28)) ([7a8b34a](7a8b34a))
* Add research report management CLI commands ([ambient-code#45](https://github.com/chambridge/agentready/issues/45)) ([e1be488](e1be488)), closes [ambient-code#7](https://github.com/chambridge/agentready/issues/7)
* Add security & quality improvements from code review ([ambient-code#40](https://github.com/chambridge/agentready/issues/40)) ([13cd3ca](13cd3ca))
* Add security & quality improvements from code review ([ambient-code#49](https://github.com/chambridge/agentready/issues/49)) ([889d6ed](889d6ed))
* Add SWE-bench experiment system for validating AgentReady impact ([ambient-code#124](https://github.com/chambridge/agentready/issues/124)) ([15edbba](15edbba))
* Add weekly research update skill and automation ([ambient-code#145](https://github.com/chambridge/agentready/issues/145)) ([7ba17a6](7ba17a6))
* **assessors:** implement File Size Limits assessor (Tier 2) ([ambient-code#141](https://github.com/chambridge/agentready/issues/141)) ([248467f](248467f))
* Auto-sync CLAUDE.md during semantic-release ([ambient-code#101](https://github.com/chambridge/agentready/issues/101)) ([36b48cb](36b48cb))
* automate PyPI publishing with trusted publishing (OIDC) ([ambient-code#154](https://github.com/chambridge/agentready/issues/154)) ([71f4632](71f4632))
* Batch Report Enhancements + Bootstrap Template Inheritance (Phase 2 Task 5) ([ambient-code#133](https://github.com/chambridge/agentready/issues/133)) ([7762b23](7762b23))
* Community Leaderboard for AgentReady Scores ([ambient-code#146](https://github.com/chambridge/agentready/issues/146)) ([fea0b3e](fea0b3e))
* Complete Phases 5-7 - Markdown reports, testing, and polish ([7659623](7659623))
* consolidate GitHub Actions workflows by purpose ([ambient-code#217](https://github.com/chambridge/agentready/issues/217)) ([717ca6b](717ca6b)), closes [ambient-code#221](https://github.com/chambridge/agentready/issues/221)
* container support ([ambient-code#171](https://github.com/chambridge/agentready/issues/171)) ([c6874ea](c6874ea))
* convert AgentReady assessment to on-demand workflow ([ambient-code#213](https://github.com/chambridge/agentready/issues/213)) ([b5a1ce0](b5a1ce0)), closes [ambient-code#191](https://github.com/chambridge/agentready/issues/191)
* enhance assessors with multi-language support and security ([ambient-code#200](https://github.com/chambridge/agentready/issues/200)) ([85712f2](85712f2)), closes [ambient-code#10](https://github.com/chambridge/agentready/issues/10)
* Harbor framework integration for Terminal-Bench evaluations ([ambient-code#202](https://github.com/chambridge/agentready/issues/202)) ([d73a8c8](d73a8c8)), closes [ambient-code#4](https://github.com/chambridge/agentready/issues/4) [ambient-code#178](https://github.com/chambridge/agentready/issues/178)
* Implement AgentReady MVP with scoring engine ([54a96cb](54a96cb))
* Implement align subcommand for automated remediation (Issue [ambient-code#14](https://github.com/chambridge/agentready/issues/14)) ([ambient-code#34](https://github.com/chambridge/agentready/issues/34)) ([06f04dc](06f04dc))
* Implement ArchitectureDecisionsAssessor (fixes [ambient-code#81](https://github.com/chambridge/agentready/issues/81)) ([ambient-code#89](https://github.com/chambridge/agentready/issues/89)) ([9e782e5](9e782e5))
* implement automated semantic release pipeline ([ambient-code#20](https://github.com/chambridge/agentready/issues/20)) ([b579235](b579235))
* implement bootstrap command for GitHub infrastructure ([0af06c4](0af06c4)), closes [ambient-code#2](https://github.com/chambridge/agentready/issues/2)
* Implement BranchProtectionAssessor stub (fixes [ambient-code#86](https://github.com/chambridge/agentready/issues/86)) ([ambient-code#98](https://github.com/chambridge/agentready/issues/98)) ([44c4b17](44c4b17))
* Implement CICDPipelineVisibilityAssessor (fixes [ambient-code#85](https://github.com/chambridge/agentready/issues/85)) ([ambient-code#91](https://github.com/chambridge/agentready/issues/91)) ([e68285c](e68285c))
* Implement CodeSmellsAssessor stub (fixes [ambient-code#87](https://github.com/chambridge/agentready/issues/87)) ([ambient-code#99](https://github.com/chambridge/agentready/issues/99)) ([f06b2a8](f06b2a8))
* Implement ConciseDocumentationAssessor (fixes [ambient-code#76](https://github.com/chambridge/agentready/issues/76)) ([ambient-code#93](https://github.com/chambridge/agentready/issues/93)) ([c356cd5](c356cd5))
* Implement InlineDocumentationAssessor (fixes [ambient-code#77](https://github.com/chambridge/agentready/issues/77)) ([ambient-code#94](https://github.com/chambridge/agentready/issues/94)) ([e56e570](e56e570))
* Implement IssuePRTemplatesAssessor (fixes [ambient-code#84](https://github.com/chambridge/agentready/issues/84)) ([ambient-code#90](https://github.com/chambridge/agentready/issues/90)) ([819d7b7](819d7b7))
* Implement multi-repository batch assessment (Phase 1 of issue [ambient-code#68](https://github.com/chambridge/agentready/issues/68)) ([ambient-code#74](https://github.com/chambridge/agentready/issues/74)) ([befc0d5](befc0d5))
* Implement OneCommandSetupAssessor (fixes [ambient-code#75](https://github.com/chambridge/agentready/issues/75)) ([ambient-code#88](https://github.com/chambridge/agentready/issues/88)) ([668ba1b](668ba1b))
* Implement OpenAPISpecsAssessor (fixes [ambient-code#80](https://github.com/chambridge/agentready/issues/80)) ([ambient-code#97](https://github.com/chambridge/agentready/issues/97)) ([45ae36e](45ae36e))
* implement Phase 2 multi-repository assessment reporting ([ambient-code#117](https://github.com/chambridge/agentready/issues/117)) ([8da56c2](8da56c2)), closes [ambient-code#69](https://github.com/chambridge/agentready/issues/69)
* implement report schema versioning ([ambient-code#43](https://github.com/chambridge/agentready/issues/43)) ([4c4752c](4c4752c))
* Implement SemanticNamingAssessor (fixes [ambient-code#82](https://github.com/chambridge/agentready/issues/82)) ([ambient-code#95](https://github.com/chambridge/agentready/issues/95)) ([d87a280](d87a280))
* Implement SeparationOfConcernsAssessor (fixes [ambient-code#78](https://github.com/chambridge/agentready/issues/78)) ([ambient-code#92](https://github.com/chambridge/agentready/issues/92)) ([99bfe28](99bfe28))
* Implement StructuredLoggingAssessor (fixes [ambient-code#79](https://github.com/chambridge/agentready/issues/79)) ([ambient-code#96](https://github.com/chambridge/agentready/issues/96)) ([2b87ca7](2b87ca7))
* Phase 1 Task 1 - Consolidate Security Validation Patterns ([ambient-code#129](https://github.com/chambridge/agentready/issues/129)) ([8580c45](8580c45)), closes [ambient-code#122](https://github.com/chambridge/agentready/issues/122)
* Phase 1 Tasks 2-3 - Consolidate Reporter Base & Assessor Factory ([ambient-code#131](https://github.com/chambridge/agentready/issues/131)) ([8e12bf9](8e12bf9)), closes [ambient-code#122](https://github.com/chambridge/agentready/issues/122)
* Phase 2 Task 4 - Replace manual config validation with Pydantic ([ambient-code#134](https://github.com/chambridge/agentready/issues/134)) ([d83cf58](d83cf58))
* Redesign homepage features with two-column layout and research links ([ambient-code#189](https://github.com/chambridge/agentready/issues/189)) ([570087d](570087d)), closes [ambient-code#187](https://github.com/chambridge/agentready/issues/187)
* redesign HTML report with dark theme and larger fonts ([ambient-code#39](https://github.com/chambridge/agentready/issues/39)) ([59f6702](59f6702))
* Rename 'learn' command to 'extract-skills' for clarity ([ambient-code#125](https://github.com/chambridge/agentready/issues/125)) ([64d6563](64d6563)), closes [ambient-code#123](https://github.com/chambridge/agentready/issues/123)
* replace markdown-link-check with lychee for link validation ([ambient-code#177](https://github.com/chambridge/agentready/issues/177)) ([f1a4545](f1a4545))
* Standardize on Python 3.12+ with forward compatibility for 3.13 ([ambient-code#132](https://github.com/chambridge/agentready/issues/132)) ([84f2c46](84f2c46))
* Terminal-Bench eval harness (MVP Phase 1) ([ambient-code#178](https://github.com/chambridge/agentready/issues/178)) ([d06bab4](d06bab4)), closes [ambient-code#171](https://github.com/chambridge/agentready/issues/171)
* **workflows:** add comment posting for [@agentready-dev](https://github.com/agentready-dev) agent ([5dff614](5dff614))

### Performance Improvements

* implement lazy loading for heavy CLI commands ([ambient-code#151](https://github.com/chambridge/agentready/issues/151)) ([6a7cd4e](6a7cd4e))
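The lazy-loading entry above defers importing heavy dependencies until a CLI command is actually invoked, so startup stays fast. A minimal sketch of that pattern (the helper name and modules are illustrative, not the project's actual code):

```python
import importlib


def make_lazy_command(module_name, attr):
    """Return a callable that imports module_name only on first invocation."""
    def command(*args, **kwargs):
        # The heavy import happens here, at call time, not at program startup.
        module = importlib.import_module(module_name)
        return getattr(module, attr)(*args, **kwargs)
    return command


# "json" stands in for a slow-to-import dependency: nothing is imported
# until the command is actually run.
dumps = make_lazy_command("json", "dumps")
print(dumps({"score": 87}))  # → {"score": 87}
```

In a real CLI each subcommand would wrap its own implementation module this way, so `agentready --help` never pays the import cost of commands the user did not ask for.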

### BREAKING CHANGES

* Users must update scripts from 'agentready learn' to 'agentready extract-skills'. All flags and options remain identical.
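Because the release note says only the command name changed and all flags are identical, existing scripts can be migrated with a plain substitution. A sketch, using a hypothetical CI script and a made-up `--output` flag for illustration:

```shell
# Hypothetical CI step that still uses the old command name.
printf 'agentready learn --output skills.json\n' > /tmp/ci-step.sh

# Only the command name changed in the release, so a simple
# substitution is enough; flags carry over unchanged.
sed 's/agentready learn/agentready extract-skills/' /tmp/ci-step.sh
```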
github-actions bot pushed a commit to chambridge/agentready that referenced this pull request Jan 16, 2026
# 1.0.0 (2026-01-16)

### Bug Fixes

* add bounded retry logic for LLM rate limit handling ([ambient-code#205](https://github.com/chambridge/agentready/issues/205)) ([6ecb786](6ecb786)), closes [ambient-code#104](https://github.com/chambridge/agentready/issues/104)
* Add comprehensive subprocess security guardrails (fixes [ambient-code#57](https://github.com/chambridge/agentready/issues/57)) ([ambient-code#66](https://github.com/chambridge/agentready/issues/66)) ([454b80e](454b80e))
* Add comprehensive YAML validation to prevent attacks (fixes [ambient-code#56](https://github.com/chambridge/agentready/issues/56)) ([ambient-code#63](https://github.com/chambridge/agentready/issues/63)) ([31ecb3a](31ecb3a))
* add repository checkout step to Claude Code Action workflow ([17aa0cf](17aa0cf))
* add uv.lock to recognized lockfiles ([ambient-code#143](https://github.com/chambridge/agentready/issues/143)) ([a98dc87](a98dc87)), closes [ambient-code#137](https://github.com/chambridge/agentready/issues/137)
* address P1 code quality issues from code review ([ambient-code#36](https://github.com/chambridge/agentready/issues/36)) ([5976332](5976332))
* address P1 code quality issues from code review ([ambient-code#37](https://github.com/chambridge/agentready/issues/37)) ([4be1d5e](4be1d5e))
* address P1 code quality issues from code review ([ambient-code#38](https://github.com/chambridge/agentready/issues/38)) ([77f2300](77f2300))
* **assessors:** FileSizeLimitsAssessor now respects .gitignore ([ambient-code#248](https://github.com/chambridge/agentready/issues/248)) ([eaaecc2](eaaecc2)), closes [ambient-code#245](https://github.com/chambridge/agentready/issues/245)
* **assessors:** search recursively for OpenAPI specification files ([ambient-code#127](https://github.com/chambridge/agentready/issues/127)) ([e2a5778](e2a5778))
* **ci:** use gh pr view for fork PR number lookup in coverage comment ([ambient-code#253](https://github.com/chambridge/agentready/issues/253)) ([1688362](1688362))
* correct Assessment field name in demo command ([ambient-code#41](https://github.com/chambridge/agentready/issues/41)) ([b48622d](b48622d)), closes [ambient-code#12](https://github.com/chambridge/agentready/issues/12)
* Correct datetime import pattern in RepomixService ([ambient-code#65](https://github.com/chambridge/agentready/issues/65)) ([517aa6e](517aa6e))
* correct GitHub repository link in site navigation ([5492278](5492278))
* correct Liquid syntax in developer-guide (elif -> elsif) ([75f3b1d](75f3b1d))
* Create shared test fixtures and fix Assessment schema issues ([ambient-code#114](https://github.com/chambridge/agentready/issues/114)) ([46baa13](46baa13))
* disable attestations for Test PyPI to avoid conflict ([ambient-code#155](https://github.com/chambridge/agentready/issues/155)) ([a33e3cd](a33e3cd))
* downgrade docker/metadata-action to v5 and fix shellcheck warnings ([12f5509](12f5509))
* prevent unauthorized message for non-command comments ([ambient-code#262](https://github.com/chambridge/agentready/issues/262)) ([84c6f69](84c6f69))
* Prevent XSS in HTML reports (fixes [ambient-code#54](https://github.com/chambridge/agentready/issues/54)) ([ambient-code#62](https://github.com/chambridge/agentready/issues/62)) ([7c60c69](7c60c69))
* rename research report in data directory ([b8ddfdc](b8ddfdc))
* replace all remaining elif with elsif in developer-guide ([73f16fc](73f16fc))
* Resolve 35 pytest failures through model validation and path sanitization improvements ([ambient-code#115](https://github.com/chambridge/agentready/issues/115)) ([4fbfee0](4fbfee0))
* resolve all test suite failures - achieve zero failures ([ambient-code#180](https://github.com/chambridge/agentready/issues/180)) ([990fa2d](990fa2d)), closes [ambient-code#148](https://github.com/chambridge/agentready/issues/148) [ambient-code#147](https://github.com/chambridge/agentready/issues/147) [ambient-code#145](https://github.com/chambridge/agentready/issues/145)
* resolve broken links and workflow failures ([ambient-code#160](https://github.com/chambridge/agentready/issues/160)) ([fbf5cf7](fbf5cf7))
* Resolve merge conflicts in CLI main module ([ambient-code#59](https://github.com/chambridge/agentready/issues/59)) ([9e0bf2d](9e0bf2d))
* resolve YAML syntax error in continuous-learning workflow ([ambient-code#172](https://github.com/chambridge/agentready/issues/172)) ([3d40fcc](3d40fcc))
* resolve YAML syntax error in update-docs workflow and add actionlint ([ambient-code#173](https://github.com/chambridge/agentready/issues/173)) ([97b06af](97b06af))
* Sanitize sensitive data in HTML reports (fixes [ambient-code#58](https://github.com/chambridge/agentready/issues/58)) ([ambient-code#67](https://github.com/chambridge/agentready/issues/67)) ([6fbac76](6fbac76))
* set correct baseurl for GitHub Pages subdirectory deployment ([c4db765](c4db765))
* skip PR comments for external forks to prevent permission errors ([ambient-code#163](https://github.com/chambridge/agentready/issues/163)) ([2a29fb8](2a29fb8))
* update --version flag to show correct version and research report date ([ambient-code#221](https://github.com/chambridge/agentready/issues/221)) ([5a85abb](5a85abb))
* Update Claude workflow to trigger on [@claude](https://github.com/claude) mentions ([ambient-code#35](https://github.com/chambridge/agentready/issues/35)) ([a8a3fab](a8a3fab))
* **workflows:** ensure post-comment step runs after Claude Code Action ([b087e5c](b087e5c))
* **workflows:** handle all event types in agentready-dev workflow ([9b942bf](9b942bf))
* **workflows:** improve error handling and logging for comment posting ([9ea1e6b](9ea1e6b))
* **workflows:** improve issue number extraction and add debug step ([ecd896b](ecd896b))
* **workflows:** remove if:always() to test step execution ([ff0bb12](ff0bb12))
* **workflows:** simplify post-comment step condition ([1bbf40a](1bbf40a))

### Features

* add agentready-dev Claude agent specification ([ambient-code#44](https://github.com/chambridge/agentready/issues/44)) ([0f61f5c](0f61f5c))
* add ambient-code/agentready to leaderboard ([ambient-code#148](https://github.com/chambridge/agentready/issues/148)) ([621152e](621152e))
* Add automated demo command for AgentReady ([ambient-code#24](https://github.com/chambridge/agentready/issues/24)) ([f4e89d9](f4e89d9)), closes [ambient-code#1](https://github.com/chambridge/agentready/issues/1) [ambient-code#25](https://github.com/chambridge/agentready/issues/25) [hi#quality](https://github.com/hi/issues/quality) [hi#scoring](https://github.com/hi/issues/scoring)
* add Claude Code GitHub Action for [@claude](https://github.com/claude) mentions ([3e7224d](3e7224d))
* Add comprehensive unit tests for utility modules (privacy.py and subprocess_utils.py) ([ambient-code#111](https://github.com/chambridge/agentready/issues/111)) ([9d3dece](9d3dece))
* Add customizable HTML report themes with runtime switching ([ambient-code#46](https://github.com/chambridge/agentready/issues/46)) ([7eeaf84](7eeaf84)), closes [hi#contrast](https://github.com/hi/issues/contrast) [ambient-code#10](https://github.com/chambridge/agentready/issues/10)
* Add Doubleagent - specialized AgentReady development agent ([ambient-code#30](https://github.com/chambridge/agentready/issues/30)) ([0ab54cb](0ab54cb))
* add GitHub organization scanning to assess-batch command ([ambient-code#118](https://github.com/chambridge/agentready/issues/118)) ([e306314](e306314))
* add Harbor Terminal-Bench comparison for agent effectiveness ([ambient-code#199](https://github.com/chambridge/agentready/issues/199)) ([a56e318](a56e318))
* Add Interactive Dashboard backlog item ([adfc4c8](adfc4c8))
* add interactive heatmap visualization for batch assessments ([ambient-code#136](https://github.com/chambridge/agentready/issues/136)) ([4d44fc3](4d44fc3))
* Add interactive HTML report generation ([18664ea](18664ea))
* add Memory MCP server allow list to repository settings ([ambient-code#203](https://github.com/chambridge/agentready/issues/203)) ([41d87bb](41d87bb))
* add quay/quay to leaderboard ([ambient-code#162](https://github.com/chambridge/agentready/issues/162)) ([d6e8df0](d6e8df0))
* add release pipeline coldstart prompt ([ambient-code#19](https://github.com/chambridge/agentready/issues/19)) ([9a3880c](9a3880c)), closes [ambient-code#18](https://github.com/chambridge/agentready/issues/18)
* Add Repomix integration for AI-friendly repository context generation ([ambient-code#29](https://github.com/chambridge/agentready/issues/29)) ([92bdde1](92bdde1)), closes [ambient-code#24](https://github.com/chambridge/agentready/issues/24) [ambient-code#1](https://github.com/chambridge/agentready/issues/1) [ambient-code#25](https://github.com/chambridge/agentready/issues/25) [hi#quality](https://github.com/hi/issues/quality) [hi#scoring](https://github.com/hi/issues/scoring)
* add report header with repository metadata ([ambient-code#28](https://github.com/chambridge/agentready/issues/28)) ([7a8b34a](7a8b34a))
* Add research report management CLI commands ([ambient-code#45](https://github.com/chambridge/agentready/issues/45)) ([e1be488](e1be488)), closes [ambient-code#7](https://github.com/chambridge/agentready/issues/7)
* Add security & quality improvements from code review ([ambient-code#40](https://github.com/chambridge/agentready/issues/40)) ([13cd3ca](13cd3ca))
* Add security & quality improvements from code review ([ambient-code#49](https://github.com/chambridge/agentready/issues/49)) ([889d6ed](889d6ed))
* Add SWE-bench experiment system for validating AgentReady impact ([ambient-code#124](https://github.com/chambridge/agentready/issues/124)) ([15edbba](15edbba))
* Add weekly research update skill and automation ([ambient-code#145](https://github.com/chambridge/agentready/issues/145)) ([7ba17a6](7ba17a6))
* **assessors:** implement File Size Limits assessor (Tier 2) ([ambient-code#141](https://github.com/chambridge/agentready/issues/141)) ([248467f](248467f))
* Auto-sync CLAUDE.md during semantic-release ([ambient-code#101](https://github.com/chambridge/agentready/issues/101)) ([36b48cb](36b48cb))
* automate PyPI publishing with trusted publishing (OIDC) ([ambient-code#154](https://github.com/chambridge/agentready/issues/154)) ([71f4632](71f4632)), closes [pypa/#action-pypi-publish](https://github.com/chambridge/agentready/issues/action-pypi-publish)
* Batch Report Enhancements + Bootstrap Template Inheritance (Phase 2 Task 5) ([ambient-code#133](https://github.com/chambridge/agentready/issues/133)) ([7762b23](7762b23))
* Community Leaderboard for AgentReady Scores ([ambient-code#146](https://github.com/chambridge/agentready/issues/146)) ([fea0b3e](fea0b3e))
* Complete Phases 5-7 - Markdown reports, testing, and polish ([7659623](7659623))
* consolidate GitHub Actions workflows by purpose ([ambient-code#217](https://github.com/chambridge/agentready/issues/217)) ([717ca6b](717ca6b)), closes [ambient-code#221](https://github.com/chambridge/agentready/issues/221)
* container support ([ambient-code#171](https://github.com/chambridge/agentready/issues/171)) ([c6874ea](c6874ea))
* convert AgentReady assessment to on-demand workflow ([ambient-code#213](https://github.com/chambridge/agentready/issues/213)) ([b5a1ce0](b5a1ce0)), closes [ambient-code#191](https://github.com/chambridge/agentready/issues/191)
* enhance assessors with multi-language support and security ([ambient-code#200](https://github.com/chambridge/agentready/issues/200)) ([85712f2](85712f2)), closes [ambient-code#10](https://github.com/chambridge/agentready/issues/10)
* Harbor framework integration for Terminal-Bench evaluations ([ambient-code#202](https://github.com/chambridge/agentready/issues/202)) ([d73a8c8](d73a8c8)), closes [ambient-code#4](https://github.com/chambridge/agentready/issues/4) [ambient-code#178](https://github.com/chambridge/agentready/issues/178) [ambient-code#178](https://github.com/chambridge/agentready/issues/178)
* Implement AgentReady MVP with scoring engine ([54a96cb](54a96cb))
* Implement align subcommand for automated remediation (Issue [ambient-code#14](https://github.com/chambridge/agentready/issues/14)) ([ambient-code#34](https://github.com/chambridge/agentready/issues/34)) ([06f04dc](06f04dc))
* Implement ArchitectureDecisionsAssessor (fixes [ambient-code#81](https://github.com/chambridge/agentready/issues/81)) ([ambient-code#89](https://github.com/chambridge/agentready/issues/89)) ([9e782e5](9e782e5))
* implement automated semantic release pipeline ([ambient-code#20](https://github.com/chambridge/agentready/issues/20)) ([b579235](b579235))
* implement bootstrap command for GitHub infrastructure ([0af06c4](0af06c4)), closes [ambient-code#2](https://github.com/chambridge/agentready/issues/2)
* Implement BranchProtectionAssessor stub (fixes [ambient-code#86](https://github.com/chambridge/agentready/issues/86)) ([ambient-code#98](https://github.com/chambridge/agentready/issues/98)) ([44c4b17](44c4b17))
* Implement CICDPipelineVisibilityAssessor (fixes [ambient-code#85](https://github.com/chambridge/agentready/issues/85)) ([ambient-code#91](https://github.com/chambridge/agentready/issues/91)) ([e68285c](e68285c))
* Implement CodeSmellsAssessor stub (fixes [ambient-code#87](https://github.com/chambridge/agentready/issues/87)) ([ambient-code#99](https://github.com/chambridge/agentready/issues/99)) ([f06b2a8](f06b2a8))
* Implement ConciseDocumentationAssessor (fixes [ambient-code#76](https://github.com/chambridge/agentready/issues/76)) ([ambient-code#93](https://github.com/chambridge/agentready/issues/93)) ([c356cd5](c356cd5))
* Implement InlineDocumentationAssessor (fixes [ambient-code#77](https://github.com/chambridge/agentready/issues/77)) ([ambient-code#94](https://github.com/chambridge/agentready/issues/94)) ([e56e570](e56e570))
* Implement IssuePRTemplatesAssessor (fixes [ambient-code#84](https://github.com/chambridge/agentready/issues/84)) ([ambient-code#90](https://github.com/chambridge/agentready/issues/90)) ([819d7b7](819d7b7))
* Implement multi-repository batch assessment (Phase 1 of issue [ambient-code#68](https://github.com/chambridge/agentready/issues/68)) ([ambient-code#74](https://github.com/chambridge/agentready/issues/74)) ([befc0d5](befc0d5))
* Implement OneCommandSetupAssessor (fixes [ambient-code#75](https://github.com/chambridge/agentready/issues/75)) ([ambient-code#88](https://github.com/chambridge/agentready/issues/88)) ([668ba1b](668ba1b))
* Implement OpenAPISpecsAssessor (fixes [ambient-code#80](https://github.com/chambridge/agentready/issues/80)) ([ambient-code#97](https://github.com/chambridge/agentready/issues/97)) ([45ae36e](45ae36e))
* implement Phase 2 multi-repository assessment reporting ([ambient-code#117](https://github.com/chambridge/agentready/issues/117)) ([8da56c2](8da56c2)), closes [ambient-code#69](https://github.com/chambridge/agentready/issues/69)
* implement report schema versioning ([ambient-code#43](https://github.com/chambridge/agentready/issues/43)) ([4c4752c](4c4752c))
* Implement SemanticNamingAssessor (fixes [ambient-code#82](https://github.com/chambridge/agentready/issues/82)) ([ambient-code#95](https://github.com/chambridge/agentready/issues/95)) ([d87a280](d87a280))
* Implement SeparationOfConcernsAssessor (fixes [ambient-code#78](https://github.com/chambridge/agentready/issues/78)) ([ambient-code#92](https://github.com/chambridge/agentready/issues/92)) ([99bfe28](99bfe28))
* Implement StructuredLoggingAssessor (fixes [ambient-code#79](https://github.com/chambridge/agentready/issues/79)) ([ambient-code#96](https://github.com/chambridge/agentready/issues/96)) ([2b87ca7](2b87ca7))
* Phase 1 Task 1 - Consolidate Security Validation Patterns ([ambient-code#129](https://github.com/chambridge/agentready/issues/129)) ([8580c45](8580c45)), closes [ambient-code#122](https://github.com/chambridge/agentready/issues/122)
* Phase 1 Tasks 2-3 - Consolidate Reporter Base & Assessor Factory ([ambient-code#131](https://github.com/chambridge/agentready/issues/131)) ([8e12bf9](8e12bf9)), closes [ambient-code#122](https://github.com/chambridge/agentready/issues/122)
* Phase 2 Task 4 - Replace manual config validation with Pydantic ([ambient-code#134](https://github.com/chambridge/agentready/issues/134)) ([d83cf58](d83cf58))
* Redesign homepage features with two-column layout and research links ([ambient-code#189](https://github.com/chambridge/agentready/issues/189)) ([570087d](570087d)), closes [ambient-code#187](https://github.com/chambridge/agentready/issues/187)
* redesign HTML report with dark theme and larger fonts ([ambient-code#39](https://github.com/chambridge/agentready/issues/39)) ([59f6702](59f6702))
* Rename 'learn' command to 'extract-skills' for clarity ([ambient-code#125](https://github.com/chambridge/agentready/issues/125)) ([64d6563](64d6563)), closes [ambient-code#123](https://github.com/chambridge/agentready/issues/123)
* replace markdown-link-check with lychee for link validation ([ambient-code#177](https://github.com/chambridge/agentready/issues/177)) ([f1a4545](f1a4545))
* Standardize on Python 3.12+ with forward compatibility for 3.13 ([ambient-code#132](https://github.com/chambridge/agentready/issues/132)) ([84f2c46](84f2c46))
* Terminal-Bench eval harness (MVP Phase 1) ([ambient-code#178](https://github.com/chambridge/agentready/issues/178)) ([d06bab4](d06bab4)), closes [ambient-code#171](https://github.com/chambridge/agentready/issues/171)
* **workflows:** add comment posting for [@agentready-dev](https://github.com/agentready-dev) agent ([5dff614](5dff614))
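
The Pydantic migration listed above (Phase 2 Task 4, replacing manual config validation) can be sketched as follows. This is a minimal illustration, not AgentReady's actual schema: the `AssessConfig` model and its field names are hypothetical.

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical config model: declarative constraints replace
# hand-written "if not isinstance(...)" validation code.
class AssessConfig(BaseModel):
    repo_path: str
    max_workers: int = Field(default=4, ge=1, le=32)
    fail_below: float = Field(default=0.0, ge=0.0, le=100.0)

errors = []
try:
    # max_workers=0 violates the ge=1 constraint, so this raises.
    AssessConfig(repo_path=".", max_workers=0)
except ValidationError as exc:
    errors = exc.errors()  # structured error records, one per bad field
```

Pydantic reports every failing field with its location, which makes config error messages far more actionable than a single manually raised `ValueError`.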

### Performance Improvements

* implement lazy loading for heavy CLI commands ([ambient-code#151](https://github.com/chambridge/agentready/issues/151)) ([6a7cd4e](6a7cd4e))
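
The lazy-loading change above can be sketched with a small wrapper that defers importing a command's module until the command is actually invoked. This is an illustrative pattern, not the project's actual implementation; the command table below is hypothetical (it wraps `hashlib` just to stay self-contained).

```python
import importlib

class LazyCommand:
    """Defer importing a heavy module until the command first runs."""

    def __init__(self, module_name, attr):
        self.module_name = module_name
        self.attr = attr
        self._fn = None

    def __call__(self, *args, **kwargs):
        if self._fn is None:
            # The import cost is paid here, on first invocation,
            # instead of at CLI startup for every command.
            module = importlib.import_module(self.module_name)
            self._fn = getattr(module, self.attr)
        return self._fn(*args, **kwargs)

# Hypothetical command registry: startup imports none of these modules.
COMMANDS = {"checksum": LazyCommand("hashlib", "sha256")}

digest = COMMANDS["checksum"](b"agentready").hexdigest()
```

With this shape, `--help` and unrelated subcommands never pay the import cost of heavy dependencies.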

### BREAKING CHANGES

* Users must update scripts from 'agentready learn' to 'agentready extract-skills'. All flags and options remain identical.
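
Since only the subcommand name changed, the migration is mechanical. A sketch of updating an existing script (the script name and flags here are hypothetical examples):

```shell
# Hypothetical script that still uses the removed subcommand name.
printf 'agentready learn --output skills.json\n' > run-skills.sh

# Only the subcommand name changes; all flags stay as-is.
sed -i 's/agentready learn/agentready extract-skills/' run-skills.sh

cat run-skills.sh
```
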