
@jeremyeder (Contributor)

Summary

Implements a community-driven leaderboard where users can submit their AgentReady assessment results via CLI. Submissions are validated through GitHub Actions and displayed on the existing GitHub Pages site.

Features

Self-Service Submission: the agentready submit command creates a PR automatically
Anti-Gaming Validation: Ownership verification + re-assessment
Multiple Views: Overall, by-language, by-size, most-improved
Historical Tracking: Multiple submissions per repo show improvement over time
Integrated with Docs: Leaderboard pages added to existing Jekyll site

Components

  1. CLI Submit Command (src/agentready/cli/submit.py)

    • Uses PyGithub to create the PR automatically (see the sketch after this list)
    • Verifies user has commit access to submitted repo
    • Generates unique timestamp-based filenames
  2. Validation Workflow (.github/workflows/validate-leaderboard-submission.yml)

    • Re-runs assessment on submitted repository
    • Compares claimed vs actual score (±2 point tolerance)
    • Checks repository is public and submitter has access
    • Comments on PR with validation results
  3. Aggregation Script (scripts/generate-leaderboard-data.py)

    • Scans submissions/ directory
    • Generates docs/_data/leaderboard.json
    • Calculates rankings by score, language, size
    • Identifies "most improved" repositories
  4. Leaderboard Pages (docs/leaderboard/)

    • Main leaderboard with top 10 cards + full table
    • By-language rankings
    • Most-improved tracking
    • Tier-based color coding (Platinum/Gold/Silver/Bronze)
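
A minimal sketch of the submission flow, assuming PyGithub and illustrative names (`LEADERBOARD_REPO`, `submit_assessment`); the real `submit.py` may differ:

```python
import os
from pathlib import Path

from github import Github  # PyGithub

LEADERBOARD_REPO = "agentready/agentready"  # assumption: upstream repo hosting submissions/


def submit_assessment(target_repo: str, assessment: Path) -> str:
    """Open a leaderboard PR for `target_repo` using a local assessment JSON file."""
    gh = Github(os.environ["GITHUB_TOKEN"])
    user = gh.get_user()

    # Ownership check: submitter must have push access to the repo they are submitting.
    repo = gh.get_repo(target_repo)
    if repo.get_collaborator_permission(user.login) not in ("admin", "write"):
        raise PermissionError(f"{user.login} has no commit access to {target_repo}")

    upstream = gh.get_repo(LEADERBOARD_REPO)
    fork = upstream.create_fork()  # no-op if the fork already exists; fork creation can lag

    # Branch off the fork's default branch and add the assessment file.
    base_sha = fork.get_branch(fork.default_branch).commit.sha
    branch = f"leaderboard/{repo.owner.login}-{repo.name}"
    fork.create_git_ref(ref=f"refs/heads/{branch}", sha=base_sha)
    dest = f"submissions/{repo.owner.login}/{repo.name}/{assessment.name}"
    fork.create_file(dest, f"Add leaderboard submission for {target_repo}",
                     assessment.read_text(), branch=branch)

    pr = upstream.create_pull(
        title=f"Leaderboard submission: {target_repo}",
        body="Submitted via `agentready submit`",
        head=f"{user.login}:{branch}",
        base=upstream.default_branch,
    )
    return pr.html_url
```

The actual command also handles dry-run mode, rate limiting, and richer error messages, which this sketch omits.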

Implementation Plan

See complete specification: specs/leaderboard-feature-spec.md

Phase 1: CLI Submit Command (foundation)
Phase 2: Validation Workflow (anti-gaming)
Phase 3: Aggregation Script (data pipeline)
Phase 4: Leaderboard Pages (UI)

Example Usage

# Developer workflow
cd ~/my-awesome-project

# 1. Run assessment
agentready assess .

# 2. Submit to leaderboard
export GITHUB_TOKEN=ghp_xxxxx
agentready submit

# 3. Wait for validation → merge → leaderboard updates

Design Decisions

  • ISO 8601-Style Timestamps: 2025-12-03T14-30-45-assessment.json (unique, sortable; colons replaced with dashes for filename safety; see the sketch after this list)
  • Submission Path: submissions/{org}/{repo}/{timestamp}-assessment.json
  • Score Tolerance: ±2 points (accounts for minor variations)
  • Rate Limiting: 1 submission per repo per 24 hours
  • Static Site Generation: Leverages existing Jekyll/GitHub Pages
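
A small sketch of how the timestamped submission path could be built (the helper name is illustrative):

```python
from datetime import datetime, timezone
from pathlib import Path


def submission_path(org: str, repo: str) -> Path:
    # ISO 8601-style UTC timestamp with colons replaced by dashes so the
    # name is a valid filename everywhere, e.g. 2025-12-03T14-30-45
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%S")
    return Path("submissions") / org / repo / f"{stamp}-assessment.json"
```

Because the timestamp sorts lexicographically, plain `sorted()` over these filenames also yields chronological order, which the aggregation step can rely on.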

Security & Anti-Gaming

  • Ownership verification (submitter must have commit access)
  • Re-assessment on validation (never trust submitted scores; see the tolerance sketch after this list)
  • Public repo requirement (transparent)
  • Sandboxed assessment runs (isolated /tmp directory)
  • Rate limiting per repository
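
For illustration only, the score check at the heart of the re-assessment step could be as small as this (function and constant names are assumptions, not the workflow's actual code):

```python
TOLERANCE = 2.0  # points; matches the ±2 design decision above


def score_matches(claimed: float, reassessed: float, tolerance: float = TOLERANCE) -> bool:
    """Accept a submission only when a fresh assessment agrees with the claimed score."""
    return abs(claimed - reassessed) <= tolerance


# e.g. score_matches(80.9, 79.4) -> True, score_matches(80.9, 77.0) -> False
```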

Future Enhancements (Out of Scope)

  • Badges for README ([![AgentReady Score](https://img.shields.io/badge/...)](...))
  • API endpoint for programmatic access
  • Historical charts (score trends over time)
  • Filters/search on leaderboard UI

Ready for Review: The spec is complete and implementation-ready. This PR will be updated with actual implementation commits following the 4-phase plan.

🤖 Generated with Claude Code

Complete cold-start implementation guide for community leaderboard:
- CLI submit command (agentready submit)
- GitHub Action validation workflow
- Leaderboard aggregation script
- Jekyll pages integrated with existing docs

Enables self-service repository submissions with anti-gaming measures.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: c443dea5
Assessed: December 03, 2025 at 9:44 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 137 files
  • Markdown: 99 files
  • YAML: 21 files
  • JSON: 9 files
  • Shell: 6 files

Repository Stats

  • Total Files: 317
  • Total Lines: 175,959

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| OpenAPI/Swagger Specifications | T3 | ⊘ not_applicable | — |

Build & Development

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| One-Command Build/Setup | T2 | ✅ pass | 100 |
| Container/Virtualization Setup | T4 | ⊘ not_applicable | — |

Code Organization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Separation of Concerns | T2 | ✅ pass | 98 |

Code Quality

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Type Annotations | T1 | ❌ fail | 41 |
| Cyclomatic Complexity Thresholds | T3 | ✅ pass | 100 |
| Semantic Naming | T3 | ✅ pass | 100 |
| Structured Logging | T3 | ❌ fail | 0 |
| Code Smell Elimination | T4 | ⊘ not_applicable | — |

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 449/1369
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| CLAUDE.md Configuration Files | T1 | ✅ pass | 100 |
| File Size Limits | T2 | ❌ fail | 55 |

❌ File Size Limits

Measured: 2 huge, 8 large out of 137 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.5% of 137 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Lock Files for Reproducibility | T1 | ✅ pass | 100 |
| Dependency Freshness & Security | T2 | ⊘ not_applicable | — |

Documentation

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Concise Documentation | T2 | ❌ fail | 70 |
| Inline Documentation | T2 | ✅ pass | 100 |

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .
```

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]



### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

Examples:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>

Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

🎯 Next Steps

Priority Improvements (highest impact first):

  1. Type Annotations (Tier 1) - +10.0 points potential
    • Add type annotations to function signatures
  2. Conventional Commit Messages (Tier 2) - +3.0 points potential
    • Configure conventional commits with commitlint
  3. File Size Limits (Tier 2) - +3.0 points potential
    • Refactor large files into smaller, focused modules
  4. Concise Documentation (Tier 2) - +3.0 points potential
    • Make documentation more concise and structured
  5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
    • Create Architecture Decision Records (ADRs) directory and document key decisions

📝 Assessment Metadata

  • Tool Version: AgentReady v1.0.0
  • Research Report: Bundled version
  • Repository Snapshot: c443dea
  • Assessment Duration: 1.4s

🤖 Generated with Claude Code

jeremyeder and others added 2 commits December 3, 2025 16:48
- Add PyGithub>=2.1.1 dependency for GitHub API integration
- Create submit.py with full PR creation workflow
- Validates GitHub token, assessment file, and repository access
- Generates unique ISO 8601 timestamp filenames
- Verifies submitter has commit access to repository
- Creates fork, branch, commits assessment, and opens PR
- Includes dry-run mode for testing
- Comprehensive error handling and user guidance

Submission workflow:
1. User runs: agentready submit
2. Command validates token and assessment
3. Verifies user has commit access to repo
4. Creates PR to agentready/agentready automatically
5. Validation workflow will run (next phase)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
**Phase 2**: Validation GitHub Action
- validate-leaderboard-submission.yml workflow
- Validates JSON schema, repo access, score accuracy (±2 tolerance)
- Re-runs assessment for verification
- Posts validation results as PR comments
- Secure: all user input via environment variables

**Phase 3**: Aggregation Script & Workflow (a minimal sketch follows this list)
- scripts/generate-leaderboard-data.py for data generation
- Scans submissions/ directory, groups by repository
- Generates docs/_data/leaderboard.json for Jekyll
- Calculates overall, by-language, by-size, most-improved rankings
- update-leaderboard.yml workflow triggers on submissions merge
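
A minimal sketch of the aggregation idea, assuming the directory layout described above and an `overall_score` field in each submission JSON (field and file names are assumptions):

```python
import json
from collections import defaultdict
from pathlib import Path


def build_leaderboard(submissions: Path = Path("submissions"),
                      output: Path = Path("docs/_data/leaderboard.json")) -> None:
    # Group submissions by org/repo; sorted() keeps the timestamped files in order.
    history: dict[str, list[dict]] = defaultdict(list)
    for path in sorted(submissions.glob("*/*/*-assessment.json")):
        org, repo = path.parts[-3], path.parts[-2]
        history[f"{org}/{repo}"].append(json.loads(path.read_text()))

    entries = []
    for full_name, runs in history.items():
        first, latest = runs[0], runs[-1]
        entries.append({
            "repository": full_name,
            "score": latest.get("overall_score"),
            "improvement": round(latest.get("overall_score", 0) - first.get("overall_score", 0), 1),
            "submissions": len(runs),
        })

    # Rank by score for the main view; by-language and by-size views would
    # filter/group the same entries before sorting.
    entries.sort(key=lambda e: e["score"] or 0, reverse=True)
    output.write_text(json.dumps({"entries": entries}, indent=2))
```
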

**Phase 4**: Jekyll Leaderboard Pages
- docs/leaderboard/index.md with top 10 cards + full table
- Tier-based color coding (Platinum/Gold/Silver/Bronze)
- Responsive CSS styling in docs/assets/css/leaderboard.css
- Added to navigation in _config.yml
- Empty leaderboard.json for initial build

**Code Quality**:
- All files formatted with black
- Imports sorted with isort
- Ruff linting passed
- Security: no command injection vulnerabilities in workflows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

github-actions bot commented Dec 3, 2025

⚠️ Broken links found in documentation. See workflow logs for details.


github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: e2505626
Assessed: December 03, 2025 at 9:53 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,255

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

Examples:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky

Performance

Attribute Tier Status Score
Performance Benchmarks T4 ⊘ not_applicable

Repository Structure

Attribute Tier Status Score
Standard Project Layouts T1 ✅ pass 100
Issue & Pull Request Templates T3 ✅ pass 100
Separation of Concerns T2 ⊘ not_applicable

Security

Attribute Tier Status Score
Security Scanning Automation T4 ⊘ not_applicable

Testing & CI/CD

Attribute Tier Status Score
Test Coverage Requirements T2 ✅ pass 100
Pre-commit Hooks & CI/CD Linting T2 ✅ pass 100
CI/CD Pipeline Visibility T3 ✅ pass 80

🎯 Next Steps

Priority Improvements (highest impact first):

  1. Type Annotations (Tier 1) - +10.0 points potential
    • Add type annotations to function signatures
  2. Conventional Commit Messages (Tier 2) - +3.0 points potential
    • Configure conventional commits with commitlint
  3. File Size Limits (Tier 2) - +3.0 points potential
    • Refactor large files into smaller, focused modules
  4. Concise Documentation (Tier 2) - +3.0 points potential
    • Make documentation more concise and structured
  5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
    • Create Architecture Decision Records (ADRs) directory and document key decisions

📝 Assessment Metadata

  • Tool Version: AgentReady v1.0.0
  • Research Report: Bundled version
  • Repository Snapshot: e250562
  • Assessment Duration: 1.4s

🤖 Generated with Claude Code

- Replace template literals with string concatenation in github-script
- Add missing newlines at end of workflow files
- Fix ruff E402 error in regenerate_heatmap.py (import after sys.path)
- Ensures pre-commit check-yaml passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

github-actions bot commented Dec 3, 2025

⚠️ Broken links found in documentation. See workflow logs for details.


github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 5f5d218a
Assessed: December 03, 2025 at 9:56 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,282

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

Examples:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky

Performance

Attribute Tier Status Score
Performance Benchmarks T4 ⊘ not_applicable

Repository Structure

Attribute Tier Status Score
Standard Project Layouts T1 ✅ pass 100
Issue & Pull Request Templates T3 ✅ pass 100
Separation of Concerns T2 ⊘ not_applicable

Security

Attribute Tier Status Score
Security Scanning Automation T4 ⊘ not_applicable

Testing & CI/CD

Attribute Tier Status Score
Test Coverage Requirements T2 ✅ pass 100
Pre-commit Hooks & CI/CD Linting T2 ✅ pass 100
CI/CD Pipeline Visibility T3 ✅ pass 80

🎯 Next Steps

Priority Improvements (highest impact first):

  1. Type Annotations (Tier 1) - +10.0 points potential
    • Add type annotations to function signatures
  2. Conventional Commit Messages (Tier 2) - +3.0 points potential
    • Configure conventional commits with commitlint
  3. File Size Limits (Tier 2) - +3.0 points potential
    • Refactor large files into smaller, focused modules
  4. Concise Documentation (Tier 2) - +3.0 points potential
    • Make documentation more concise and structured
  5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
    • Create Architecture Decision Records (ADRs) directory and document key decisions

📝 Assessment Metadata

  • Tool Version: AgentReady v1.0.0
  • Research Report: Bundled version
  • Repository Snapshot: 5f5d218
  • Assessment Duration: 1.4s

🤖 Generated with Claude Code

jeremyeder and others added 2 commits December 3, 2025 16:57
Add regex pattern to ignore {{ variable }} syntax in markdown-link-check.
This prevents false positives on Jekyll/Liquid template variables like
{{ entry.url }} in leaderboard pages.

Fixes docs-lint workflow failure on PR #146.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Validates links in docs/ markdown files locally before push
- Uses same .markdown-link-check.json config as CI workflow
- Prevents broken link issues from reaching CI

Now runs on every commit that modifies docs/*.md files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: ea9d4504
Assessed: December 03, 2025 at 9:58 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,285

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

Examples:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky

Performance

Attribute Tier Status Score
Performance Benchmarks T4 ⊘ not_applicable

Repository Structure

Attribute Tier Status Score
Standard Project Layouts T1 ✅ pass 100
Issue & Pull Request Templates T3 ✅ pass 100
Separation of Concerns T2 ⊘ not_applicable

Security

Attribute Tier Status Score
Security Scanning Automation T4 ⊘ not_applicable

Testing & CI/CD

Attribute Tier Status Score
Test Coverage Requirements T2 ✅ pass 100
Pre-commit Hooks & CI/CD Linting T2 ✅ pass 100
CI/CD Pipeline Visibility T3 ✅ pass 80

🎯 Next Steps

Priority Improvements (highest impact first):

  1. Type Annotations (Tier 1) - +10.0 points potential
    • Add type annotations to function signatures
  2. Conventional Commit Messages (Tier 2) - +3.0 points potential
    • Configure conventional commits with commitlint
  3. File Size Limits (Tier 2) - +3.0 points potential
    • Refactor large files into smaller, focused modules
  4. Concise Documentation (Tier 2) - +3.0 points potential
    • Make documentation more concise and structured
  5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
    • Create Architecture Decision Records (ADRs) directory and document key decisions

📝 Assessment Metadata

  • Tool Version: AgentReady v1.0.0
  • Research Report: Bundled version
  • Repository Snapshot: ea9d450
  • Assessment Duration: 1.4s

🤖 Generated with Claude Code


github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 4791b200
Assessed: December 03, 2025 at 9:59 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,291

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

Examples:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky

</details>

### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

Priority Improvements (highest impact first):

1. Type Annotations (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. Conventional Commit Messages (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. File Size Limits (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. Concise Documentation (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version
- Repository Snapshot: 4791b20
- Assessment Duration: 1.4s

🤖 Generated with Claude Code

Track which ruleset version was used for each assessment to ensure
fair comparisons. Scores are only directly comparable when assessed
with the same research version, as attributes and weights may change.

**Changes**:

1. **Assessment Model**:
   - Add `research_version` field to AssessmentMetadata
   - Scanner loads version from ResearchLoader during assessment
   - Captured in every assessment JSON file

2. **Leaderboard Data**:
   - Aggregation script extracts `research_version` from submissions
   - Includes version in history for tracking changes over time
   - Added to leaderboard.json for Jekyll display

3. **Leaderboard Display**:
   - New "Ruleset" column in leaderboard table
   - Shows research version for each submission
   - Helps users understand scoring context

4. **Validation Workflow**:
   - Extracts research version from submitted assessment
   - Compares claimed vs actual research version
   - Warns if versions differ (scores not directly comparable)
   - PR comment includes version info and mismatch warnings

**Why This Matters**:
- Research versions may add/remove/reweight attributes
- Comparing scores across versions can be misleading
- Users can now see exactly which ruleset was used
- Historical tracking shows how repositories improve
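
A rough Python sketch of the model-side change described above; `research_version`, `AssessmentMetadata`, and `ResearchLoader` come from this change, but the surrounding field names and dataclass shape are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class AssessmentMetadata:
    agentready_version: str
    research_version: str  # ruleset version captured at assessment time
    repository_snapshot: str
    assessment_duration_seconds: float


# The scanner would populate the field from the loaded ruleset, e.g.:
# metadata = AssessmentMetadata(..., research_version=research_loader.version)
```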

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 9342896d
Assessed: December 03, 2025 at 10:05 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.4s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,346

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}
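
One way to make the mypy step above stick is to commit the configuration; a minimal sketch, assuming a pyproject.toml-based setup (options shown are a starting point, tighten them as coverage grows):

```bash
# Append a minimal mypy section to pyproject.toml
cat >> pyproject.toml << 'EOF'

[tool.mypy]
python_version = "3.11"
disallow_untyped_defs = true
warn_return_any = true
EOF

# Run the checker and add annotations incrementally
mypy src/
```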

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py
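
A small sketch of how the split above can stay backwards-compatible; the module and class names are illustrative, not from this repository:

```python
# models/__init__.py — re-export so existing `from models import User` keeps working
from .user import User
from .product import Product
from .order import Order

__all__ = ["User", "Product", "Order"]
```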

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples**:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>

### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

Priority Improvements (highest impact first):

1. Type Annotations (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. Conventional Commit Messages (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. File Size Limits (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. Concise Documentation (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version
- Repository Snapshot: 9342896
- Assessment Duration: 1.4s

🤖 Generated with Claude Code

Both the HTML and Markdown reporters were displaying a hardcoded 'v1.0.0'
instead of the actual AgentReady and research versions from metadata.

**Changes**:
- Markdown footer: Use metadata.agentready_version and metadata.research_version
- HTML footer: Same + add assessed_by and assessment_timestamp_human
- Now shows accurate version info for reproducibility

**Before**:
- Tool Version: AgentReady v1.0.0
- Research Report: Bundled version

**After**:
- AgentReady Version: v2.8.1
- Research Version: v1.2.0
- Assessed By: jeder@hostname
- Assessment Date: December 3, 2025 at 2:30 PM
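
An illustrative sketch of the corrected footer rendering; the metadata attribute names come from the change notes above, but the function itself is hypothetical:

```python
def render_footer(metadata) -> str:
    # Pull real versions from assessment metadata instead of a hardcoded string
    return (
        f"- AgentReady Version: v{metadata.agentready_version}\n"
        f"- Research Version: v{metadata.research_version}\n"
        f"- Assessed By: {metadata.assessed_by}\n"
        f"- Assessment Date: {metadata.assessment_timestamp_human}\n"
    )
```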

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 1a4db542
Assessed: December 03, 2025 at 10:07 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.7s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,351

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1184 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples**:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>

### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

Priority Improvements (highest impact first):

1. Type Annotations (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. Conventional Commit Messages (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. File Size Limits (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. Concise Documentation (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- AgentReady Version: v2.8.1
- Research Version: v1.0.0
- Repository Snapshot: 1a4db54
- Assessment Duration: 1.7s
- Assessed By: runner@runnervmoqczp
- Assessment Date: December 03, 2025 at 10:07 PM

🤖 Generated with Claude Code

- Add research_version parameter to all AssessmentMetadata.create() calls in tests
- Add graceful fallback for None metadata in Markdown reporter footer
- Add conditional check for None metadata in HTML template footer
- Fixes test failures from metadata signature change
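
A minimal sketch of the graceful fallback described above; the structure is assumed from these notes, not copied from the reporter:

```python
# Render version lines even when older assessments carry no metadata
if metadata is None:
    version_line = "- AgentReady Version: unknown"
    research_line = "- Research Version: unknown"
else:
    version_line = f"- AgentReady Version: v{metadata.agentready_version}"
    research_line = f"- Research Version: v{metadata.research_version}"
```
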
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🤖 AgentReady Assessment Report

Repository: agentready
Path: /home/runner/work/agentready/agentready
Branch: HEAD | Commit: 364b5e2c
Assessed: December 03, 2025 at 10:18 PM
AgentReady Version: 2.8.1
Run by: runner@runnervmoqczp


📊 Summary

| Metric | Value |
|--------|-------|
| Overall Score | 80.9/100 |
| Certification Level | Gold |
| Attributes Assessed | 20/30 |
| Attributes Not Assessed | 10 |
| Assessment Duration | 1.3s |

Languages Detected

  • Python: 139 files
  • Markdown: 103 files
  • YAML: 23 files
  • JSON: 10 files
  • Shell: 6 files

Repository Stats

  • Total Files: 328
  • Total Lines: 178,372

🎖️ Certification Ladder

  • 💎 Platinum (90-100)
  • 🥇 Gold (75-89) → YOUR LEVEL ←
  • 🥈 Silver (60-74)
  • 🥉 Bronze (40-59)
  • ⚠️ Needs Improvement (0-39)

📋 Detailed Findings

API Documentation

Attribute Tier Status Score
OpenAPI/Swagger Specifications T3 ⊘ not_applicable

Build & Development

Attribute Tier Status Score
One-Command Build/Setup T2 ✅ pass 100
Container/Virtualization Setup T4 ⊘ not_applicable

Code Organization

Attribute Tier Status Score
Separation of Concerns T2 ✅ pass 98

Code Quality

Attribute Tier Status Score
Type Annotations T1 ❌ fail 41
Cyclomatic Complexity Thresholds T3 ✅ pass 100
Semantic Naming T3 ✅ pass 100
Structured Logging T3 ❌ fail 0
Code Smell Elimination T4 ⊘ not_applicable

❌ Type Annotations

Measured: 32.8% (Threshold: ≥80%)

Evidence:

  • Typed functions: 451/1373
  • Coverage: 32.8%
📝 Remediation Steps

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

❌ Structured Logging

Measured: not configured (Threshold: structured logging library)

Evidence:

  • No structured logging library found
  • Checked files: pyproject.toml
  • Using built-in logging module (unstructured)
📝 Remediation Steps

Add structured logging library for machine-parseable logs

  1. Choose structured logging library (structlog for Python, winston for Node.js)
  2. Install library and configure JSON formatter
  3. Add standard fields: timestamp, level, message, context
  4. Include request context: request_id, user_id, session_id
  5. Use consistent field naming (snake_case for Python)
  6. Never log sensitive data (passwords, tokens, PII)
  7. Configure different formats for dev (pretty) and prod (JSON)

Commands:

# Install structlog
pip install structlog

# Configure structlog
# See examples for configuration

Examples:

# Python with structlog
import structlog

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

# Good: Structured logging
logger.info(
    "user_login",
    user_id="123",
    email="user@example.com",
    ip_address="192.168.1.1"
)

# Bad: Unstructured logging
logger.info(f"User {user_id} logged in from {ip}")

Context Window Optimization

Attribute Tier Status Score
CLAUDE.md Configuration Files T1 ✅ pass 100
File Size Limits T2 ❌ fail 56

❌ File Size Limits

Measured: 2 huge, 8 large out of 139 (Threshold: <5% files >500 lines, 0 files >1000 lines)

Evidence:

  • Found 2 files >1000 lines (1.4% of 139 files)
  • Largest: tests/unit/test_models.py (1192 lines)
📝 Remediation Steps

Refactor large files into smaller, focused modules

  1. Identify files >1000 lines
  2. Split into logical submodules
  3. Extract classes/functions into separate files
  4. Maintain single responsibility principle

Examples:

# Split large file:
# models.py (1500 lines) → models/user.py, models/product.py, models/order.py

Dependency Management

Attribute Tier Status Score
Lock Files for Reproducibility T1 ✅ pass 100
Dependency Freshness & Security T2 ⊘ not_applicable

Documentation

Attribute Tier Status Score
Concise Documentation T2 ❌ fail 70
Inline Documentation T2 ✅ pass 100

❌ Concise Documentation

Measured: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 276 lines (excellent)
  • Heading density: 14.5 per 100 lines (target: 3-5)
  • 1 paragraph exceeds 10 lines (wall of text)
📝 Remediation Steps

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
```bash
pip install -e .
agentready assess .

Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

Documentation

See docs/ for detailed guides.

Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]


</details>

### Documentation Standards

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| README Structure | T1 | ✅ pass | 100 |
| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 |
| Architecture Decision Records | T3 | ⊘ not_applicable | — |

#### ❌ Architecture Decision Records (ADRs)

**Measured**: no ADR directory (Threshold: ADR directory with decisions)

**Evidence**:
- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

<details><summary><strong>📝 Remediation Steps</strong></summary>


Create Architecture Decision Records (ADRs) directory and document key decisions

1. Create docs/adr/ directory in repository root
2. Use Michael Nygard ADR template or MADR format
3. Document each significant architectural decision
4. Number ADRs sequentially (0001-*.md, 0002-*.md)
5. Include Status, Context, Decision, and Consequences sections
6. Update ADR status when decisions are revised (Superseded, Deprecated)

**Commands**:

```bash
# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF
```

**Examples**:

# Example ADR Structure

```markdown
# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups
```

</details>

### Git & Version Control

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Conventional Commit Messages | T2 | ❌ fail | 0 |
| .gitignore Completeness | T2 | ✅ pass | 100 |
| Branch Protection Rules | T4 | ⊘ not_applicable | — |
| Issue & Pull Request Templates | T4 | ⊘ not_applicable | — |

#### ❌ Conventional Commit Messages

**Measured**: not configured (Threshold: configured)

**Evidence**:
- No commitlint or husky configuration

<details><summary><strong>📝 Remediation Steps</strong></summary>


Configure conventional commits with commitlint

1. Install commitlint
2. Configure husky for commit-msg hook

**Commands**:

```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
```

</details>

### Performance

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Performance Benchmarks | T4 | ⊘ not_applicable | — |

### Repository Structure

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Standard Project Layouts | T1 | ✅ pass | 100 |
| Issue & Pull Request Templates | T3 | ✅ pass | 100 |
| Separation of Concerns | T2 | ⊘ not_applicable | — |

### Security

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Security Scanning Automation | T4 | ⊘ not_applicable | — |

### Testing & CI/CD

| Attribute | Tier | Status | Score |
|-----------|------|--------|-------|
| Test Coverage Requirements | T2 | ✅ pass | 100 |
| Pre-commit Hooks & CI/CD Linting | T2 | ✅ pass | 100 |
| CI/CD Pipeline Visibility | T3 | ✅ pass | 80 |

## 🎯 Next Steps

Priority Improvements (highest impact first):

1. Type Annotations (Tier 1) - +10.0 points potential
   - Add type annotations to function signatures
2. Conventional Commit Messages (Tier 2) - +3.0 points potential
   - Configure conventional commits with commitlint
3. File Size Limits (Tier 2) - +3.0 points potential
   - Refactor large files into smaller, focused modules
4. Concise Documentation (Tier 2) - +3.0 points potential
   - Make documentation more concise and structured
5. Architecture Decision Records (ADRs) (Tier 3) - +1.5 points potential
   - Create Architecture Decision Records (ADRs) directory and document key decisions

## 📝 Assessment Metadata

- AgentReady Version: v2.8.1
- Research Version: v1.0.0
- Repository Snapshot: 364b5e2
- Assessment Duration: 1.3s
- Assessed By: runner@runnervmoqczp
- Assessment Date: December 03, 2025 at 10:18 PM

🤖 Generated with Claude Code

@jeremyeder jeremyeder merged commit fea0b3e into main Dec 3, 2025
9 of 11 checks passed
github-actions bot pushed a commit that referenced this pull request Dec 3, 2025
# [2.9.0](v2.8.1...v2.9.0) (2025-12-03)

### Features

* Community Leaderboard for AgentReady Scores ([#146](#146)) ([fea0b3e](fea0b3e))
@github-actions
Contributor

github-actions bot commented Dec 3, 2025

🎉 This PR is included in version 2.9.0 🎉

The release is available on the GitHub releases page.

Your semantic-release bot 📦🚀
