
Conversation

@jeremyeder
Contributor

Summary

  • Add prominent report header with repository metadata to all report formats (HTML, Markdown, JSON)
  • Addresses P0 requirement: users can now immediately identify what repository was assessed
  • Critical for multi-repository workflows and CI/CD integration

Changes

  • New AssessmentMetadata model to capture execution context:
    • AgentReady version (from package metadata)
    • Assessment timestamp (ISO 8601 + human-readable)
    • Executed by (username@hostname)
    • CLI command used
    • Working directory
  • Updated Assessment model with optional metadata field
  • Scanner service now collects metadata automatically
  • All reporters updated to display metadata:
    • HTML: Two-column header (repo info + execution metadata)
    • Markdown: Prominent header with all metadata fields
    • JSON: Metadata object at top level
  • Comprehensive testing: 4 new unit tests, all 37 tests passing
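Given the fields listed above, the AssessmentMetadata model can be pictured roughly as follows (a dataclass sketch for illustration; the project's actual class, which may well be a Pydantic model, is not shown in this PR, and the field names mirror the JSON example below):

```python
from dataclasses import dataclass


@dataclass
class AssessmentMetadata:
    """Execution context captured for every assessment (illustrative sketch)."""

    agentready_version: str          # from package metadata, e.g. "1.2.0"
    assessment_timestamp: str        # ISO 8601
    assessment_timestamp_human: str  # e.g. "November 21, 2025 at 3:12 PM"
    executed_by: str                 # username@hostname
    command: str                     # CLI command used
    working_directory: str
```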

Example Output

Markdown Report Header

# 🤖 AgentReady Assessment Report

**Repository**: agentready
**Path**: `/Users/jeder/repos/agentready`
**Branch**: `main` | **Commit**: `90b74b8b`
**Assessed**: November 21, 2025 at 3:12 PM
**AgentReady Version**: 1.2.0
**Run by**: jeder@macbook

---

JSON Metadata

{
  "metadata": {
    "agentready_version": "1.2.0",
    "assessment_timestamp": "2025-11-21T15:12:02.453762",
    "assessment_timestamp_human": "November 21, 2025 at 3:12 PM",
    "executed_by": "jeder@macbook",
    "command": "agentready assess . --verbose",
    "working_directory": "/Users/jeder/repos/agentready"
  },
  "repository": { ... }
}

Test Results

  • ✅ All 37 tests passing (34 unit + 3 integration)
  • ✅ All linters passing (black, isort, ruff)
  • ✅ End-to-end assessment tested successfully

Acceptance Criteria Met

  • ✅ User can immediately identify what repository was assessed
  • ✅ Timestamp shows when assessment was run
  • ✅ Git context (branch, commit) visible in all reports
  • ✅ AgentReady version tracked for reproducibility
  • ✅ Execution context captured for debugging and auditing

🤖 Generated with Claude Code

Add prominent report header showing repository context and assessment metadata to all report formats (HTML, Markdown, JSON).

Changes:
- Create AssessmentMetadata model to capture execution context
  - AgentReady version from package metadata
  - Assessment timestamp (ISO 8601 and human-readable)
  - Executed by (username@hostname)
  - CLI command used
  - Working directory
- Update Assessment model with optional metadata field
- Implement metadata collection in Scanner service
  - Get version from importlib.metadata
  - Reconstruct command from sys.argv
  - Capture user and hostname from environment
- Update all reporters to display metadata
  - HTML: Two-column header (repo info + meta info)
  - Markdown: Prominent header with all metadata fields
  - JSON: Metadata object at top level
- Add comprehensive unit tests (4 new tests, all passing)
- All 37 tests passing (34 unit + 3 integration)
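The collection step described above (importlib.metadata for the version, sys.argv for the command, user and hostname from the environment) can be sketched as a standalone function; the actual Scanner method name and return type are assumptions here:

```python
import getpass
import os
import socket
import sys
from datetime import datetime
from importlib import metadata


def collect_metadata() -> dict:
    """Gather execution context for the report header (illustrative sketch)."""
    try:
        version = metadata.version("agentready")
    except metadata.PackageNotFoundError:
        version = "unknown"  # running from source without an installed package
    return {
        "agentready_version": version,
        "assessment_timestamp": datetime.now().isoformat(),
        "executed_by": f"{getpass.getuser()}@{socket.gethostname()}",
        "command": " ".join(sys.argv),
        "working_directory": os.getcwd(),
    }
```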

Acceptance criteria met:
✅ User can identify repository assessed (name, path, branch, commit)
✅ Timestamp shows when assessment was run
✅ Git context visible in all reports
✅ AgentReady version tracked for reproducibility
✅ Execution context captured (user@host, command, cwd)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Copy link

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 35 to +38
from .bootstrap import bootstrap
from .demo import demo
from .learn import learn
from .repomix import repomix_generate


P1 Badge Remove import of nonexistent repomix CLI command

The CLI module now imports repomix_generate from .repomix, but there is no src/agentready/cli/repomix.py in the package. Because the import is executed at module load time, any agentready invocation now raises ModuleNotFoundError: No module named 'agentready.cli.repomix' before parsing arguments, effectively breaking the CLI. Unless a repomix command is added, this import should be dropped or gated.
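Until the repomix command actually ships, one way to avoid the load-time crash is to gate the import and register the command only when the module exists. This is a sketch under that assumption, not the project's actual wiring:

```python
# Gate an optional CLI command so a missing module degrades gracefully
# instead of raising ModuleNotFoundError before argument parsing.
try:
    from agentready.cli.repomix import repomix_generate  # may not exist yet
except ImportError:
    repomix_generate = None

# When registering commands (e.g. with Click, which the CLI appears to use):
# if repomix_generate is not None:
#     cli.add_command(repomix_generate)
```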


Comment on lines +71 to +73
# Format timestamps
iso_timestamp = timestamp.isoformat()
human_timestamp = timestamp.strftime("%B %d, %Y at %-I:%M %p")


P1 Badge Use portable time format when building metadata

AssessmentMetadata formats the human-readable timestamp with timestamp.strftime("%B %d, %Y at %-I:%M %p"). The %-I flag is POSIX-only; on Windows strftime raises ValueError: Invalid format string, so every assessment on Windows will crash when metadata is created (the scanner doesn’t guard this call). Use a portable directive (e.g., %I with lstrip('0')) or conditionally choose the format to keep Windows users working.
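A portable fix along the lines Codex suggests: format with the standard %I directive, then strip the leading zero manually (the helper name here is an assumption):

```python
from datetime import datetime


def human_timestamp(ts: datetime) -> str:
    """Portable replacement for POSIX-only '%-I': strip the leading zero."""
    return ts.strftime("%B %d, %Y at %I:%M %p").replace(" at 0", " at ")
```

Because only standard strftime directives are used, this works on Windows as well as POSIX platforms.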


@jeremyeder jeremyeder merged commit 7a8b34a into main Nov 21, 2025
3 of 6 checks passed
github-actions bot pushed a commit that referenced this pull request Nov 21, 2025
# [1.3.0](v1.2.0...v1.3.0) (2025-11-21)

### Features

* add report header with repository metadata ([#28](#28)) ([7a8b34a](7a8b34a))
@github-actions
Contributor

🎉 This PR is included in version 1.3.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

jeremyeder added a commit that referenced this pull request Nov 21, 2025
jeremyeder pushed a commit that referenced this pull request Nov 21, 2025
jeremyeder added a commit that referenced this pull request Nov 21, 2025
…ration (#29)

* feat: Add Repomix integration for AI-friendly repository context generation

Implements comprehensive Repomix integration as specified in coldstart prompt
008-repomix-integration.md. This feature enables automated repository context
generation for AI consumption and improves AI-assisted development workflows.

## Features Added

### Core Components
- **RepomixService** (`src/agentready/services/repomix.py`)
  - Configuration generation (repomix.config.json)
  - Ignore file generation (.repomixignore)
  - Repomix execution wrapper with error handling
  - Freshness checking (7-day default staleness threshold)
  - Output file management and discovery

- **CLI Command** (`src/agentready/cli/repomix.py`)
  - `agentready repomix-generate` - Main command
  - `--init` - Initialize Repomix configuration
  - `--format` - Output format selection (markdown/xml/json/plain)
  - `--check` - Verify output freshness without regeneration
  - `--max-age` - Configurable staleness threshold

- **Bootstrap Integration** (`src/agentready/cli/bootstrap.py`)
  - Added `--repomix` flag to bootstrap command
  - Auto-generates repomix.config.json and .repomixignore
  - Creates GitHub Actions workflow for automation

- **GitHub Actions Workflow** (`src/agentready/templates/bootstrap/workflows/repomix-update.yml.j2`)
  - Auto-regenerates on push to main and PRs
  - Weekly scheduled runs (Mondays 9 AM UTC)
  - Manual trigger support via workflow_dispatch
  - PR comments when Repomix output changes

- **Repomix Assessor** (`src/agentready/assessors/repomix.py`)
  - Tier 3 attribute (weight: 0.02)
  - Checks for configuration file existence
  - Validates output freshness (< 7 days)
  - Provides detailed remediation guidance
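The freshness validation described for the assessor reduces to comparing the output file's modification time against the staleness threshold; the function and parameter names below are assumptions, not the actual RepomixService API:

```python
import time
from pathlib import Path


def is_fresh(output_path: Path, max_age_days: int = 7) -> bool:
    """True if the Repomix output exists and was modified within max_age_days."""
    if not output_path.exists():
        return False
    age_seconds = time.time() - output_path.stat().st_mtime
    return age_seconds < max_age_days * 86_400  # seconds per day
```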

### Testing
- Comprehensive unit tests (21 test cases)
- RepomixService tests with mocking
  - Installation detection
  - Config/ignore generation
  - Freshness checks
  - Command execution
- RepomixConfigAssessor tests
  - Multiple assessment scenarios
  - Pass/fail/partial compliance

### Documentation
- Updated repomix-output.md (1.8M, 420k tokens, 156 files)
- AgentReady self-assessment: **80.0/100 (Gold)** 🥇

## Technical Details

### Architecture
- Follows existing AgentReady patterns
- Strategy pattern for assessor
- Service layer for business logic
- Template-based workflow generation

### Integration Points
- Registered in main CLI (`src/agentready/cli/main.py`)
- Added to bootstrap generator (`src/agentready/services/bootstrap.py`)
- Included in assessor list (Tier 3 Important)

### Configuration Management
- Smart defaults for Python projects
- Customizable ignore patterns
- Aligned with existing .gitignore patterns
- Security scanning enabled by default

## Use Cases

```bash
# Initialize Repomix for repository
agentready repomix-generate --init

# Generate AI-friendly context
agentready repomix-generate

# Bootstrap new repo with Repomix
agentready bootstrap --repomix

# Check if output is fresh
agentready repomix-generate --check
```

## Related
- Coldstart Prompt: `coldstart-prompts/08-repomix-integration.md`
- Priority: P4 (Enhancement)
- Category: AI-Assisted Development Tools

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore(release): 1.1.2 [skip ci]

## [1.1.2](v1.1.1...v1.1.2) (2025-11-21)

### Bug Fixes

* correct GitHub repository link in site navigation ([5492278](5492278))

* feat: Add automated demo command for AgentReady (#24)

* feat: Add automated demo command for AgentReady

Implements P0 feature to showcase AgentReady capabilities with a single command.

Features:
- agentready demo command creates sample repository and runs assessment
- Supports Python (default) and JavaScript demo repositories
- Real-time progress indicators with color-coded output (✓/✗/⊘)
- Auto-opens HTML report in browser (optional --no-browser flag)
- Generates HTML, Markdown, and JSON reports in .agentready-demo/
- Options: --language, --no-browser, --keep-repo

Implementation:
- Created src/agentready/cli/demo.py with demo command logic
- Registered demo command in CLI main.py
- Added comprehensive unit tests in tests/unit/test_demo.py
- Demo completes in ~3-5 seconds with 25 attribute assessments

Perfect for presentations, stakeholder demos, and user onboarding.

Closes #1

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Jeremy Eder <jeremyeder@users.noreply.github.com>

* fix: install AgentReady from source in CI workflow

The workflow was attempting to install agentready from PyPI, but the
package has never been published. Changed to install from checked-out
repository source using 'pip install -e .' instead.

This ensures CI tests the actual code in the PR rather than a
potentially stale PyPI version.

Fixes GitHub Actions workflow failure on PR #24.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add manual PyPI publishing workflow

Added workflow_dispatch workflow for manual PyPI publishing with:
- Dry run mode (publishes to TestPyPI for testing)
- Production mode (publishes to PyPI)
- Optional version override
- Automatic GitHub release creation
- Built-in validation with twine check
- Secure: Uses environment variables to prevent command injection

Usage:
1. Configure TEST_PYPI_TOKEN and PYPI_TOKEN secrets
2. Go to Actions → "Publish to PyPI" → Run workflow
3. Choose dry run (TestPyPI) or production (PyPI)

Future enhancement tracked in issue #25 for automated integration
with semantic-release workflow.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: implement continuous learning loop with LLM-powered skill extraction

Add comprehensive learning system that extracts high-quality skills from assessments:
- New 'learn' CLI command with heuristic and LLM enrichment modes
- Claude API integration for detailed skill analysis and instruction generation
- LLM response caching system (7-day TTL) to reduce API costs
- Code sampling for real repository examples
- Pattern extraction from high-scoring assessment attributes
- Support for multiple output formats (JSON, SKILL.md, GitHub issues)
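The response cache with a 7-day TTL can be sketched as a hash-keyed file store; the class and method names here are assumptions, not the actual llm_cache API:

```python
import hashlib
import json
import time
from pathlib import Path


class LLMCache:
    """File-backed cache keyed by prompt hash, with time-based expiry (sketch)."""

    def __init__(self, cache_dir: Path, ttl_seconds: int = 7 * 86_400):
        self.cache_dir = cache_dir
        self.ttl = ttl_seconds
        cache_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, prompt: str) -> Path:
        digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        return self.cache_dir / f"{digest}.json"

    def get(self, prompt: str):
        path = self._path(prompt)
        if not path.exists() or time.time() - path.stat().st_mtime > self.ttl:
            return None  # miss, or entry older than the TTL
        return json.loads(path.read_text())["response"]

    def put(self, prompt: str, response: str) -> None:
        self._path(prompt).write_text(json.dumps({"response": response}))
```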

Technical additions:
- src/agentready/cli/learn.py - Main learning command implementation
- src/agentready/learners/ - Pattern extraction and LLM enrichment modules
- src/agentready/services/llm_cache.py - LLM response caching
- src/agentready/models/discovered_skill.py - Skill data model
- tests/unit/learners/ - Unit tests for learning modules
- .github/workflows/continuous-learning.yml - CI workflow
- .github/CLAUDE_INTEGRATION.md - Integration documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Jeremy Eder <jeremyeder@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>

* chore(release): 1.2.0 [skip ci]

# [1.2.0](v1.1.2...v1.2.0) (2025-11-21)

### Features

* Add automated demo command for AgentReady ([#24](#24)) ([f4e89d9](f4e89d9)), closes [#1](#1) [#25](#25) [hi#quality](https://github.com/hi/issues/quality) [hi#scoring](https://github.com/hi/issues/scoring)

* test(bootstrap): add comprehensive test coverage for bootstrap feature (#26)

Add 32 tests (19 unit, 13 integration) for bootstrap functionality:
- BootstrapGenerator service: 100% coverage (101/101 lines)
- Bootstrap CLI: 92% coverage (37/40 lines)
- Tests cover dry-run mode, file generation, language detection,
  template rendering, error handling, and edge cases

Also fix ambiguous variable name linting error in code_quality.py
(line 300: l → line for better readability)

All tests passing, all linters clean.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>

* feat: add report header with repository metadata (#28)


* chore(release): 1.3.0 [skip ci]


---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: semantic-release-bot <semantic-release-bot@martynus.net>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Jeremy Eder <jeremyeder@users.noreply.github.com>
