An AI-powered GitHub Action that analyzes, evaluates, and (optionally) refactors GitHub issues into high-quality user stories. It uses Azure OpenAI (via Semantic Kernel) to provide structured feedback, suggest existing repository labels, and generate an improved story only when appropriate.
| Feature | What It Does | Why It Matters |
|---|---|---|
| Structured AI Evaluation | Produces a concise summary plus completeness flags (title, description, acceptance criteria), importance, and readiness status. | Quickly surfaces gaps so authors know exactly what to improve—reduces reviewer back‑and‑forth. |
| Context-Aware Refactored Story | Generates an improved title, description, and acceptance criteria only when the original is clear but incomplete. | Avoids noisy rewrites; delivers value-added refinement without overwriting already good work. |
| Existing Label Suggestions | Suggests up to 3 labels strictly from the repository's current label set. | Ensures consistency with existing taxonomy; prevents accidental creation of near-duplicate labels. |
| Deterministic Markdown Round‑Trip | Parses its own prior comments and regenerates output safely. | Allows iterative /review cycles without formatting drift or duplicated sections. |
| Command Workflow (/review, /apply, /usage, /disable) | Users drive when to re-evaluate, apply changes, view help, or pause automation. | Transparent, reversible control keeps humans in charge of changes to issues. |
| Safe Disable Mechanism | Embeds a hidden HTML marker to stop future automatic evaluations. | Lightweight opt-out per issue with an easy path to re-enable by removing the marker. |
| Evaluation Harness (evaluations/) | Optional script to score agent output (task adherence, intent resolution, completeness). | Enables data-driven tuning and regression checks for prompt / logic changes. |
| Minimal Required Inputs | Leverages the GITHUB_REPOSITORY env var and only requires a comment ID for comment events. | Reduces configuration errors and speeds adoption. |
| Immutable Config Layer | Centralized, read-only configuration objects (config.py). | Lowers the risk of side effects and simplifies reasoning about runtime behavior. |
| Clear Refactor Conditions | Skips refactored story if issue is already “ready” or unclear. | Prevents misleading auto-generated content and focuses effort where it yields ROI. |
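Based on the completeness flags and readiness fields described above, the structured evaluation can be pictured as a small data model. The class and field names below are illustrative, inferred from the table, not the action's actual schema:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class IssueEvaluation:
    """Hypothetical shape of the structured evaluation described above."""
    summary: str
    has_title: bool
    has_description: bool
    has_acceptance_criteria: bool
    importance: str                # e.g. "high" / "medium" / "low"
    ready_to_work: bool
    base_story_not_clear: bool
    # At most 3 labels, drawn strictly from the repository's existing label set.
    suggested_labels: List[str] = field(default_factory=list)


evaluation = IssueEvaluation(
    summary="Login story is clear but lacks acceptance criteria.",
    has_title=True,
    has_description=True,
    has_acceptance_criteria=False,
    importance="high",
    ready_to_work=False,
    base_story_not_clear=False,
    suggested_labels=["enhancement"],
)
```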
TL;DR: The agent elevates raw issue text into implementable user stories with targeted, non-intrusive assistance—accelerating grooming while preserving maintainers' control.
The action runs inside a Docker container. Azure OpenAI credentials are required for both `issues` and `issue_comment` events.
The action automatically uses GitHub's built-in environment variables:
```yaml
name: AI Issue Enhancer
on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created]
jobs:
  enhance:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Run Repo Agent
        uses: mattdot/repoagent@v1
        with:
          azure_openai_api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
          azure_openai_target_uri: ${{ secrets.AZURE_OPENAI_TARGET_URI }}
```

For local development, install the dependencies:

```bash
pip install -r requirements.txt
```

Run individual modules (e.g. parsing) as needed from `src/`.
Issue event simulation using GitHub environment variables:
```bash
docker build -t repoagent .

# Create a GitHub event payload file
cat > /tmp/event.json << 'EOF'
{
  "issue": {
    "number": 123
  }
}
EOF

docker run --rm \
  -e GITHUB_EVENT_NAME=issues \
  -e GITHUB_EVENT_PATH=/tmp/event.json \
  -e GITHUB_TOKEN=ghp_xxx \
  -e GITHUB_REPOSITORY=owner/repo \
  -e INPUT_AZURE_OPENAI_API_KEY=xxx \
  -e INPUT_AZURE_OPENAI_TARGET_URI=https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-XX-XX \
  -v /tmp/event.json:/tmp/event.json:ro \
  repoagent
```

Issue comment event simulation (with a comment ID):
```bash
cat > /tmp/event.json << 'EOF'
{
  "issue": {
    "number": 123
  },
  "comment": {
    "id": 456
  }
}
EOF

docker run --rm \
  -e GITHUB_EVENT_NAME=issue_comment \
  -e GITHUB_EVENT_PATH=/tmp/event.json \
  -e GITHUB_TOKEN=ghp_xxx \
  -e GITHUB_REPOSITORY=owner/repo \
  -e INPUT_AZURE_OPENAI_API_KEY=xxx \
  -e INPUT_AZURE_OPENAI_TARGET_URI=https://<resource>.openai.azure.com/... \
  -v /tmp/event.json:/tmp/event.json:ro \
  repoagent
```

- `src/main.py` – Entry point, event flow
- `src/github_utils.py` – GitHub API helpers & disable marker logic
- `src/openai_utils.py` – Azure OpenAI / Semantic Kernel orchestration
- `src/response_models.py` – JSON ↔ markdown parsing & serialization
- `src/prompts.py` – Prompt construction
- `evaluations/` – Offline evaluation harness (Azure AI evaluators)
- `action.yml` – Action metadata / inputs
- `requirements.txt` – Runtime dependencies
All action inputs map automatically to environment variables named `INPUT_<NAME>`. GitHub context is extracted from the standard GitHub environment variables. Azure OpenAI credentials are required for both supported events.
| Input Name | Required | Default | Events | Description |
|---|---|---|---|---|
| `github_token` | No | `${{ github.token }}` | all | Token with `issues: write` permission |
| `azure_openai_api_key` | Yes | - | all | Azure OpenAI API key |
| `azure_openai_target_uri` | Yes | - | all | Full chat completions endpoint URL |
Notes:
- GitHub context (`event_name`, `issue_id`, `issue_comment_id`, `repository`) is automatically extracted from GitHub's standard environment variables (`GITHUB_EVENT_NAME`, `GITHUB_EVENT_PATH`, `GITHUB_TOKEN`, `GITHUB_REPOSITORY`).
- These values are not configurable through action inputs; they are always read from the GitHub Actions environment.
A `refactored_story` section is included only when:

- `ready_to_work` is false, AND
- `base_story_not_clear` is false

If the story is already good (`ready_to_work = true`) or unclear (`base_story_not_clear = true`), no refactored story is generated (an explanatory note is shown for the unclear case).
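The condition above reduces to a single boolean check; a minimal sketch (the function name is illustrative):

```python
def should_generate_refactored_story(ready_to_work: bool, base_story_not_clear: bool) -> bool:
    """A refactored story is produced only when the issue is not yet ready
    to work but is clear enough to be improved."""
    return not ready_to_work and not base_story_not_clear


# Already ready, or too unclear: no refactored story is generated.
assert should_generate_refactored_story(True, False) is False
assert should_generate_refactored_story(False, True) is False
# Clear but incomplete: a refactored story is generated.
assert should_generate_refactored_story(False, False) is True
```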
- `issues` – Evaluate newly opened / edited issues.
- `issue_comment` – Execute commands (`/apply`, `/review`, etc.).
| Command | Purpose | Notes |
|---|---|---|
| `/review` | Run (or re-run) an evaluation and post results. | Use iteratively as the issue evolves. |
| `/apply` | Apply the latest AI-refactored title/body/labels. | Only works if a prior evaluation comment exists. |
| `/usage` | Show help / command list. | Always available. |
| `/disable` | Stop automatic AI evaluations for this issue. | Adds a hidden HTML marker. Remove the marker and comment `/review` to re-enable. |
Disable marker used: `<!-- agent:disabled -->`.
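Detecting the opt-out can be sketched as a substring check for that marker. The helper below is illustrative, not the action's actual implementation; it checks both the issue body and comments since the README does not specify where the marker is embedded:

```python
DISABLE_MARKER = "<!-- agent:disabled -->"


def is_agent_disabled(issue_body: str, comment_bodies: list[str]) -> bool:
    """True if the hidden disable marker appears in the issue body or any comment."""
    return any(DISABLE_MARKER in text for text in [issue_body, *comment_bodies])


assert is_agent_disabled("Some issue text", ["/disable", "<!-- agent:disabled -->"]) is True
assert is_agent_disabled("Clean issue", ["just a normal comment"]) is False
```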
The `evaluations/` folder provides `evaluator.py`, leveraging Azure AI Foundry evaluators (Task Adherence, Intent Resolution, Response Completeness). See `evaluations/README.md` for environment variable requirements and execution steps:
```bash
cd evaluations
python evaluator.py
```

Grant the workflow only the permissions it needs (minimum: `issues: write`, `contents: read`). Required secrets: the Azure OpenAI key and target URI.
See CONTRIBUTING.md for guidelines.
MIT – see LICENSE.