t1085.1: AI Supervisor context builder (#1607)
Create supervisor/ai-context.sh that assembles a comprehensive project snapshot for the AI reasoning engine. Outputs structured markdown with:

- Open GitHub issues (title, labels, age, comments, assignees)
- Recent PRs (state, reviews, CI status)
- TODO.md state (open, blocked, dispatchable tasks)
- Supervisor DB state (running workers, recent completions)
- Recent worker outcomes and evaluations
- Pattern tracker summary (full scope)
- Recent cross-session memories (full scope)
- Queue health metrics (7d throughput, success rate, retries)

Supports --scope full|quick for context depth control. Runs standalone for testing or sourced by supervisor-helper.sh. Adds 'ai-context' subcommand to supervisor CLI.
Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request delivers the foundational component for the Supervisor Intelligence Upgrade, specifically focusing on providing the AI reasoning engine with a rich, structured understanding of the project's current state. By aggregating data from various sources like GitHub, the project's TODO list, and internal database metrics, it enables the AI to make informed decisions regarding issue triage, task management, and overall project oversight. This initial phase establishes the crucial data pipeline for subsequent AI-driven functionalities.
Walkthrough

This PR adds an AI context-building feature to the supervisor system. A new supervisor/ai-context.sh script builds the context snapshot, and a new ai-context subcommand is wired into supervisor-helper.sh.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as supervisor-helper.sh
    participant Builder as ai-context.sh<br/>build_ai_context()
    participant Sections as Section Builders
    participant GitHub as GitHub CLI<br/>(gh)
    participant Files as DB/Files<br/>(TODO, DB, logs)
    participant Helpers as External Helpers<br/>(memory, patterns)
    User->>CLI: ai-context [--scope] [--repo]
    CLI->>CLI: Resolve repo_path & scope
    CLI->>Builder: build_ai_context(repo_path, scope)
    Builder->>Sections: Invoke section builders
    Sections->>GitHub: Fetch issues & PRs
    GitHub-->>Sections: Issue/PR data
    Sections->>Files: Parse TODO.md
    Files-->>Sections: Task counts & listings
    Sections->>Files: Query Supervisor DB
    Files-->>Sections: Task status & workers
    Sections->>Files: Read state_log
    Files-->>Sections: Recent outcomes
    alt Full Scope Only
        Sections->>Helpers: Query pattern-tracker
        Helpers-->>Sections: Pattern stats
        Sections->>Helpers: Invoke memory-helper
        Helpers-->>Sections: Memory entries
    end
    Sections->>Files: Aggregate queue metrics
    Files-->>Sections: 7-day stats
    Sections->>Builder: Return all sections
    Builder->>Builder: Assemble markdown
    Builder-->>CLI: Formatted context document
    CLI-->>User: Output markdown to stdout
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics
Generated on: Wed Feb 18 01:10:33 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Code Review
This pull request introduces a new script, ai-context.sh, to assemble a comprehensive project snapshot for an AI reasoning engine, and integrates it into the existing supervisor-helper.sh. The new script is well-structured, but my review has identified several opportunities for improvement, primarily concerning performance and adherence to the repository's shell scripting style guide. Key feedback points include refactoring inefficient jq calls within loops (aligning with best practices for efficient data processing in shell scripts), addressing a style guide violation in local variable declarations, improving the efficiency of string building, and using more performant shell built-ins for string manipulation. The suggested changes will enhance the script's performance, maintainability, and consistency with project standards.
```shell
local i=0
while [[ $i -lt $issue_count ]]; do
    local num title labels created comments assignee age_days
    num=$(printf '%s' "$issues_json" | jq -r ".[$i].number" 2>/dev/null || echo "?")
    title=$(printf '%s' "$issues_json" | jq -r ".[$i].title" 2>/dev/null || echo "?")
    labels=$(printf '%s' "$issues_json" | jq -r "[.[$i].labels[].name] | join(\", \")" 2>/dev/null || echo "")
    created=$(printf '%s' "$issues_json" | jq -r ".[$i].createdAt" 2>/dev/null || echo "")
    comments=$(printf '%s' "$issues_json" | jq -r ".[$i].comments | length" 2>/dev/null || echo "0")
    assignee=$(printf '%s' "$issues_json" | jq -r "[.[$i].assignees[].login] | join(\", \")" 2>/dev/null || echo "")

    # Calculate age in days
    if [[ -n "$created" && "$created" != "null" ]]; then
        local created_epoch now_epoch
        created_epoch=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$created" "+%s" 2>/dev/null || date -d "$created" "+%s" 2>/dev/null || echo 0)
        now_epoch=$(date "+%s")
        age_days=$(((now_epoch - created_epoch) / 86400))
    else
        age_days="?"
    fi

    # Truncate title for table readability
    if [[ ${#title} -gt 60 ]]; then
        title="${title:0:57}..."
    fi

    output+="| #$num | $title | $labels | ${age_days}d | $comments | ${assignee:-none} |\n"
    i=$((i + 1))
done
```
Calling jq multiple times inside a while loop for each field of each issue is highly inefficient. It spawns a new process for every single field extraction. A much more performant approach is to use a single jq command to process the entire JSON array and format the output (e.g., as TSV), then pipe that to a single while read loop.
Suggested change (replacing the loop above with a single jq pass):

```shell
printf '%s' "$issues_json" | jq -r '.[] | [.number, .title, ([.labels[].name] | join(", ")), .createdAt, (.comments | length), ([.assignees[].login] | join(", "))] | @tsv' |
    while IFS=$'\t' read -r num title labels created comments assignee; do
        local age_days
        # Calculate age in days
        if [[ -n "$created" && "$created" != "null" ]]; then
            local created_epoch now_epoch
            created_epoch=$(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$created" "+%s" 2>/dev/null || date -d "$created" "+%s" 2>/dev/null || echo 0)
            now_epoch=$(date "+%s")
            age_days=$(((now_epoch - created_epoch) / 86400))
        else
            age_days="?"
        fi
        # Truncate title for table readability
        if [[ ${#title} -gt 60 ]]; then
            title="${title:0:57}..."
        fi
        output+="| #$num | $title | $labels | ${age_days}d | $comments | ${assignee:-none} |\n"
    done
```
References

- To efficiently process structured data in shell scripts, use a single command (like `jq`) to extract and format all necessary data, then pipe it to a single `while read` loop with `IFS` for parsing. This reduces process overhead compared to calling the command multiple times in a loop.
```shell
local i=0
while [[ $i -lt $pr_count ]]; do
    local num title state branch author reviews_summary ci_status
    num=$(printf '%s' "$prs_json" | jq -r ".[$i].number" 2>/dev/null || echo "?")
    title=$(printf '%s' "$prs_json" | jq -r ".[$i].title" 2>/dev/null || echo "?")
    state=$(printf '%s' "$prs_json" | jq -r ".[$i].state" 2>/dev/null || echo "?")
    branch=$(printf '%s' "$prs_json" | jq -r ".[$i].headRefName" 2>/dev/null || echo "?")
    author=$(printf '%s' "$prs_json" | jq -r ".[$i].author.login" 2>/dev/null || echo "?")

    # Summarise reviews
    local approved rejected commented
    approved=$(printf '%s' "$prs_json" | jq "[.[$i].reviews[] | select(.state==\"APPROVED\")] | length" 2>/dev/null || echo 0)
    rejected=$(printf '%s' "$prs_json" | jq "[.[$i].reviews[] | select(.state==\"CHANGES_REQUESTED\")] | length" 2>/dev/null || echo 0)
    commented=$(printf '%s' "$prs_json" | jq "[.[$i].reviews[] | select(.state==\"COMMENTED\")] | length" 2>/dev/null || echo 0)
    reviews_summary="${approved}A/${rejected}R/${commented}C"

    # CI status rollup
    local success_count failure_count pending_count
    success_count=$(printf '%s' "$prs_json" | jq "[.[$i].statusCheckRollup[] | select(.conclusion==\"SUCCESS\")] | length" 2>/dev/null || echo 0)
    failure_count=$(printf '%s' "$prs_json" | jq "[.[$i].statusCheckRollup[] | select(.conclusion==\"FAILURE\")] | length" 2>/dev/null || echo 0)
    pending_count=$(printf '%s' "$prs_json" | jq "[.[$i].statusCheckRollup[] | select(.conclusion==null or .conclusion==\"PENDING\")] | length" 2>/dev/null || echo 0)

    if [[ "$failure_count" -gt 0 ]]; then
        ci_status="FAIL($failure_count)"
    elif [[ "$pending_count" -gt 0 ]]; then
        ci_status="PENDING($pending_count)"
    elif [[ "$success_count" -gt 0 ]]; then
        ci_status="PASS($success_count)"
    else
        ci_status="none"
    fi

    # Truncate title
    if [[ ${#title} -gt 50 ]]; then
        title="${title:0:47}..."
    fi

    output+="| #$num | $title | $state | $branch | $author | $reviews_summary | $ci_status |\n"
    i=$((i + 1))
done
```
Similar to the issue in build_issues_context, this loop calls jq multiple times for each pull request, which is very inefficient. This can be optimized by using a single, more complex jq query to extract all necessary data at once and piping the result to a while read loop. This significantly reduces the number of spawned processes and improves performance.
Suggested change (single jq pass for PRs):

```shell
printf '%s' "$prs_json" | jq -r '.[] | [.number, .title, .state, .headRefName, .author.login, ([.reviews[] | select(.state=="APPROVED")] | length), ([.reviews[] | select(.state=="CHANGES_REQUESTED")] | length), ([.reviews[] | select(.state=="COMMENTED")] | length), ([.statusCheckRollup[] | select(.conclusion=="SUCCESS")] | length), ([.statusCheckRollup[] | select(.conclusion=="FAILURE")] | length), ([.statusCheckRollup[] | select(.conclusion==null or .conclusion=="PENDING")] | length)] | @tsv' |
    while IFS=$'\t' read -r num title state branch author approved rejected commented success_count failure_count pending_count; do
        local reviews_summary="${approved}A/${rejected}R/${commented}C"
        local ci_status
        if [[ "$failure_count" -gt 0 ]]; then
            ci_status="FAIL($failure_count)"
        elif [[ "$pending_count" -gt 0 ]]; then
            ci_status="PENDING($pending_count)"
        elif [[ "$success_count" -gt 0 ]]; then
            ci_status="PASS($success_count)"
        else
            ci_status="none"
        fi
        # Truncate title
        if [[ ${#title} -gt 50 ]]; then
            title="${title:0:47}..."
        fi
        output+="| #$num | $title | $state | $branch | $author | $reviews_summary | $ci_status |\n"
    done
```

References

- To efficiently process structured data in shell scripts, use a single command (like `jq`) to extract and format all necessary data, then pipe it to a single `while read` loop with `IFS` for parsing. This reduces process overhead compared to calling the command multiple times in a loop.
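The `while read` half of this pattern can be exercised without `jq` at all. A small sketch with inline tab-separated records (sample values are made up, not from this PR):

```shell
# Feed two TSV records through a single read loop.
# IFS=$'\t' splits on tabs only, so fields may contain spaces.
printf '1\tFix login bug\tbug\n2\tAdd docs\tdocs\n' |
while IFS=$'\t' read -r num title labels; do
    echo "#${num}: ${title} [${labels}]"
done
```

One caveat worth noting: the loop body runs in a pipeline subshell, so variables set inside it (such as an accumulated `output` string) are not visible after `done` unless the loop reads from process substitution or the script enables `lastpipe`.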
```shell
local repo_path="${1:-$REPO_PATH}"
local scope="${2:-full}"
```
The way local variables are declared and assigned in a single line violates the repository's style guide. The style guide requires declaring and assigning local variables separately to ensure exit code safety, especially under set -e.
This pattern is repeated throughout the file in other functions. Please apply the fix consistently.
Suggested change:

```shell
local repo_path
repo_path="${1:-$REPO_PATH}"
local scope
scope="${2:-full}"
```
References

- Rule 11: Use `local var="$1"` pattern in functions (declare and assign separately for exit code safety). The current code combines declaration and assignment, which can mask exit codes from command substitutions if `set -e` is active. (link)
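The exit-code concern behind Rule 11 is easy to reproduce: `local` is itself a command, and its (successful) status overwrites whatever the command substitution returned. A minimal sketch:

```shell
# `local` returns 0 regardless of what the command substitution did,
# so `set -e` never sees the failure of `false`.
combined() { local v=$(false); echo "combined rc=$?"; }

# Separate assignment preserves the substitution's exit status.
separate() { local v; v=$(false); echo "separate rc=$?"; }

combined   # prints: combined rc=0
separate   # prints: separate rc=1
```

Under `set -e`, the combined form silently continues after a failing substitution, while the separate form would abort (or allow an explicit `|| fallback`).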
```shell
local context=""

# Header
context+="# AI Supervisor Context Snapshot\n\n"
context+="Generated: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\n"
context+="Repo: $(basename "$repo_path")\n"
context+="Scope: $scope\n\n"

# Section 1: Open GitHub Issues
context+="$(build_issues_context "$repo_path" "$scope")\n\n"

# Section 2: Recent PRs
context+="$(build_prs_context "$repo_path" "$scope")\n\n"

# Section 3: TODO.md State
context+="$(build_todo_context "$repo_path")\n\n"

# Section 4: Supervisor DB State
context+="$(build_db_context)\n\n"

# Section 5: Recent Worker Outcomes
context+="$(build_outcomes_context)\n\n"

# Section 6: Pattern Tracker Data (full scope only)
if [[ "$scope" == "full" ]]; then
    context+="$(build_patterns_context)\n\n"
fi

# Section 7: Recent Memory Entries (full scope only)
if [[ "$scope" == "full" ]]; then
    context+="$(build_memory_context)\n\n"
fi

# Section 8: Queue Health Metrics
context+="$(build_health_context)\n\n"

printf '%b' "$context"
return 0
```
Building a large string by repeatedly using += is inefficient in Bash and can be slow. A more performant and readable approach is to construct the entire string at once with a single here-document. This also makes the spacing between sections more consistent, as $(...) strips all trailing newlines from each sub-function's output, and the blank lines written explicitly in the here-document then reliably separate sections.
```shell
local context
context=$(cat <<EOF
# AI Supervisor Context Snapshot
Generated: $(date -u '+%Y-%m-%dT%H:%M:%SZ')
Repo: $(basename "$repo_path")
Scope: $scope
$(build_issues_context "$repo_path" "$scope")
$(build_prs_context "$repo_path" "$scope")
$(build_todo_context "$repo_path")
$(build_db_context)
$(build_outcomes_context)
$(if [[ "$scope" == "full" ]]; then
    build_patterns_context
    printf "\n\n"
    build_memory_context
    printf "\n\n"
fi)$(build_health_context)
EOF
)
printf '%s' "$context"
return 0
```

```shell
while IFS= read -r line; do
    # Extract task ID and first 100 chars of description
    local task_id desc
    task_id=$(echo "$line" | grep -oE 't[0-9]+' | head -1)
```
Using echo "$line" | grep ... to extract a substring is inefficient as it creates a subshell and a new process. For this pattern, you can use Bash's built-in regex matching, which is more performant.
This pattern is repeated on lines 273, 274, and 290 and should be updated there as well.
Suggested change:

```shell
[[ "$line" =~ (t[0-9]+) ]] && task_id="${BASH_REMATCH[1]}"
```
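The built-in match populates `BASH_REMATCH` without spawning any processes. A quick sketch on a made-up TODO line:

```shell
# Sample TODO.md line (illustrative, not from the PR).
line='- [ ] t1085 Build AI context snapshot'

task_id=""
# =~ stores capture groups in BASH_REMATCH; index 1 is the first group,
# index 0 is the whole match.
if [[ "$line" =~ (t[0-9]+) ]]; then
    task_id="${BASH_REMATCH[1]}"
fi
echo "$task_id"   # t1085
```

Note that the regex on the right of `=~` must be unquoted (or kept in a variable) for it to be treated as a pattern rather than a literal string.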
Actionable comments posted: 4
🧹 Nitpick comments (5)
.agents/scripts/supervisor/ai-context.sh (5)
224-299: TODO.md parsing is well-structured with good fallback handling.

The differentiation between top-level (`^- \[ \]`) and indented tasks (`^\s*- \[ \]`) is intentional and correct. The `|| true` guards prevent `set -e` failures from `grep` no-match.

One nit: `task_id` is extracted on line 254 but unused — only `desc` (the truncated line) is emitted at line 259.

Remove unused variable:

```diff
 while IFS= read -r line; do
-    # Extract task ID and first 100 chars of description
-    local task_id desc
-    task_id=$(echo "$line" | grep -oE 't[0-9]+' | head -1)
-    desc="${line:0:120}"
+    local desc="${line:0:120}"
     if [[ ${#line} -gt 120 ]]; then
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/supervisor/ai-context.sh around lines 224 - 299, In build_todo_context(), the loop that builds the Open Tasks list declares and sets task_id but never uses it; either remove the unused task_id variable and its assignment or include it in the output (e.g., emit "- $task_id: $desc") so the extracted ID is utilized; update the loop that currently sets local task_id desc and assigns task_id=$(echo "$line" | grep -oE 't[0-9]+' | head -1) accordingly to eliminate the unused variable warning.
546-551: Arithmetic with potentially empty or whitespace-containing DB output.

If `db` returns a result with leading/trailing whitespace (sqlite3 can do this), the arithmetic on line 546 and the `-gt` comparison on line 548 would fail. Consider trimming:

Defensive trimming:

```diff
+completed_7d="${completed_7d//[[:space:]]/}"
+failed_7d="${failed_7d//[[:space:]]/}"
 local total_7d=$((completed_7d + failed_7d))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/supervisor/ai-context.sh around lines 546 - 551, Trim and normalize the DB-derived numeric variables before doing arithmetic: ensure completed_7d and failed_7d are stripped of leading/trailing whitespace and default to 0 when empty, then compute total_7d and perform the -gt check and awk percent using those sanitized values (refer to variables completed_7d, failed_7d, total_7d, and success_rate in ai-context.sh); update the code that sets completed_7d/failed_7d to sanitize input (e.g., remove whitespace and fall back to 0) so the later arithmetic and the if [[ "$total_7d" -gt 0 ]] check never fail on whitespace or empty DB output.
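The trimming idiom plus a zero default makes the arithmetic safe even for empty query results. A standalone sketch with made-up values standing in for sqlite3 output:

```shell
# Simulated DB output: a padded number and an empty result.
completed_7d=" 12 "
failed_7d=""

# Strip all whitespace with parameter expansion (no subprocess),
# then fall back to 0 for empty strings inside the arithmetic.
completed_7d="${completed_7d//[[:space:]]/}"
failed_7d="${failed_7d//[[:space:]]/}"
total_7d=$((${completed_7d:-0} + ${failed_7d:-0}))

echo "$total_7d"   # 12
if [[ "$total_7d" -gt 0 ]]; then
    echo "ok"
fi
```

Without the `:-0` default, `$((12 + ))` on an empty variable is a syntax error; without trimming, `[[ " 12 " -gt 0 ]]` style comparisons can fail depending on the shell.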
102-129: Heavy subprocess overhead: ~180 `jq` invocations for 30 issues.

Each iteration spawns 6+ `jq` processes (one per field). For the maximum 30 issues, that's ~180 subprocesses. A single `jq -r` call can format all rows at once, which is both faster and more robust against pipe characters in titles breaking the markdown table.

Single-pass jq formatting:

```diff
-local i=0
-while [[ $i -lt $issue_count ]]; do
-    local num title labels created comments assignee age_days
-    num=$(printf '%s' "$issues_json" | jq -r ".[$i].number" 2>/dev/null || echo "?")
-    ...
-    i=$((i + 1))
-done
+local now_epoch
+now_epoch=$(date "+%s")
+printf '%s' "$issues_json" | jq -r --arg now "$now_epoch" '
+    .[] |
+    "| #\(.number) | \(.title[:57] + if (.title | length) > 60 then "..." else "" end) | \([.labels[].name] | join(", ")) | \(
+        (($now | tonumber) - (.createdAt | fromdateiso8601)) / 86400 | floor
+    )d | \(.comments | length) | \(([.assignees[].login] | join(", ")) // "none") |"
+' 2>/dev/null || true
```

This also moves the date-age calculation into jq (which handles ISO8601 natively via `fromdateiso8601`), eliminating the BSD/GNU `date` portability concern on line 115.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/supervisor/ai-context.sh around lines 102 - 129, The loop currently spawns multiple jq subprocesses per issue (num/title/labels/created/comments/assignee) and uses shell date arithmetic (created_epoch/now_epoch) — replace the per-field jq calls and the date math with a single jq invocation on issues_json that emits one preformatted table row per item (compute age_days in jq using (now|todate? or now - (.createdAt|fromdateiso8601))/86400, handle nulls with // defaults, and join assignees/labels safely), then append those lines to output; remove the created_epoch/now_epoch/date -d/-j logic and the repeated jq calls inside the while loop, and reference issues_json, age_days, title, labels, createdAt, comments, assignees and output when making the change.
25-67: Context accumulation via `printf '%b'` risks double-interpretation of backslash sequences.

Each section builder already renders its output through `printf '%b'`, producing real newlines on stdout. The command substitution captures that rendered text. Then `context+="$(…)\n\n"` appends literal `\n\n` separators, and the final `printf '%b' "$context"` re-interprets the entire buffer. If any GitHub issue title, PR title, or DB field contains a literal backslash sequence (e.g. `\n`, `\t`), it will be interpreted on this second pass.

A safer approach: use real newlines throughout and finish with `printf '%s'`.

Sketch: avoid double-interpretation

```diff
 build_ai_context() {
     local repo_path="${1:-$REPO_PATH}"
     local scope="${2:-full}"
-    local context=""
-
-    # Header
-    context+="# AI Supervisor Context Snapshot\n\n"
-    context+="Generated: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\n"
-    context+="Repo: $(basename "$repo_path")\n"
-    context+="Scope: $scope\n\n"
-
-    # Section 1: Open GitHub Issues
-    context+="$(build_issues_context "$repo_path" "$scope")\n\n"
+    # Print header directly — no accumulation buffer needed
+    printf '# AI Supervisor Context Snapshot\n\n'
+    printf 'Generated: %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')"
+    printf 'Repo: %s\n' "$(basename "$repo_path")"
+    printf 'Scope: %s\n\n' "$scope"
+
+    build_issues_context "$repo_path" "$scope"
+    printf '\n\n'
     ...
-    printf '%b' "$context"
     return 0
 }
```

Streaming each section directly to stdout avoids the need for `printf '%b'` and the associated escape-sequence re-interpretation. The same refactor can be applied to each section builder.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/supervisor/ai-context.sh around lines 25 - 67, The build_ai_context function currently collects section outputs and then calls printf '%b' "$context", which re-interprets backslash sequences and risks turning literal "\n" or "\t" from issue/PR titles into control chars; change the final output call to printf '%s' "$context" (or echo -n "$context") to print raw newlines, and audit any use of printf '%b' in the individual builders (build_issues_context, build_prs_context, build_todo_context, build_db_context, build_outcomes_context, build_patterns_context, build_memory_context, build_health_context) so they emit real newlines directly rather than relying on '%b' for escaping. Ensure you keep the existing "$(build_...)" substitutions and the "\n\n" separators as literal newlines (no further printf '%b' of the aggregated buffer).
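The difference between the two format specifiers is easy to see with a title containing a literal backslash sequence (the sample string is made up):

```shell
# A title that happens to contain the two characters backslash + n,
# not an actual newline.
title='fix: handle \n in log parser'

# %s prints the string verbatim; the backslash survives.
printf '%s\n' "$title"

# %b re-interprets escapes, splitting the output across two lines.
printf '%b\n' "$title"
```

The first `printf` emits one line; the second emits two, which is exactly the corruption the review warns about when untrusted text passes through `printf '%b'` twice.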
175-214: Same per-iteration jq overhead applies here (~120 subprocesses for 20 PRs).

Same pattern as the issues loop — consider a single-pass `jq` approach for PRs as well.

🤖 Prompt for AI Agents
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/scripts/supervisor-helper.sh:
- Line 689: The new ai-context subcommand (invoking build_ai_context) is missing
from the script header usage block and the show_usage() function; update the
supervisor-helper.sh header usage text to list "ai-context" with a short
description consistent with other entries and add an entry for "ai-context" into
the show_usage() output (inside the show_usage() function) so it appears
alongside the other subcommands and mirrors their formatting and ordering,
referencing the ai-context label and the build_ai_context function name when
describing what it does.
In @.agents/scripts/supervisor/ai-context.sh:
- Around line 145-161: The header "## Recent Pull Requests (last 48h)" is
misleading because build_prs_context uses gh pr list with --limit but no date
filter; update build_prs_context so the gh pr list invocation (the prs_json
variable) includes a --search filter for PRs created or updated in the last 48
hours (use a portable date fallback to produce the ISO date for "48 hours ago"),
or alternatively change the section heading to "Recent Pull Requests (last
$limit)" and keep the original gh command; modify either the prs_json assignment
(the gh pr list call) or the output header string to make them consistent.
- Around line 436-443: The helper lookup in build_patterns_context and
build_memory_context uses "${SCRIPT_DIR}/pattern-tracker-helper.sh" and
"${SCRIPT_DIR}/memory-helper.sh" which resolves to supervisor/ when run
standalone, so add an extra probe that checks the parent scripts directory
(e.g., "${SCRIPT_DIR}/../pattern-tracker-helper.sh" and
"${SCRIPT_DIR}/../memory-helper.sh") before falling back to the deployed path;
update the logic for pattern_helper and memory_helper to test the
parent-location executable and use it if present (keep the current deployed-path
fallback unchanged).
- Around line 569-618: The --scope and --repo branches in the argument parsing
assume an argument exists and under set -u cause an unbound-variable crash;
update the while/case handling for the --scope and --repo cases in ai-context.sh
to validate that a following parameter is present before reading $2 (e.g., test
that $# -ge 2 or use a safe expansion like ${2:-} and fail with a clear
message), then assign to scope and repo_path and shift appropriately; on missing
value print the usage/help text and exit non-zero so users see a friendly error
instead of a cryptic unbound-variable exception.
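Under `set -u`, touching `$2` when no value follows aborts the script with an unbound-variable error; probing with `${2:-}` first turns that crash into a friendly message. A minimal sketch of the recommended shape (function and flag names are illustrative, not the PR's exact code):

```shell
set -u
scope="full"

parse_args() {
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --scope)
                # ${2:-} is safe under set -u even when $2 is unset.
                if [[ -z "${2:-}" ]]; then
                    echo "error: --scope requires a value" >&2
                    return 1
                fi
                scope="$2"
                shift 2
                ;;
            *) shift ;;
        esac
    done
}

parse_args --scope quick
echo "$scope"   # quick
parse_args --scope 2>/dev/null || echo "caught missing value"
```

The final call shows the failure path: instead of the shell dying on `$2: unbound variable`, the caller gets a non-zero return and a clear error on stderr.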
---
Nitpick comments:
In @.agents/scripts/supervisor/ai-context.sh:
- Around line 224-299: In build_todo_context(), the loop that builds the Open
Tasks list declares and sets task_id but never uses it; either remove the unused
task_id variable and its assignment or include it in the output (e.g., emit "-
$task_id: $desc") so the extracted ID is utilized; update the loop that
currently sets local task_id desc and assigns task_id=$(echo "$line" | grep -oE
't[0-9]+' | head -1) accordingly to eliminate the unused variable warning.
- Around line 546-551: Trim and normalize the DB-derived numeric variables
before doing arithmetic: ensure completed_7d and failed_7d are stripped of
leading/trailing whitespace and default to 0 when empty, then compute total_7d
and perform the -gt check and awk percent using those sanitized values (refer to
variables completed_7d, failed_7d, total_7d, and success_rate in ai-context.sh);
update the code that sets completed_7d/failed_7d to sanitize input (e.g., remove
whitespace and fall back to 0) so the later arithmetic and the if [[ "$total_7d"
-gt 0 ]] check never fail on whitespace or empty DB output.
- Around line 102-129: The loop currently spawns multiple jq subprocesses per
issue (num/title/labels/created/comments/assignee) and uses shell date
arithmetic (created_epoch/now_epoch) — replace the per-field jq calls and the
date math with a single jq invocation on issues_json that emits one preformatted
table row per item (compute age_days in jq using (now|todate? or now -
(.createdAt|fromdateiso8601))/86400, handle nulls with // defaults, and join
assignees/labels safely), then append those lines to output; remove the
created_epoch/now_epoch/date -d/-j logic and the repeated jq calls inside the
while loop, and reference issues_json, age_days, title, labels, createdAt,
comments, assignees and output when making the change.
- Around line 25-67: The build_ai_context function currently collects section
outputs and then calls printf '%b' "$context", which re-interprets backslash
sequences and risks turning literal "\n" or "\t" from issue/PR titles into
control chars; change the final output call to printf '%s' "$context" (or echo
-n "$context") to print raw newlines, and audit any use of printf '%b' in the
individual builders (build_issues_context, build_prs_context,
build_todo_context, build_db_context, build_outcomes_context,
build_patterns_context, build_memory_context, build_health_context) so they emit
real newlines directly rather than relying on '%b' for escaping. Ensure you keep
the existing "$(build_...)" substitutions and the "\n\n" separators as literal
newlines (no further printf '%b' of the aggregated buffer).
- Around line 175-214: The per-PR loop repeatedly calls jq for each field (num,
title, state, branch, author, review and status counts) causing many
subprocesses; replace that with a single jq pass that emits one compact,
delimited record per PR (e.g. jq -r '.[] | [(.number//"?"), (.title//"?"),
.state, .headRefName, .author.login, ([.reviews[]? | select(.state=="APPROVED")]
| length), ([.reviews[]? | select(.state=="CHANGES_REQUESTED")] | length),
([.reviews[]? | select(.state=="COMMENTED")] | length), ([.statusCheckRollup[]?
| select(.conclusion=="SUCCESS")] | length), ([.statusCheckRollup[]? |
select(.conclusion=="FAILURE")] | length), ([.statusCheckRollup[]? |
select(.conclusion==null or .conclusion=="PENDING")] | length)] | `@tsv`') then
read each line once to compute reviews_summary and ci_status; update the loop
that currently indexes prs_json/pr_count and sets variables (num, title, state,
branch, author, approved, rejected, commented, success_count, failure_count,
pending_count, reviews_summary, ci_status) to parse the precomputed fields
instead of calling jq repeatedly.
```bash
db) cmd_db "$@" ;;
labels) cmd_labels "$@" ;;
contest) cmd_contest "$@" ;;
ai-context) build_ai_context "${REPO_PATH:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}" "${1:-full}" ;;
```
ai-context subcommand missing from show_usage() and file header.
The new command works correctly but isn't listed in either the header usage block (lines 7–49) or the show_usage() function (lines 290–618). Every other subcommand appears in both. Add it for discoverability when users run supervisor-helper.sh help.
Proposed addition to show_usage()
Add alongside the other commands in show_usage():
```diff
  supervisor-helper.sh labels [--action X] [--model Y] [--json]  Query model usage labels (t1010)
+ supervisor-helper.sh ai-context [full|quick]                   Build AI reasoning context snapshot (t1085)
  supervisor-helper.sh db [sql]                                  Direct SQLite access
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/supervisor-helper.sh at line 689, The new ai-context
subcommand (invoking build_ai_context) is missing from the script header usage
block and the show_usage() function; update the supervisor-helper.sh header
usage text to list "ai-context" with a short description consistent with other
entries and add an entry for "ai-context" into the show_usage() output (inside
the show_usage() function) so it appears alongside the other subcommands and
mirrors their formatting and ordering, referencing the ai-context label and the
build_ai_context function name when describing what it does.
```bash
build_prs_context() {
    local repo_path="$1"
    local scope="$2"
    local limit=20
    [[ "$scope" == "quick" ]] && limit=10

    local output="## Recent Pull Requests (last 48h)\n\n"

    if ! command -v gh &>/dev/null; then
        output+="*gh CLI not available — skipping PR context*\n"
        printf '%b' "$output"
        return 0
    fi

    # Get recent PRs (open + recently closed/merged)
    local prs_json
    prs_json=$(gh pr list --state all --limit "$limit" --json number,title,state,createdAt,mergedAt,closedAt,reviews,statusCheckRollup,headRefName,author 2>/dev/null || echo "[]")
```
Section header says "last 48h" but no date filter is applied.
gh pr list --state all --limit $limit returns the N most recent PRs regardless of age. If the repo is quiet, this could surface PRs from weeks ago; if it's busy, it might miss PRs older than a few hours.
Either filter by date (e.g. --search "created:>=$(date -u -v-48H '+%Y-%m-%d' 2>/dev/null || date -u -d '48 hours ago' '+%Y-%m-%d')") or adjust the section heading to match reality (e.g., "Recent Pull Requests (last $limit)").
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/supervisor/ai-context.sh around lines 145 - 161, The header
"## Recent Pull Requests (last 48h)" is misleading because build_prs_context
uses gh pr list with --limit but no date filter; update build_prs_context so the
gh pr list invocation (the prs_json variable) includes a --search filter for PRs
created or updated in the last 48 hours (use a portable date fallback to produce
the ISO date for "48 hours ago"), or alternatively change the section heading to
"Recent Pull Requests (last $limit)" and keep the original gh command; modify
either the prs_json assignment (the gh pr list call) or the output header string
to make them consistent.
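The portable 48-hours-ago fallback mentioned above can be sketched as follows. BSD/macOS `date` takes `-v-48H` while GNU `date` takes `-d '48 hours ago'`; the `--search` usage below is illustrative, not the PR's final code:

```shell
# Portable "48 hours ago" ISO date: try BSD syntax first, fall back to GNU.
since=$(date -u -v-48H '+%Y-%m-%d' 2>/dev/null || date -u -d '48 hours ago' '+%Y-%m-%d')
echo "$since"

# Hypothetical usage inside build_prs_context:
# prs_json=$(gh pr list --state all --limit "$limit" \
#     --search "created:>=$since" \
#     --json number,title,state,createdAt,reviews,statusCheckRollup 2>/dev/null || echo "[]")
```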
```bash
build_patterns_context() {
    local output="## Pattern Tracker Summary\n\n"

    local pattern_helper="${SCRIPT_DIR}/pattern-tracker-helper.sh"
    if [[ ! -x "$pattern_helper" ]]; then
        # Try deployed location
        pattern_helper="$HOME/.aidevops/agents/scripts/pattern-tracker-helper.sh"
    fi
```
Standalone mode: SCRIPT_DIR points to supervisor/, so sibling helper lookups miss.
When run directly, SCRIPT_DIR resolves to the supervisor/ subdirectory. The first lookup at line 439 (${SCRIPT_DIR}/pattern-tracker-helper.sh) and line 471 (${SCRIPT_DIR}/memory-helper.sh) search inside supervisor/ rather than the parent .agents/scripts/. The deployed-path fallback (line 442, 473) may or may not exist.
Adding a parent-directory probe would improve standalone reliability:
Proposed fix
```diff
 build_patterns_context() {
     local output="## Pattern Tracker Summary\n\n"

     local pattern_helper="${SCRIPT_DIR}/pattern-tracker-helper.sh"
+    if [[ ! -x "$pattern_helper" ]]; then
+        # When run from supervisor/ subdir, try parent
+        pattern_helper="${SCRIPT_DIR}/../pattern-tracker-helper.sh"
+    fi
     if [[ ! -x "$pattern_helper" ]]; then
         # Try deployed location
         pattern_helper="$HOME/.aidevops/agents/scripts/pattern-tracker-helper.sh"
     fi
```

Apply the same pattern for `memory_helper` in `build_memory_context`.
Also applies to: 468-474
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/supervisor/ai-context.sh around lines 436 - 443, The helper
lookup in build_patterns_context and build_memory_context uses
"${SCRIPT_DIR}/pattern-tracker-helper.sh" and "${SCRIPT_DIR}/memory-helper.sh"
which resolves to supervisor/ when run standalone, so add an extra probe that
checks the parent scripts directory (e.g.,
"${SCRIPT_DIR}/../pattern-tracker-helper.sh" and
"${SCRIPT_DIR}/../memory-helper.sh") before falling back to the deployed path;
update the logic for pattern_helper and memory_helper to test the
parent-location executable and use it if present (keep the current deployed-path
fallback unchanged).
```bash
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    set -euo pipefail
    # When run standalone, source common helpers
    SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    # shellcheck source=_common.sh
    source "$SCRIPT_DIR/_common.sh"

    # Colour codes (may not be set when run standalone)
    BLUE="${BLUE:-\033[0;34m}"
    GREEN="${GREEN:-\033[0;32m}"
    YELLOW="${YELLOW:-\033[1;33m}"
    RED="${RED:-\033[0;31m}"
    NC="${NC:-\033[0m}"

    # Default paths
    SUPERVISOR_DB="${SUPERVISOR_DB:-$HOME/.aidevops/.agent-workspace/supervisor/supervisor.db}"
    SUPERVISOR_LOG="${SUPERVISOR_LOG:-$HOME/.aidevops/.agent-workspace/supervisor/cron.log}"
    REPO_PATH="${REPO_PATH:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}"

    # Parse args
    scope="full"
    repo_path="$REPO_PATH"

    while [[ $# -gt 0 ]]; do
        case "$1" in
            --scope)
                scope="$2"
                shift 2
                ;;
            --repo)
                repo_path="$2"
                shift 2
                ;;
            --help | -h)
                echo "Usage: ai-context.sh [--scope full|quick] [--repo /path]"
                echo ""
                echo "Build AI supervisor context document for reasoning engine."
                echo ""
                echo "Options:"
                echo "  --scope full|quick   Context depth (default: full)"
                echo "  --repo /path         Repository path (default: git root)"
                echo "  --help               Show this help"
                exit 0
                ;;
            *)
                echo "Unknown option: $1" >&2
                exit 1
                ;;
        esac
    done
```
CLI --scope/--repo crash with a cryptic unbound-variable error when the value is omitted.
Running ai-context.sh --scope (no argument) triggers set -u on $2 at line 595 with a cryptic "unbound variable" message instead of a user-friendly error. As per coding guidelines, automation scripts should have clear logging and feedback.
Add argument guards
```diff
     --scope)
+        if [[ $# -lt 2 ]]; then
+            echo "Error: --scope requires a value (full|quick)" >&2
+            exit 1
+        fi
         scope="$2"
         shift 2
         ;;
     --repo)
+        if [[ $# -lt 2 ]]; then
+            echo "Error: --repo requires a path" >&2
+            exit 1
+        fi
         repo_path="$2"
         shift 2
         ;;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    set -euo pipefail
    # When run standalone, source common helpers
    SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    # shellcheck source=_common.sh
    source "$SCRIPT_DIR/_common.sh"

    # Colour codes (may not be set when run standalone)
    BLUE="${BLUE:-\033[0;34m}"
    GREEN="${GREEN:-\033[0;32m}"
    YELLOW="${YELLOW:-\033[1;33m}"
    RED="${RED:-\033[0;31m}"
    NC="${NC:-\033[0m}"

    # Default paths
    SUPERVISOR_DB="${SUPERVISOR_DB:-$HOME/.aidevops/.agent-workspace/supervisor/supervisor.db}"
    SUPERVISOR_LOG="${SUPERVISOR_LOG:-$HOME/.aidevops/.agent-workspace/supervisor/cron.log}"
    REPO_PATH="${REPO_PATH:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}"

    # Parse args
    scope="full"
    repo_path="$REPO_PATH"

    while [[ $# -gt 0 ]]; do
        case "$1" in
            --scope)
                if [[ $# -lt 2 ]]; then
                    echo "Error: --scope requires a value (full|quick)" >&2
                    exit 1
                fi
                scope="$2"
                shift 2
                ;;
            --repo)
                if [[ $# -lt 2 ]]; then
                    echo "Error: --repo requires a path" >&2
                    exit 1
                fi
                repo_path="$2"
                shift 2
                ;;
            --help | -h)
                echo "Usage: ai-context.sh [--scope full|quick] [--repo /path]"
                echo ""
                echo "Build AI supervisor context document for reasoning engine."
                echo ""
                echo "Options:"
                echo "  --scope full|quick   Context depth (default: full)"
                echo "  --repo /path         Repository path (default: git root)"
                echo "  --help               Show this help"
                exit 0
                ;;
            *)
                echo "Unknown option: $1" >&2
                exit 1
                ;;
        esac
    done
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/supervisor/ai-context.sh around lines 569 - 618, The --scope
and --repo branches in the argument parsing assume an argument exists and under
set -u cause an unbound-variable crash; update the while/case handling for the
--scope and --repo cases in ai-context.sh to validate that a following parameter
is present before reading $2 (e.g., test that $# -ge 2 or use a safe expansion
like ${2:-} and fail with a clear message), then assign to scope and repo_path
and shift appropriately; on missing value print the usage/help text and exit
non-zero so users see a friendly error instead of a cryptic unbound-variable
exception.



Summary
- `supervisor/ai-context.sh` — assembles comprehensive project snapshot for AI reasoning engine
- `--scope full|quick` for context depth control (quick ~4K tokens, full ~5K tokens)
- Standalone: `bash ai-context.sh --scope quick --repo /path`
- CLI: `supervisor-helper.sh ai-context [full|quick]`

Context
This is Phase 1 of the Supervisor Intelligence Upgrade (t1085). The context builder provides the foundation that the AI reasoning engine (t1085.2) will use to make decisions about issue triage, task creation, and project management.
Testing
TODO Reference
Summary by CodeRabbit
Release Notes
- `ai-context` command to generate comprehensive project and supervisor state snapshots, including GitHub issues, recent pull requests, task status, worker outcomes, and queue health metrics. Supports full and quick scope modes for flexible reporting.