Skip to content

fix: revert 43 falsely marked-complete tasks in TODO.md#615

Closed
alex-solovyev wants to merge 4 commits intomainfrom
bugfix/supervisor-db-safety
Closed

fix: revert 43 falsely marked-complete tasks in TODO.md#615
alex-solovyev wants to merge 4 commits intomainfrom
bugfix/supervisor-db-safety

Conversation

@alex-solovyev
Copy link
Collaborator

@alex-solovyev alex-solovyev commented Feb 8, 2026

Summary

  • Reverts 43 tasks that were incorrectly marked as completed:2026-02-08 without actual work being done on them
  • Only t162 (supervisor DB safety) and t073 (document extraction helper) remain marked complete — those had real code written
  • Other tasks may have been completed by worker branches but need proper verification before marking done

Summary by CodeRabbit

  • New Features

    • Document extraction with privacy-preserving PII scanning and batch processing
    • Cloud and local voice agent support
  • Documentation

    • Added comprehensive SEO content workflow guide
    • Updated Uncloud deployment documentation with simplified setup
    • New cloud voice agents documentation
    • Enhanced document extraction tool reference
  • Infrastructure

    • Improved database migration safety with backup and data preservation

…ckup before ALTER TABLE (t162)

- t128.8 migration now detects existing optional columns (issue_url,
  diagnostic_of, triage_result) and includes them in INSERT to prevent
  data loss when migrations run out of order
- New table schema includes all columns upfront (no separate ALTER needed)
- Added backup_db() call before ALTER TABLE migration group
- Unquoted heredoc delimiter allows variable expansion for dynamic SQL
- Marked 18 already-implemented tasks as complete in TODO.md:
  t158, t160, t162 (supervisor bugs), t071, t072 (voice),
  t074-t078 (content/tools), t095-t098, t100-t101 (mobile/UX),
  t109 (parallel agents), t112, t114 (voice shortcuts),
  t120, t124 (research/security)
…w subagents (t073, t012, t016, t017)

- document-extraction-helper.sh: CLI for Docling+ExtractThinker+Presidio
  pipeline with DocStrange alternative. Commands: extract, parse, pii-scan,
  batch, status, install. Privacy modes: local/edge/cloud/none.
- tools/deployment/uncloud.md: Multi-machine Docker deployment subagent
- seo/seo-content-workflow.md: SEO content creation pipeline with
  readability scoring, keyword density, SEO quality score (0-100),
  search intent classification
- Closes t073 (document extraction), t012 (OCR invoice), t016 (uncloud),
  t017 (SEO machine)
…080, t081)

- cloud-voice-agents.md: Covers OpenAI Realtime, AWS Nova Sonic, Gemini
  Multimodal Live, Ultravox, NVIDIA Nemotron, MiniCPM-o (local).
  Provider comparison table, Pipecat integration examples, use case matrix.
- t081 covered by existing pipecat-opencode.md (Soniox+Cartesia+LLM pipeline)
Reverted tasks marked completed:2026-02-08 that were not actually
worked on in this session. Only t162 and t073 remain as completed
(actual code written). Other tasks may have been done by workers
in feature branches but should be verified and marked by a reviewer.
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 8, 2026

Walkthrough

This PR introduces a new document extraction helper script with privacy-preserving processing pipelines, enhances supervisor migration logic for dynamic schema evolution, and adds comprehensive documentation for SEO workflows and voice agents. Documentation is streamlined while task tracking is updated with completion statuses.

Changes

Cohort / File(s) Summary
Document Processing Pipeline
.agents/scripts/document-extraction-helper.sh
New 558-line Bash script implementing multi-command document processing: extract (DocLing/ExtractThinker/Presidio or DocStrange), parse (Markdown/JSON/HTML conversion), pii-scan (PII detection), batch (directory processing), plus status, install, and help commands. Includes environment setup, virtualenv management, and component installation with comprehensive error handling.
Schema Migration Enhancement
.agents/scripts/supervisor-helper.sh
Dynamic data-preserving migration logic for tasks and batches tables. Detects existing columns (issue_url, diagnostic_of, triage_result, release_on_complete) at runtime; conditionally adds them with indices. Adds pre-migration backup safety checks and builds dynamic INSERT/SELECT column lists to preserve data across partial schema states.
SEO & Voice Agent Documentation
.agents/seo/seo-content-workflow.md, .agents/tools/voice/cloud-voice-agents.md
New documentation: SEO workflow defines 5-stage pipeline (Research/Write/Optimize/Analyze/Rewrite) with context systems, tool integrations, and scoring frameworks. Voice agents doc covers S2S providers (OpenAI, AWS, Gemini, Ultravox, etc.) via Pipecat with setup examples and capabilities.
Documentation Refactoring & Updates
.agents/tools/deployment/uncloud.md, .agents/tools/document/document-extraction.md
Uncloud doc condensed from verbose provider guide to feature-forward reference (key features, CLI commands, setup). Document extraction doc updated with CLI script naming and accepted commands.
Task Status Tracking
TODO.md
Multiple task status transitions (backlog ↔ completed), completion date annotations, re-scoped dependencies, and implementation notes (e.g., t158, t160, t162 marked "Already implemented"; Uncloud integration and Gopass changes reflected).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Poem

📄 Scripts extract, parse, and scan with care,
Privacy shielded through Presidio's snare,
Schemas evolve without losing their grace,
While SEO and voices find their rightful place! 🎤✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly and concisely describes the main change: reverting 43 falsely marked-complete tasks in TODO.md, which aligns with the substantial TODO.md updates in the changeset.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch bugfix/supervisor-db-safety

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Copy link

Summary of Changes

Hello @alex-solovyev, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on rectifying the TODO.md task list by unmarking 43 tasks that were incorrectly flagged as complete. Concurrently, it finalizes and marks as complete two significant tasks: enhancing supervisor database migration safety and introducing a new helper for privacy-preserving document extraction. Additionally, new documentation for SEO content workflows and cloud voice agents has been added, and the Uncloud deployment guide has been streamlined.

Highlights

  • TODO.md Task Status Correction: Reverted the 'completed' status for 43 tasks in TODO.md that were prematurely marked, ensuring the task list accurately reflects ongoing work.
  • Supervisor Database Migration Safety (t162): Implemented robust database migration logic in supervisor-helper.sh to dynamically detect and preserve existing columns during schema changes, preventing data loss and marking task t162 as complete.
  • Document Extraction Helper (t073): Introduced a new document-extraction-helper.sh script for privacy-preserving document extraction, supporting Docling, ExtractThinker, and Presidio, and marking task t073 as complete. Corresponding documentation in document-extraction.md was updated.
  • New SEO Content Workflow Documentation: Added comprehensive documentation for an SEO content creation workflow (seo-content-workflow.md), outlining research, writing, optimization, analysis, and rewriting phases, along with detailed analysis functions.
  • Simplified Uncloud Deployment Guide: Refactored and simplified the Uncloud deployment guide (uncloud.md), focusing on key features, CLI commands, and setup for multi-machine Docker deployments.
  • Cloud Voice Agents Documentation: Added new documentation for various cloud and local speech-to-speech (S2S) voice agent providers via Pipecat (cloud-voice-agents.md), including provider comparisons and code examples.
Changelog
  • .agents/scripts/document-extraction-helper.sh
    • Added a new Bash script providing a CLI for privacy-preserving document extraction, supporting Docling, ExtractThinker, and Presidio, with various commands like extract, parse, pii-scan, batch, status, and install.
  • .agents/scripts/supervisor-helper.sh
    • Modified the ensure_db function to dynamically detect and include optional columns (issue_url, diagnostic_of, triage_result) during tasks table schema migrations, preventing data loss.
    • Introduced a conditional backup mechanism (backup_db) before ALTER TABLE operations to enhance database safety during migrations.
  • .agents/seo/seo-content-workflow.md
    • Added new documentation detailing an end-to-end SEO content creation workflow, including research, writing, optimization, analysis, and rewriting stages.
    • Included sections on readability scoring, keyword density, SEO quality scoring, and search intent classification.
  • .agents/tools/deployment/uncloud.md
    • Significantly simplified the Uncloud deployment guide, removing verbose sections on configuration, installation, and troubleshooting.
    • Focused the content on a quick reference, key features, CLI commands, setup instructions, and an example Docker Compose file.
  • .agents/tools/document/document-extraction.md
    • Updated the 'Status' field to reflect the implementation of the CLI helper script, now pointing to document-extraction-helper.sh.
  • .agents/tools/voice/cloud-voice-agents.md
    • Added new documentation for integrating various cloud and local speech-to-speech (S2S) voice agent providers (e.g., OpenAI Realtime, AWS Nova Sonic, Gemini Multimodal Live) using the Pipecat framework.
  • TODO.md
    • Reverted the 'completed' status for 43 tasks that were incorrectly marked as done.
    • Marked task t162 (supervisor DB safety) as complete, with updated notes reflecting the implemented dynamic column migration.
    • Marked task t073 (document extraction helper) as complete, with updated notes detailing the new helper script and subagents.
    • Updated notes for several tasks (t158, t160, t012, t013, t016, t017, t112, t114, t120) to clarify their current status or implementation details.
Activity
  • The pull request author, alex-solovyev, initiated a significant cleanup of the TODO.md file, reverting the 'completed' status for 43 tasks.
  • Two tasks, t162 (supervisor DB safety) and t073 (document extraction helper), were confirmed as genuinely completed and their status updated accordingly.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request primarily reverts a large number of tasks in TODO.md that were incorrectly marked as complete, which aligns with the PR's title and description. It also includes the implementation for t162 (supervisor DB safety) and t073 (document extraction helper), which are correctly marked as complete.

The changes to supervisor-helper.sh for database migration safety are robust and well-implemented.

I've reviewed the new document-extraction-helper.sh script and found a few areas for improvement:

  • A bug in the install all command where one of the components is missed.
  • A potential security vulnerability and robustness issue in how command-line arguments are processed to generate Python code.
  • A minor usability issue in the batch processing command where error messages are suppressed, making debugging difficult.

I've added specific comments with suggestions for these points. The other new documentation files look good.

Comment on lines +351 to +374
local fields_arg=""
if [[ -n "$fields" ]]; then
fields_arg="specified_fields=[$(echo "$fields" | sed "s/[^,]*/'&'/g")]"
fi

local result
result=$(python3 -c "
from docstrange import DocumentExtractor
import json

extractor = DocumentExtractor()
result = extractor.extract('$input_file')

if '$fields':
data = result.extract_data($fields_arg)
elif '$schema':
import json as j
with open('$schema') as f:
schema = j.load(f)
data = result.extract_data(json_schema=schema)
else:
data = result.extract_data()

print(json.dumps(data, indent=2, default=str))

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

Constructing Python code by manipulating strings in shell is brittle and a potential security risk (code injection). A safer and more robust approach is to pass the raw comma-separated fields string into the Python script and parse it there. This avoids issues with special characters in field names and is more readable.

    local result
    result=$(python3 -c "
from docstrange import DocumentExtractor
import json

extractor = DocumentExtractor()
result = extractor.extract('$input_file')

fields_str = '$fields'
schema_file = '$schema'

if fields_str:
    fields_list = [f.strip() for f in fields_str.split(',')]
    data = result.extract_data(specified_fields=fields_list)
elif schema_file:
    import json as j
    with open(schema_file) as f:
        schema = j.load(f)
    data = result.extract_data(json_schema=schema)
else:
    data = result.extract_data()

print(json.dumps(data, indent=2, default=str))
"

Comment on lines +81 to +83
check_deps docling || true
check_deps extractthinker || true
check_deps presidio || true

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The all case in check_deps is missing the docstrange component. This means install all will not attempt to install it, which is likely not the intended behavior.

Suggested change
check_deps docling || true
check_deps extractthinker || true
check_deps presidio || true
check_deps docling || true
check_deps extractthinker || true
check_deps presidio || true
check_deps docstrange || true

basename=$(basename "$file" | sed 's/\.[^.]*$//')
print_info "Processing: $file"

if cmd_extract "$file" --fields "${fields:-}" --output-file "$output_dir/${basename}.json" 2>/dev/null; then

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Error output from cmd_extract is being suppressed with 2>/dev/null. This will make it very difficult to debug failures during batch processing, as the user will only see 'Failed: ' without any specific error message. It's better to let the error messages print to stderr.

Suggested change
if cmd_extract "$file" --fields "${fields:-}" --output-file "$output_dir/${basename}.json" 2>/dev/null; then
if cmd_extract "$file" --fields "${fields:-}" --output-file "$output_dir/${basename}.json"; then

@alex-solovyev
Copy link
Collaborator Author

Closing: branch has stale commits from already-merged PR #598. Will recreate with clean diff.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In @.agents/scripts/document-extraction-helper.sh:
- Around line 170-187: The Python -c blocks interpolate unsanitized shell
variables (e.g., converter.convert('$input_file')) which allows command
injection and breaks on single quotes; update each invocation (cmd_parse,
cmd_pii_scan, _extract_docstrange, _extract_pipeline and any python3 -c usage)
to pass data via environment variables (e.g., DOC_INPUT_FILE, DOC_OUTPUT_FORMAT,
etc.) or use a here-doc that does not perform shell interpolation, and then read
those values inside Python using os.environ; ensure every occurrence of direct
'$variable' interpolation is removed and replaced with env var reads to safely
transfer input_file, output_format, language, fields, schema, llm, privacy, etc.

In @.agents/scripts/supervisor-helper.sh:
- Around line 638-650: The migration backup gating currently checks
has_max_load, has_release_on_complete, has_diagnostic_of and has_issue_url but
omits the triage_result column, so add a new variable (e.g., has_triage_result)
populated with the same db(...) pragma_table_info check against the tasks table
for 'triage_result' and include it in the needs_alter_migration conditional
alongside has_max_load/has_release_on_complete/has_diagnostic_of/has_issue_url
so backup_db("pre-migrate-alter-columns") is invoked whenever triage_result is
missing; use the same patterns and error fallback (|| echo "0") and reference
SUPERVISOR_DB, db, needs_alter_migration, and backup_db when making the change.

In @.agents/tools/deployment/uncloud.md:
- Line 16: Replace the unsafe piped installer string "curl -fsSL
https://get.uncloud.dev | sh" in the Quick Reference with the safer
download-then-execute pattern used in the Setup section (or at minimum append a
clear note showing the safer alternative), ensuring the Quick Reference entry
matches the Setup section's approach; update the text that currently says the
piped command so it either shows the download-then-execute steps or adds a
parenthetical warning and the safer command, keeping the same meaning but
removing the direct "curl | sh" invocation.
- Around line 82-111: The deployment example uses Docker Swarm fields (deploy,
deploy.placement.constraints, node.labels) which Uncloud ignores; replace the
deploy blocks for services (e.g., services:web, services:api, services:db) with
Uncloud's x-machines extension by removing deploy (including
replicas/placement/constraints) and adding an x-machines list for each service
(e.g., x-machines: - prod-1) to target specific machines added via uc machine
add; ensure replicas and swarm-specific keys are removed since Uncloud uses
x-machines for placement.

In @.agents/tools/voice/cloud-voice-agents.md:
- Around line 25-33: The table currently contains hardcoded, stale metrics and
exact model version names (e.g., entries for "**OpenAI Realtime**", "**MiniCPM-o
4.5**", "Pipecat" references and specific model strings like
`gpt-4o-realtime-preview`, `amazon.nova-sonic-v1:0`, `gemini-2.0-flash-exp`) —
replace any exact star counts with approximate ranges (e.g., "~10k", "~20k+") or
omit them, remove or generalize exact model-version strings and instead point
readers to the Pipecat/docs links already referenced elsewhere in the file, and
where appropriate replace static repo star counts with upstream repo links; keep
qualitative metrics (latency, cost symbols, quality) as-is and ensure all
modifications are applied to the rows and lines called out (including the
MiniCPM-o and Pipecat mentions).

In `@TODO.md`:
- Around line 58-59: The markdown MD038 warning is caused by a code span with
surrounding spaces in the task note (` -- description`); update the TODO.md
entry to remove the leading/trailing space inside the code span so it reads `--
description` (or move backticks to exclude the spaces), and verify related
mentions in build_dispatch_cmd, full-loop.md Step 0, and full-loop-helper.sh
still match the trimmed token to avoid mismatched documentation references.
- Line 212: Two tasks share duplicate IDs (t131 and t132) which breaks the TODO
spec that IDs are stable; locate the duplicate entries (the "gopass Integration
& Credentials Rename" task and the "Create tools/vision/ category for visual AI
models" task referenced as t131, and the two tasks using t132), assign new
unique IDs for the later-occurring tasks, update the task ID strings in the
TODO.md entries and any cross-references (e.g.,
todo/tasks/prd-gopass-credentials.md, todo/PLANS.md anchors, and any
issue-sync-helper or supervisor DB references), and run the repo-wide search for
"t131" and "t132" to replace only the correct occurrences so tooling (supervisor
DB, issue-sync-helper, beads-sync) points to the intended tasks.
🧹 Nitpick comments (5)
.agents/tools/voice/cloud-voice-agents.md (1)

39-180: Replace non-authoritative code snippets with file:line references or cite official docs.

This doc includes multiple code examples, but there’s no authoritative source citation. The guideline for .agents/**/*.md asks for code examples only when authoritative; otherwise reference files/lines. Consider replacing these snippets with file:line pointers or citing the official Pipecat docs they’re drawn from.
As per coding guidelines: "Use code examples only when authoritative, otherwise use file:line references".

.agents/seo/seo-content-workflow.md (1)

32-122: Convert illustrative code blocks to prose or file:line references.

The /research and /write examples plus the “python” blocks are illustrative and not tied to an authoritative source. The guideline prefers file:line references unless the example is authoritative.
As per coding guidelines: "Use code examples only when authoritative, otherwise use file:line references".

.agents/scripts/document-extraction-helper.sh (3)

350-354: Use bash parameter expansion instead of sed for field formatting.

Per the Codacy hint, replace the sed call with bash string manipulation:

Proposed fix
     local fields_arg=""
     if [[ -n "$fields" ]]; then
-        fields_arg="specified_fields=[$(echo "$fields" | sed "s/[^,]*/'&'/g")]"
+        local quoted_fields="'${fields//,/\',\'}'"
+        fields_arg="specified_fields=[${quoted_fields}]"
     fi

Note: This also has the injection issue flagged above — field values containing quotes will break the Python code. The env-var approach resolves both.


536-556: Redundant return $? on lines 144, 248, and 555.

After case ... esac or an if/else block, the function already returns the exit code of the last executed command. The explicit return $? is a no-op. Not harmful, but unnecessary noise.


282-293: Batch command suppresses extraction errors with 2>/dev/null.

Line 287 redirects stderr from cmd_extract to /dev/null, which hides the root cause of failures. The failed counter increments but the user gets no diagnostic info beyond "Failed: $file". Consider logging stderr to a file in $output_dir instead.

Proposed fix
-        if cmd_extract "$file" --fields "${fields:-}" --output-file "$output_dir/${basename}.json" 2>/dev/null; then
+        if cmd_extract "$file" --fields "${fields:-}" --output-file "$output_dir/${basename}.json" 2>"$output_dir/${basename}.err"; then
             count=$((count + 1))
+            rm -f "$output_dir/${basename}.err"
         else
             failed=$((failed + 1))
-            print_warning "Failed: $file"
+            print_warning "Failed: $file (see $output_dir/${basename}.err)"
         fi

Comment on lines +170 to +187
ensure_venv || return 1
local result
result=$(python3 -c "
from docling.document_converter import DocumentConverter
converter = DocumentConverter()
result = converter.convert('$input_file')
fmt = '$output_format'
if fmt == 'json':
import json
print(json.dumps(result.document.export_to_dict(), indent=2, default=str))
elif fmt == 'html':
print(result.document.export_to_html())
else:
print(result.document.export_to_markdown())
" 2>&1) || {
print_error "Docling parsing failed: $result"
return 1
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Command injection via unsanitized shell variables in Python string interpolation.

Shell variables like $input_file, $output_format, $language, $fields, $schema, $llm, and $privacy are interpolated directly into Python code strings using single-quote substitution (e.g., converter.convert('$input_file')). A filename containing a single quote (e.g., O'Brien's Report.pdf) will break the Python syntax. A maliciously crafted filename could execute arbitrary Python code.

This pattern recurs in cmd_parse (lines 172-184), cmd_pii_scan (lines 222-246), _extract_docstrange (lines 357-375), and _extract_pipeline (lines 403-448).

Fix by passing values via environment variables and reading them with os.environ in Python:

Example fix for cmd_parse
     result=$(python3 -c "
+import os
 from docling.document_converter import DocumentConverter
 converter = DocumentConverter()
-result = converter.convert('$input_file')
-fmt = '$output_format'
+result = converter.convert(os.environ['DOC_INPUT_FILE'])
+fmt = os.environ['DOC_OUTPUT_FORMAT']
 if fmt == 'json':
     import json
     print(json.dumps(result.document.export_to_dict(), indent=2, default=str))
 elif fmt == 'html':
     print(result.document.export_to_html())
 else:
     print(result.document.export_to_markdown())
-" 2>&1) || {
+" DOC_INPUT_FILE="$input_file" DOC_OUTPUT_FORMAT="$output_format" 2>&1) || {

Alternatively, prefix the python3 call:

DOC_INPUT_FILE="$input_file" DOC_OUTPUT_FORMAT="$output_format" python3 -c "..."

Apply the same pattern to all Python -c blocks in this script.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
ensure_venv || return 1
local result
result=$(python3 -c "
from docling.document_converter import DocumentConverter
converter = DocumentConverter()
result = converter.convert('$input_file')
fmt = '$output_format'
if fmt == 'json':
import json
print(json.dumps(result.document.export_to_dict(), indent=2, default=str))
elif fmt == 'html':
print(result.document.export_to_html())
else:
print(result.document.export_to_markdown())
" 2>&1) || {
print_error "Docling parsing failed: $result"
return 1
}
ensure_venv || return 1
local result
result=$(DOC_INPUT_FILE="$input_file" DOC_OUTPUT_FORMAT="$output_format" python3 -c "
import os
from docling.document_converter import DocumentConverter
converter = DocumentConverter()
result = converter.convert(os.environ['DOC_INPUT_FILE'])
fmt = os.environ['DOC_OUTPUT_FORMAT']
if fmt == 'json':
import json
print(json.dumps(result.document.export_to_dict(), indent=2, default=str))
elif fmt == 'html':
print(result.document.export_to_html())
else:
print(result.document.export_to_markdown())
" 2>&1) || {
print_error "Docling parsing failed: $result"
return 1
}
🤖 Prompt for AI Agents
In @.agents/scripts/document-extraction-helper.sh around lines 170 - 187, The
Python -c blocks interpolate unsanitized shell variables (e.g.,
converter.convert('$input_file')) which allows command injection and breaks on
single quotes; update each invocation (cmd_parse, cmd_pii_scan,
_extract_docstrange, _extract_pipeline and any python3 -c usage) to pass data
via environment variables (e.g., DOC_INPUT_FILE, DOC_OUTPUT_FORMAT, etc.) or use
a here-doc that does not perform shell interpolation, and then read those values
inside Python using os.environ; ensure every occurrence of direct '$variable'
interpolation is removed and replaced with env var reads to safely transfer
input_file, output_format, language, fields, schema, llm, privacy, etc.

Comment on lines +638 to +650
# Backup before ALTER TABLE migrations if any are needed (t162)
local needs_alter_migration=false
local has_max_load has_release_on_complete has_diagnostic_of has_issue_url
has_max_load=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='max_load_factor';" 2>/dev/null || echo "0")
has_release_on_complete=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='release_on_complete';" 2>/dev/null || echo "0")
has_diagnostic_of=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='diagnostic_of';" 2>/dev/null || echo "0")
has_issue_url=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='issue_url';" 2>/dev/null || echo "0")
if [[ "$has_max_load" -eq 0 || "$has_release_on_complete" -eq 0 || "$has_diagnostic_of" -eq 0 || "$has_issue_url" -eq 0 ]]; then
needs_alter_migration=true
fi
if [[ "$needs_alter_migration" == "true" ]]; then
backup_db "pre-migrate-alter-columns" >/dev/null 2>&1 || log_warn "Backup failed, proceeding with migrations"
fi
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Backup gating misses triage_result column.

If triage_result is the only missing column, ALTER runs without the pre-migration backup. Include it in the “needs_alter_migration” check to keep safety consistent.

🔧 Suggested fix
-    local has_max_load has_release_on_complete has_diagnostic_of has_issue_url
+    local has_max_load has_release_on_complete has_diagnostic_of has_issue_url has_triage_result
     has_max_load=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='max_load_factor';" 2>/dev/null || echo "0")
     has_release_on_complete=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='release_on_complete';" 2>/dev/null || echo "0")
     has_diagnostic_of=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='diagnostic_of';" 2>/dev/null || echo "0")
     has_issue_url=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='issue_url';" 2>/dev/null || echo "0")
-    if [[ "$has_max_load" -eq 0 || "$has_release_on_complete" -eq 0 || "$has_diagnostic_of" -eq 0 || "$has_issue_url" -eq 0 ]]; then
+    has_triage_result=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='triage_result';" 2>/dev/null || echo "0")
+    if [[ "$has_max_load" -eq 0 || "$has_release_on_complete" -eq 0 || "$has_diagnostic_of" -eq 0 || "$has_issue_url" -eq 0 || "$has_triage_result" -eq 0 ]]; then
         needs_alter_migration=true
     fi

As per coding guidelines, Automation scripts - focus on: Reliability and robustness.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Backup before ALTER TABLE migrations if any are needed (t162)
local needs_alter_migration=false
local has_max_load has_release_on_complete has_diagnostic_of has_issue_url
has_max_load=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='max_load_factor';" 2>/dev/null || echo "0")
has_release_on_complete=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='release_on_complete';" 2>/dev/null || echo "0")
has_diagnostic_of=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='diagnostic_of';" 2>/dev/null || echo "0")
has_issue_url=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='issue_url';" 2>/dev/null || echo "0")
if [[ "$has_max_load" -eq 0 || "$has_release_on_complete" -eq 0 || "$has_diagnostic_of" -eq 0 || "$has_issue_url" -eq 0 ]]; then
needs_alter_migration=true
fi
if [[ "$needs_alter_migration" == "true" ]]; then
backup_db "pre-migrate-alter-columns" >/dev/null 2>&1 || log_warn "Backup failed, proceeding with migrations"
fi
# Backup before ALTER TABLE migrations if any are needed (t162)
local needs_alter_migration=false
local has_max_load has_release_on_complete has_diagnostic_of has_issue_url has_triage_result
has_max_load=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='max_load_factor';" 2>/dev/null || echo "0")
has_release_on_complete=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('batches') WHERE name='release_on_complete';" 2>/dev/null || echo "0")
has_diagnostic_of=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='diagnostic_of';" 2>/dev/null || echo "0")
has_issue_url=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='issue_url';" 2>/dev/null || echo "0")
has_triage_result=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM pragma_table_info('tasks') WHERE name='triage_result';" 2>/dev/null || echo "0")
if [[ "$has_max_load" -eq 0 || "$has_release_on_complete" -eq 0 || "$has_diagnostic_of" -eq 0 || "$has_issue_url" -eq 0 || "$has_triage_result" -eq 0 ]]; then
needs_alter_migration=true
fi
if [[ "$needs_alter_migration" == "true" ]]; then
backup_db "pre-migrate-alter-columns" >/dev/null 2>&1 || log_warn "Backup failed, proceeding with migrations"
fi
🤖 Prompt for AI Agents
In @.agents/scripts/supervisor-helper.sh around lines 638 - 650, The migration
backup gating currently checks has_max_load, has_release_on_complete,
has_diagnostic_of and has_issue_url but omits the triage_result column, so add a
new variable (e.g., has_triage_result) populated with the same db(...)
pragma_table_info check against the tasks table for 'triage_result' and include
it in the needs_alter_migration conditional alongside
has_max_load/has_release_on_complete/has_diagnostic_of/has_issue_url so
backup_db("pre-migrate-alter-columns") is invoked whenever triage_result is
missing; use the same patterns and error fallback (|| echo "0") and reference
SUPERVISOR_DB, db, needs_alter_migration, and backup_db when making the change.

- **Commands**: `status|machines|services|deploy|run|scale|logs|exec|inspect|volumes|dns|caddy|help`
- **Usage**: `./.agents/scripts/uncloud-helper.sh [command] [args]`
- **Purpose**: Multi-machine Docker deployments without Kubernetes
- **Install**: `curl -fsSL https://get.uncloud.dev | sh` (Apache 2.0)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Inconsistent install pattern: Quick Reference uses curl|sh but Setup section correctly uses download-then-execute.

Line 16 shows curl -fsSL https://get.uncloud.dev | sh (pipe-to-shell), while lines 69-70 correctly use the safer download-then-execute pattern. The project already hardened against curl|sh in t104. Align the Quick Reference with the safe pattern, or at minimum note the safe alternative.

Proposed fix for line 16
-- **Install**: `curl -fsSL https://get.uncloud.dev | sh` (Apache 2.0)
+- **Install**: Download from <https://get.uncloud.dev> (Apache 2.0)

Also applies to: 68-71

🤖 Prompt for AI Agents
In @.agents/tools/deployment/uncloud.md at line 16, Replace the unsafe piped
installer string "curl -fsSL https://get.uncloud.dev | sh" in the Quick
Reference with the safer download-then-execute pattern used in the Setup section
(or at minimum append a clear note showing the safer alternative), ensuring the
Quick Reference entry matches the Setup section's approach; update the text that
currently says the piped command so it either shows the download-then-execute
steps or adds a parenthetical warning and the safer command, keeping the same
meaning but removing the direct "curl | sh" invocation.

Comment on lines 82 to 111
services:
web:
image: my-app:latest
image: nginx:alpine
ports:
- "app.example.com:8000/https"
- "80:80"
deploy:
replicas: 2
update_config:
parallelism: 1
order: start-first
placement:
constraints:
- node.labels.role == web

api:
image: myapp/api:latest
environment:
- DATABASE_URL=postgres://db:5432/app
deploy:
replicas: 3

db:
image: postgres:16
volumes:
- app-data:/data
- pgdata:/var/lib/postgresql/data
deploy:
placement:
constraints:
- node.labels.role == db

volumes:
app-data:
```

### Port Publishing Formats

```yaml
ports:
# HTTPS with custom domain
- "app.example.com:8000/https"
# HTTP only
- "app.example.com:8000/http"
# TCP port (host:container)
- "8080:8000/tcp"
# UDP port
- "53:53/udp"
```

## Helper Script

The `uncloud-helper.sh` script wraps common `uc` CLI operations:

```bash
# Check cluster status
./.agents/scripts/uncloud-helper.sh status

# List machines
./.agents/scripts/uncloud-helper.sh machines

# List services
./.agents/scripts/uncloud-helper.sh services

# Deploy from compose.yaml
./.agents/scripts/uncloud-helper.sh deploy

# Run a service
./.agents/scripts/uncloud-helper.sh run my-app:latest -p app.example.com:8000/https

# View logs
./.agents/scripts/uncloud-helper.sh logs my-service

# Scale a service
./.agents/scripts/uncloud-helper.sh scale my-service 3
pgdata:
```
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's read the uncloud.md file to understand context around lines 82-111
wc -l .agents/tools/deployment/uncloud.md

Repository: marcusquinn/aidevops

Length of output: 103


🏁 Script executed:

# Read the file with context around the specified lines
cat -n .agents/tools/deployment/uncloud.md | sed -n '70,125p'

Repository: marcusquinn/aidevops

Length of output: 1528


🌐 Web query:

What is Uncloud Docker Swarm deployment constraints support

💡 Result:

Uncloud does not use Docker Swarm’s deploy.placement.constraints / node-label constraint syntax for scheduling. Instead, it supports placement constraints via its own Compose extension:

  • Use x-machines in your compose.yaml to restrict a service to specific cluster machines (by machine name). This is Uncloud’s equivalent of “deployment constraints”. [1]
  • Uncloud explicitly added x-machines placement constraints” support in its Compose enhancements. [2]

Example

services:
  web:
    image: myapp:latest
    x-machines:
      - machine-1
      - machine-2

Sources: [1] [2]


🌐 Web query:

Uncloud docker compose placement constraints node.labels

💡 Result:

In Uncloud, you don’t target machines using Docker Swarm’s deploy.placement.constraints (e.g., node.labels...). Instead, Uncloud provides its own Compose extension x-machines to restrict where a service can run. [1]

services:
  web:
    image: nginx:alpine
    x-machines:
      - machine-1
      - machine-2
    scale: 3

If what you actually want is Docker Swarm placement constraints by node labels, that syntax lives under deploy.placement.constraints in the Compose Deploy spec, and the node label form is node.labels.<key>==<value>: [2][3]

services:
  web:
    image: nginx:alpine
    deploy:
      placement:
        constraints:
          - node.labels.region==east

Also note: deploy: (including placement constraints) is not applied by docker compose up; it’s for orchestrators (like Swarm via docker stack deploy). [4]

Sources:
[1] Uncloud docs — “Deploy to specific machines” (x-machines)
[2] Docker Docs — Compose file deploy.placement.constraints
[3] Docker Docs — Swarm services placement constraints (node.labels...)
[4] Stack Overflow — constraints ignored by docker-compose up (Swarm-only)


🏁 Script executed:

# Check broader context before the example to see if there's any explanation about constraints
cat -n .agents/tools/deployment/uncloud.md | sed -n '50,85p'

Repository: marcusquinn/aidevops

Length of output: 1296


🏁 Script executed:

# Check if there are any other references to placement or x-machines in the file
rg "x-machines|placement|constraints" .agents/tools/deployment/uncloud.md -n

Repository: marcusquinn/aidevops

Length of output: 153


🏁 Script executed:

# Search for any x-machines references across the entire codebase
rg "x-machines" .agents/ -n

Repository: marcusquinn/aidevops

Length of output: 46


Replace deploy.placement.constraints with Uncloud's x-machines extension.

The example uses Docker Swarm syntax (deploy.placement.constraints with node.labels) which Uncloud does not support. Uncloud provides its own x-machines extension to restrict services to specific machines. Replace the deploy blocks with:

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    x-machines:
      - prod-1

Where prod-1 is a machine added via uc machine add. The deploy section (replicas, placement, constraints) is Docker Swarm-only and will be ignored by Uncloud.

🤖 Prompt for AI Agents
In @.agents/tools/deployment/uncloud.md around lines 82 - 111, The deployment
example uses Docker Swarm fields (deploy, deploy.placement.constraints,
node.labels) which Uncloud ignores; replace the deploy blocks for services
(e.g., services:web, services:api, services:db) with Uncloud's x-machines
extension by removing deploy (including replicas/placement/constraints) and
adding an x-machines list for each service (e.g., x-machines: - prod-1) to
target specific machines added via uc machine add; ensure replicas and
swarm-specific keys are removed since Uncloud uses x-machines for placement.

Comment on lines +25 to +33
| Provider | Type | Latency | Quality | Cost | Notes |
|----------|------|---------|---------|------|-------|
| **OpenAI Realtime** | Cloud | ~300ms | Excellent | $$$ | Most mature Pipecat S2S integration |
| **AWS Nova Sonic** | Cloud | ~400ms | Good | $$ | AWS ecosystem, Bedrock integration |
| **Gemini Multimodal Live** | Cloud | ~350ms | Good | $$ | Multimodal (voice + vision) |
| **Ultravox** | Cloud | ~250ms | Good | $$ | Purpose-built for voice agents |
| **NVIDIA Nemotron** | Cloud/API | ~500ms | Good | $ | Free cloud credits, NVIDIA GPU for local |
| **MiniCPM-o 4.5** | Local | ~1-2s | Good | Free | 9B params, Apache-2.0, full-duplex |
| **HuggingFace S2S** | Local | ~2-3s | Good | Free | Modular pipeline (see `speech-to-speech.md`) |
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n .agents/tools/voice/cloud-voice-agents.md | head -150

Repository: marcusquinn/aidevops

Length of output: 5339


🏁 Script executed:

# Check if there are any existing "last verified" or "updated" timestamps in the file
rg -i "last\s+(verified|updated|checked)" .agents/tools/voice/cloud-voice-agents.md

# Also check for any date patterns
rg -E "(202[4-9]|20[3-9]\d)" .agents/tools/voice/cloud-voice-agents.md

Repository: marcusquinn/aidevops

Length of output: 121


🏁 Script executed:

# Check the git history of this file to see when it was last modified
git log -1 --format="%ai %s" -- .agents/tools/voice/cloud-voice-agents.md

Repository: marcusquinn/aidevops

Length of output: 145


🏁 Script executed:

# Search for other .agents files to see if they use "last verified" timestamps as a pattern
rg -l "last\s+(verified|updated)" .agents/ | head -10

Repository: marcusquinn/aidevops

Length of output: 46


🏁 Script executed:

# Search for model version references in other parts of the codebase to understand context
rg "gpt-4o-realtime-preview|nova-sonic-v1|gemini-2\.0-flash|ultravox-v0\.5" --type md

# Also check if there are dated references in docs
rg "verified|updated" .agents/ --type md | head -20

Repository: marcusquinn/aidevops

Length of output: 2941


🏁 Script executed:

# Check if this is a common pattern in the .agents directory structure
find .agents -name "*.md" -type f | wc -l
find .agents -name "*.md" -type f -exec grep -l "Last verified\|Last updated\|Updated:" {} \; | wc -l

Repository: marcusquinn/aidevops

Length of output: 69


🏁 Script executed:

# Verify the exact guidance from the repo about timestamps
cat -n .agents/workflows/readme-create-update.md | grep -A 3 -B 3 "Last updated"

Repository: marcusquinn/aidevops

Length of output: 591


🏁 Script executed:

# Check the PLANS.md comment more fully for context
grep -A 5 -B 5 "moving target" todo/PLANS.md

Repository: marcusquinn/aidevops

Length of output: 1118


Remove or generalize hardcoded metrics that will become stale.

Star counts (10.2k for Pipecat at line 17, 23k for MiniCPM-o at line 138) and specific model version names (e.g., gpt-4o-realtime-preview, amazon.nova-sonic-v1:0, gemini-2.0-flash-exp) are moving targets. Per repo policy, do not add static timestamps. Instead:

  • Replace specific star counts with ranges ("~10k", "~20k+") or omit them
  • Link to upstream repositories for current star counts rather than hardcoding
  • Update model version names to reference Pipecat docs links (already present at lines 55, 68, 84, 100) as the source of truth
  • The comparison table's latency ranges (~300ms, ~400ms), cost symbols ($, $$, $$$), and quality descriptions (Good, Excellent) are appropriately qualitative and do not require changes.

Also applies to: 17, 53-55, 68-69, 84-85, 100-101, 116-119, 136-139

🤖 Prompt for AI Agents
In @.agents/tools/voice/cloud-voice-agents.md around lines 25 - 33, The table
currently contains hardcoded, stale metrics and exact model version names (e.g.,
entries for "**OpenAI Realtime**", "**MiniCPM-o 4.5**", "Pipecat" references and
specific model strings like `gpt-4o-realtime-preview`, `amazon.nova-sonic-v1:0`,
`gemini-2.0-flash-exp`) — replace any exact star counts with approximate ranges
(e.g., "~10k", "~20k+") or omit them, remove or generalize exact model-version
strings and instead point readers to the Pipecat/docs links already referenced
elsewhere in the file, and where appropriate replace static repo star counts
with upstream repo links; keep qualitative metrics (latency, cost symbols,
quality) as-is and ensure all modifications are applied to the rows and lines
called out (including the MiniCPM-o and Pipecat mentions).

Comment on lines +58 to +59
- [ ] t158 Fix supervisor dispatch so dynamically-created tasks work with /full-loop #bugfix #supervisor #orchestration ~1h (ai:45m test:15m) ref:GH#573 logged:2026-02-08
- Notes: Already implemented. build_dispatch_cmd passes task description inline in the prompt (` -- description`), full-loop.md Step 0 has 3-tier resolution: inline desc > TODO.md > supervisor DB, full-loop-helper.sh PR creation queries supervisor DB as fallback.
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Markdown lint: spaces inside code span (MD038).

Line 59 contains a code span with leading/trailing spaces (` -- description`). This triggers the MD038 markdownlint warning. Trim the spaces or adjust the backtick placement.

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 59-59: Spaces inside code span elements

(MD038, no-space-in-code)

🤖 Prompt for AI Agents
In `@TODO.md` around lines 58 - 59, The markdown MD038 warning is caused by a code
span with surrounding spaces in the task note (` -- description`); update the
TODO.md entry to remove the leading/trailing space inside the code span so it
reads `-- description` (or move backticks to exclude the spaces), and verify
related mentions in build_dispatch_cmd, full-loop.md Step 0, and
full-loop-helper.sh still match the trimmed token to avoid mismatched
documentation references.

- Notes: Dispatch same review task to multiple models (e.g., Claude + Gemini), collect results, merge/diff findings. Use case: code review, security audit, architecture review. Configurable via `review-models:` in task metadata.
- Notes: OpenCode already supports per-agent model selection (each agent definition can have its own `model:` field across 75+ providers). The Task tool selects a model by invoking a subagent that has that model configured -- not via a model parameter on the call itself. Currently model-routing.md exists as design doc and all 195 subagents have model: frontmatter, but no corresponding agent definitions in opencode.json enforce it. Runner --model only works with single hardcoded provider. No fallback on failure, no availability checking, no quality-based escalation. Provider/model names are a moving target and need periodic reconciliation. This blocks use cases like "get a Gemini review of the codebase from within a Claude session."
- [x] t131 gopass Integration & Credentials Rename #plan #security #credentials → [todo/PLANS.md#2026-02-06-gopass-integration--credentials-rename] ~2d (ai:1d test:4h read:4h) logged:2026-02-06 ref:todo/tasks/prd-gopass-credentials.md completed:2026-02-08
- [ ] t131 gopass Integration & Credentials Rename #plan #security #credentials → [todo/PLANS.md#2026-02-06-gopass-integration--credentials-rename] ~2d (ai:1d test:4h read:4h) logged:2026-02-06 ref:todo/tasks/prd-gopass-credentials.md
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Duplicate task ID: t131 is used for two different tasks.

Line 212 defines t131 as "gopass Integration & Credentials Rename" while line 541 defines another t131 as "Create tools/vision/ category for visual AI models". The TODO format spec (line 25) states "IDs are stable and never reused." Same issue exists for t132 (line 194 vs line 543).

This will cause ambiguity in any tooling that resolves tasks by ID (supervisor DB, issue-sync-helper, beads-sync).

Also applies to: 541-541

🤖 Prompt for AI Agents
In `@TODO.md` at line 212, Two tasks share duplicate IDs (t131 and t132) which
breaks the TODO spec that IDs are stable; locate the duplicate entries (the
"gopass Integration & Credentials Rename" task and the "Create tools/vision/
category for visual AI models" task referenced as t131, and the two tasks using
t132), assign new unique IDs for the later-occurring tasks, update the task ID
strings in the TODO.md entries and any cross-references (e.g.,
todo/tasks/prd-gopass-credentials.md, todo/PLANS.md anchors, and any
issue-sync-helper or supervisor DB references), and run the repo-wide search for
"t131" and "t132" to replace only the correct occurrences so tooling (supervisor
DB, issue-sync-helper, beads-sync) points to the intended tasks.

@marcusquinn marcusquinn deleted the bugfix/supervisor-db-safety branch February 21, 2026 01:59
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant