
feat: integrate SEO Machine content analysis and writing workflows (t017)#599

Merged
marcusquinn merged 1 commit into main from feature/t017 on Feb 8, 2026

Conversation


@marcusquinn marcusquinn commented Feb 8, 2026

Summary

  • Port SEO Machine content analysis pipeline to aidevops framework
  • Add content analyzer Python script with readability, keyword density, quality scoring, and search intent analysis
  • Create SEO subagents: content-analyzer, seo-optimizer, keyword-mapper
  • Create content subagents: seo-writer, editor, internal-linker, meta-creator, context-templates
  • Add slash commands: /seo-analyze-content, /seo-optimize, /seo-write

Files Changed (15 files: 12 new, 3 updated)

| File | Purpose |
|------|---------|
| `.agents/scripts/seo-content-analyzer.py` | Python content analysis tool (730 lines) |
| `.agents/seo/content-analyzer.md` | Content analysis subagent |
| `.agents/seo/seo-optimizer.md` | On-page SEO audit subagent |
| `.agents/seo/keyword-mapper.md` | Keyword placement analysis subagent |
| `.agents/content/seo-writer.md` | SEO-optimized content writing subagent |
| `.agents/content/editor.md` | Content editing subagent |
| `.agents/content/internal-linker.md` | Internal linking subagent |
| `.agents/content/meta-creator.md` | Meta tag creation subagent |
| `.agents/content/context-templates.md` | Context template library |
| `.agents/scripts/commands/seo-analyze-content.md` | `/seo-analyze-content` command |
| `.agents/scripts/commands/seo-optimize.md` | `/seo-optimize` command |
| `.agents/scripts/commands/seo-write.md` | `/seo-write` command |
| `.agents/seo.md` | Updated with new subagent entries |
| `.agents/content.md` | Updated with new subagent entries |
| `.agents/subagent-index.toon` | Updated index |

Summary by CodeRabbit

Release Notes

  • New Features

    • Added specialized content creation subagents for writing, editing, meta creation, and internal linking
    • Introduced comprehensive content analyzer tool with readability, keyword density, and SEO quality scoring
    • Enhanced content workflow with dedicated analysis and optimization steps
    • Added context templates for brand voice, style guides, and SEO guidelines
  • Documentation

    • Expanded SEO integration documentation with keyword research and optimization details
    • Updated content workflow to include research, writing, analysis, optimization, editing, and publishing steps

…017)

Add unified SEO content analyzer (Python) with readability scoring,
keyword density analysis, search intent classification, and SEO quality
rating. Adapted from TheCraigHewitt/seomachine (MIT License).

New subagents:
- content/: seo-writer, meta-creator, editor, internal-linker, context-templates
- seo/: content-analyzer, seo-optimizer, keyword-mapper

New commands: /seo-write, /seo-optimize, /seo-analyze-content

Updated seo.md, content.md, and subagent-index.toon with new capabilities.
@gemini-code-assist

Warning

You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!


coderabbitai bot commented Feb 8, 2026

Walkthrough

This PR implements a comprehensive SEO content creation and analysis system, introducing eight new subagents (content: seo-writer, meta-creator, editor, internal-linker, context-templates; seo: content-analyzer, seo-optimizer, keyword-mapper), a multi-faceted Python analyzer tool for readability and keyword analysis, workflow documentation for SEO operations, and registry updates.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **Configuration and Registry Updates**<br>`.agents/content.md`, `.agents/seo.md`, `.agents/subagent-index.toon` | Updated main agent configurations to register new subagents (seo-writer, meta-creator, editor, internal-linker, context-templates, content-analyzer, seo-optimizer, keyword-mapper); expanded workflow descriptions and integration points. |
| **Content Subagent Definitions**<br>`.agents/content/seo-writer.md`, `.agents/content/meta-creator.md`, `.agents/content/editor.md`, `.agents/content/internal-linker.md`, `.agents/content/context-templates.md` | New content creation subagent definitions with YAML front matter and detailed guidelines for SEO writing, meta tag generation, editorial review, internal linking strategy, and context template management. |
| **SEO Subagent Definitions**<br>`.agents/seo/content-analyzer.md`, `.agents/seo/seo-optimizer.md`, `.agents/seo/keyword-mapper.md` | New SEO analysis and optimization subagent documentation defining analysis workflows, on-page SEO checklists, keyword distribution evaluation, and integration with the content pipeline. |
| **Command/Workflow Documentation**<br>`.agents/scripts/commands/seo-write.md`, `.agents/scripts/commands/seo-optimize.md`, `.agents/scripts/commands/seo-analyze-content.md` | Procedural workflow documentation for SEO-optimized content generation, analysis, and optimization phases with command references and priority-based fix strategies. |
| **SEO Content Analyzer Implementation**<br>`.agents/scripts/seo-content-analyzer.py` | Comprehensive Python analyzer implementing readability scoring, keyword density analysis, search intent classification, and SEO quality rating with CLI support and structured JSON output; exports ReadabilityScorer, KeywordAnalyzer, SearchIntentAnalyzer, and SEOQualityRater classes. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Content User
    participant Writer as SEO Writer<br/>(seo-writer.md)
    participant Analyzer as Content Analyzer<br/>(seo-content-analyzer.py)
    participant Optimizer as SEO Optimizer<br/>(seo-optimizer.md)
    participant Editor as Editor<br/>(editor.md)
    participant CMS as CMS/WordPress<br/>(Publish)

    User->>Writer: Provide keyword targets
    Writer->>Writer: Research & structure
    Writer->>Analyzer: Generate article (2000-3000+ words)
    Analyzer->>Analyzer: Readability score
    Analyzer->>Analyzer: Keyword density check
    Analyzer->>Analyzer: Intent classification
    Analyzer->>Analyzer: SEO quality rating
    Analyzer->>Optimizer: Return analysis report
    Optimizer->>Writer: Address critical issues
    Writer->>Editor: Apply fixes
    Editor->>Editor: Humanize & polish
    Editor->>CMS: Final content ready
    CMS->>User: Published
```

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly Related Issues

  • t017: SEO Machine Integration for aidevops #509 — Directly implements the SEO Machine plan with identical subagents (seo-writer, meta-creator, editor, internal-linker, content-analyzer, keyword-mapper) and the seo-content-analyzer.py script with all specified analysis modules.

Possibly Related PRs

Poem

🚀 From keywords to prose, the pipeline now flows,
Write, analyze, optimize—each step that it shows,
Content takes flight with readability bright,
SEO precision, editorial vision in sight! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|------------|--------|-------------|------------|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The PR title accurately captures the main objective: integrating SEO Machine content analysis and writing workflows into the framework via the t017 task. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/t017

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions

github-actions bot commented Feb 8, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 36 code smells

[INFO] Recent monitoring activity:
Sun Feb 8 06:38:43 UTC 2026: Code review monitoring started
Sun Feb 8 06:38:43 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 36

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 36
  • VULNERABILITIES: 0

Generated on: Sun Feb 8 06:38:46 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@sonarqubecloud

sonarqubecloud bot commented Feb 8, 2026

@marcusquinn marcusquinn merged commit 8a79ea2 into main Feb 8, 2026
8 of 11 checks passed

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.agents/subagent-index.toon (1)

82-82: ⚠️ Potential issue | 🟡 Minor

TOON scripts count is off by one.

The header declares scripts[44] but there are now 45 entries (lines 83–127) after adding seo-content-analyzer.py. Update the count to scripts[45].

-<!--TOON:scripts[44]{name,purpose}:
+<!--TOON:scripts[45]{name,purpose}:

Also applies to: 127-127

🤖 Fix all issues with AI agents
In @.agents/content/internal-linker.md:
- Around line 30-55: Extract the "Best Practices" and "Anchor Text Guidelines"
sections from internal-linker.md and consolidate them into one canonical
document titled "Internal Linking Best Practices" (the single-source
authoritative doc), then replace the full sections in internal-linker.md and all
other agent docs that currently duplicate this guidance with a short summary
plus a pointer/link to that canonical doc using a clear anchor (e.g., "Internal
Linking Best Practices"); update any in-repo references that referenced the
removed inline text to point to the canonical anchor and keep only a one-line
progressive-disclosure summary in the original files (e.g., "See Internal
Linking Best Practices for detailed guidance on anchor text, link frequency, and
linking patterns").

In @.agents/content/seo-writer.md:
- Around line 86-95: The example commands use an absolute home path for the
analyzer script; update the command invocations (the lines calling analyze,
readability, keywords, quality) to use the repository-relative script path
(replace "~/.aidevops/agents/scripts/seo-content-analyzer.py" with
"./.agents/scripts/seo-content-analyzer.py" or the correct relative path for
your deployment) so the commands for analyze, readability, keywords and quality
run from the repo root.

In @.agents/scripts/commands/seo-write.md:
- Around line 15-17: Update the command example in seo-write.md to reference the
repository script location instead of the home-directory path: replace the
invocation that points to "~/.aidevops/agents/scripts/seo-content-analyzer.py"
with a relative or repo-root path to ".agents/scripts/seo-content-analyzer.py"
(e.g., "../seo-content-analyzer.py" or
".agents/scripts/seo-content-analyzer.py") so the documented call to run python3
with the "intent" argument matches the actual script name
seo-content-analyzer.py and will execute correctly within the repo context.

In @.agents/scripts/seo-content-analyzer.py:
- Around line 248-252: The secondary_results entries are missing the 'keyword'
key because _analyze_keyword returns a dict without it, so when recommendations
later call s.get('keyword','?') they get '?'; fix by wrapping each secondary
result the same way the primary result is wrapped—when appending to
secondary_results replace appending the raw dict from
self._analyze_keyword(content, kw, ...) with appending {"keyword": kw,
**secondary} (use the same pattern used for the primary result), ensuring every
entry in secondary_results includes the 'keyword' field.
- Around line 682-683: The code assumes parsed["flags"].get("secondary") returns
a string but parse_args can set it to a boolean (True) when the flag is passed
without a value, causing AttributeError on secondary_str.split. Fix by guarding
the value before splitting: check the retrieved secondary_str from
parsed["flags"] (the variable named secondary_str) and only call .split(",")
when isinstance(secondary_str, str); otherwise treat it as empty (so secondary
becomes []). Update the logic that builds secondary (the secondary variable) to
coerce non-string flag values to an empty string before splitting.
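The guard described above can be sketched as follows. The helper name and the flag-dict shape are illustrative, based on the review comment's description of `parse_args`, not the script's actual API:

```python
def normalize_secondary(flags):
    """Return a list of secondary keywords from a parsed --secondary flag.

    The flag value may be a comma-separated string, or True when the flag
    was passed without a value; anything non-string is treated as empty.
    """
    secondary_str = flags.get("secondary")
    if not isinstance(secondary_str, str):
        return []  # boolean or missing flag: no secondary keywords
    return [kw.strip() for kw in secondary_str.split(",") if kw.strip()]

print(normalize_secondary({"secondary": "kw1, kw2"}))  # ['kw1', 'kw2']
print(normalize_secondary({"secondary": True}))        # []
```

The `isinstance` check is the key point: it avoids the `AttributeError` on `True.split(",")` while also tolerating a missing flag.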
- Line 27: The import "Counter" from the collections module is unused; remove
the unused symbol from the imports (delete or modify the line containing "from
collections import Counter") so the script no longer imports Counter
unnecessarily and static analysis warnings stop; alternatively, if you intended
to use Counter, implement its usage in the relevant function instead of leaving
it unused.
- Around line 283-287: The keyword counting is using substring matches
(cl.count(kl)) which inflates counts for short keywords; change _analyze_keyword
to perform word-boundary matching using regex (use re.escape(keyword) with r'\b'
boundaries and re.IGNORECASE or compile once) and compute count from
len(re.findall(...)) so density is correct; apply the same word-boundary regex
approach in _detect_stuffing and in the heading checks (the functions/methods
that inspect headings) to ensure matches only on whole words and remain
case-insensitive, and reuse or cache the compiled pattern for each keyword to
avoid repeated compilation.
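A minimal sketch of the word-boundary approach (the function name is illustrative, not the script's existing `_analyze_keyword` signature):

```python
import re

def count_keyword(content, keyword):
    """Count whole-word, case-insensitive occurrences of a keyword."""
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    return len(pattern.findall(content))

text = "Start with art. Smart art articles feature art history."
print(text.lower().count("art"))   # substring match: 6 (inflated)
print(count_keyword(text, "art"))  # whole-word match: 3
```

`re.escape` keeps multi-word or punctuated keywords safe, and compiling once per keyword (as the prompt suggests) avoids recompiling the pattern on every heading check.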

In @.agents/seo.md:
- Around line 304-309: The workflow steps in .agents/seo.md (Research → Write →
Analyze → Optimize → Edit → Publish) do not match the actual pipeline
implemented in .agents/scripts/commands/seo-write.md (Research → Context → Write
→ Analyze → Fix → Output); update .agents/seo.md to mirror the real workflow
names and order used in seo-write.md (replace "Write" with "Context → Write",
replace "Optimize" with "Fix", and replace "Publish" with "Output"), or
alternatively update seo-write.md to match the documented sequence—ensure the
step names (Research, Context, Write, Analyze, Fix, Output) are used
consistently across both files and adjust any referenced artifacts (e.g.,
content/seo-writer.md, seo/seo-optimizer.md, content/editor.md) to reflect the
chosen canonical flow.

In @.agents/seo/keyword-mapper.md:
- Around line 85-91: Update the Revision Workflow step 3 wording to match the
density table by changing "Adjust density if outside 1-2% range" to "Adjust
density if outside 1.2–1.8% range" in the "Revision Workflow" section (the list
under the "## Revision Workflow" header) so the density guidance is consistent
with the optimal range defined earlier.

In @.agents/seo/seo-optimizer.md:
- Around line 27-79: Remove the duplicated "On-Page SEO Checklist" in
seo-optimizer.md and replace it with a concise reference to
.agents/content/seo-writer.md's "Content Requirements" (so all meta
title/description, H1/H2, keyword density, word count, and linking specs are
sourced there); then update seo-optimizer.md to focus on optimization-specific
guidance (e.g., improving existing pages, audit/checklist items for competitive
analysis, on-page A/B tests, performance monitoring, and ranking tracking) and
ensure any remaining checklist items are unique to optimization tasks and not
covered by seo-writer.md:Content Requirements.
🧹 Nitpick comments (7)
.agents/scripts/seo-content-analyzer.py (5)

66-71: Code-block stripping regex won't handle inline code or nested backticks.

Line 69: The regex r"```[^`]*```" strips fenced code blocks but misses:

  • Inline code (`code`)
  • Code blocks containing backtick characters inside

For SEO content analysis this is unlikely to be a real problem, but worth noting if accuracy matters on technical articles.

Optional: also strip inline code
 text = re.sub(r"```[^`]*```", "", text)
+text = re.sub(r"`[^`]+`", "", text)

385-397: Annotate class-level signal lists with ClassVar.

Ruff RUF012 flags these mutable class attributes. Since they're constants, annotating them communicates intent and silences the linter.

+from typing import ClassVar
 ...
-    INFO_SIGNALS = [
+    INFO_SIGNALS: ClassVar[List[str]] = [

Apply similarly to NAV_SIGNALS, TRANS_SIGNALS, COMMERCIAL_SIGNALS.


542-545: Readability is permanently hardcoded at +10 in the quality score.

The comment explains readability gets 0.10 weight but isn't computed here. Since the analyze command runs both ReadabilityScorer and SEOQualityRater independently, the quality score always assumes a fixed 10/10 for readability regardless of actual readability results. Consider wiring the actual readability score into rate() for a more accurate composite score, or document this as a known simplification.


462-563: rate() method has high cyclomatic complexity (24) and 86 lines.

Static analysis flags this consistently. The method handles 5 scoring categories inline. To maintain that A-grade zero-tech-debt standard, consider extracting each category into a private method (e.g., _score_content, _score_structure, _score_keywords, _score_meta, _score_links) that each return (score, issues, warnings).
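The extraction pattern could look like the sketch below. The category names come from the suggestion above, but the thresholds and point values are invented for illustration and do not reflect the script's actual scoring:

```python
def _score_content(word_count):
    """One extracted category scorer; returns (score, issues, warnings)."""
    issues, warnings = [], []
    if word_count < 300:
        issues.append("Content under 300 words")
        score = 0
    elif word_count < 1000:
        warnings.append("Content under 1,000 words")
        score = 10
    else:
        score = 20
    return score, issues, warnings

def rate(word_count):
    """Compose per-category scorers into one result dict."""
    total, issues, warnings = 0, [], []
    # Extend the tuple with _score_structure, _score_keywords, etc.
    for scorer in (_score_content,):
        s, i, w = scorer(word_count)
        total += s
        issues += i
        warnings += w
    return {"score": total, "issues": issues, "warnings": warnings}

print(rate(1500))  # {'score': 20, 'issues': [], 'warnings': []}
```

Each category method stays small and independently testable, and `rate()` shrinks to a loop over scorers, which directly addresses the cyclomatic-complexity finding.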


130-159: Passive voice detection is a rough heuristic — document the limitation.

The check for passive indicators (was, were, is, are, etc.) combined with words ending in -ed/-en will produce false positives (e.g., "The team is experienced" flagged as passive). This is acceptable for an SEO heuristic, but a brief inline comment noting the approximation would help future maintainers understand the trade-off vs. installing textstat or a proper NLP library.

.agents/content/editor.md (1)

49-58: Consider reducing pattern list duplication.

Lines 51-56 list specific AI patterns (delve, tapestry, leverage, etc.) but then defer to content/humanise.md for the complete list. This creates potential duplication and maintenance burden.

Consider either:

  1. Removing the example list entirely and only referencing content/humanise.md
  2. Stating "See content/humanise.md for the complete pattern detection list" without examples
♻️ Streamlined approach
 ### 4. Robotic vs Human Patterns
 
-Detect and flag:
-
-- **AI vocabulary**: delve, tapestry, landscape, leverage, utilize, facilitate
-- **Filler phrases**: "It's worth noting that", "In today's digital age"
-- **Rule of three**: Excessive use of three-item lists
-- **Em dash overuse**: More than 2-3 per article
-- **Hedging**: "might", "could potentially", "it's possible that"
-- **Promotional language**: "game-changer", "revolutionary", "cutting-edge"
-
-See `content/humanise.md` for the complete pattern list.
+Detect and flag AI-generated patterns. See `content/humanise.md` for the complete detection patterns and removal strategies.

As per coding guidelines, use file references rather than duplicating content.

.agents/scripts/commands/seo-write.md (1)

28-34: Consider referencing seo-writer.md instead of duplicating guidelines.

Lines 28-34 list writing guidelines that appear to duplicate content from content/seo-writer.md (referenced on line 49). This creates maintenance burden if guidelines change.

♻️ Reference instead of duplicate
-3. **Write**: Follow `content/seo-writer.md` guidelines to create the article:
-   - 2,000-3,000+ words
-   - Primary keyword in H1, first 100 words, 2-3 H2s
-   - 1-2% keyword density
-   - 3-5 internal links, 2-3 external links
-   - Meta title (50-60 chars) and description (150-160 chars)
-   - Grade 8-10 reading level
+3. **Write**: Follow `content/seo-writer.md` guidelines to create the article
+   (see Content Requirements table in seo-writer.md for complete specs)

As per coding guidelines, use file:line references rather than duplicating authoritative content.

Comment on lines +30 to +55
| Type | Purpose | Example |
|------|---------|---------|
| **Contextual** | Natural in-content links | "Learn more about [keyword research](/seo/keyword-research)" |
| **Navigational** | Guide user journey | "Next step: [setting up analytics](/guides/analytics)" |
| **Hub/Spoke** | Connect pillar to cluster | Pillar page links to all subtopic pages |
| **Related** | Cross-reference similar content | "See also: [related topic](/blog/related)" |

### Best Practices

- **3-5 internal links** per 2000-word article
- **Descriptive anchor text** - never "click here" or "read more"
- **Keyword-rich anchors** - use target keyword of destination page
- **Vary anchor text** - don't use identical anchors for same destination
- **Link early** - place important links in first half of content
- **Deep links** - link to specific pages, not just homepage/categories
- **Bidirectional** - if A links to B, consider B linking back to A

### Anchor Text Guidelines

| Do | Don't |
|----|-------|
| "comprehensive keyword research guide" | "click here" |
| "podcast hosting comparison" | "this article" |
| "step-by-step SEO audit process" | "read more" |
| Natural sentence integration | Forced keyword stuffing |


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Search for potential duplicate internal linking instructions

echo "Searching for duplicate anchor text guidelines..."
rg -i "click here|read more|descriptive anchor" .agents/ -g '*.md' --no-filename -C 1

echo ""
echo "Searching for duplicate link density guidance..."
rg -i "3-5 (internal )?links" .agents/ -g '*.md' --no-filename -C 1

echo ""
echo "Searching for duplicate keyword-rich anchor guidance..."
rg -i "keyword.rich anchor" .agents/ -g '*.md' --no-filename -C 1

echo ""
echo "Searching for bidirectional linking references..."
rg -i "bidirectional" .agents/ -g '*.md' --no-filename -C 1
```

Repository: marcusquinn/aidevops

Length of output: 3900


Consolidate internal linking guidance - duplicated across multiple .agents/ files.

The Best Practices and Anchor Text Guidelines sections in internal-linker.md duplicate existing instructions scattered throughout .agents/ documentation. The "3-5 internal links," "descriptive anchor text," "click here/read more anti-patterns," and "keyword-rich anchors" guidance appears in multiple files (content checklists, SEO guides, and other agent documentation), violating the single-source-of-truth principle.

Move this internal linking guidance to a single authoritative location and update other files to reference it using progressive disclosure with pointers rather than inline duplication.

🤖 Prompt for AI Agents
In @.agents/content/internal-linker.md around lines 30 - 55, Extract the "Best
Practices" and "Anchor Text Guidelines" sections from internal-linker.md and
consolidate them into one canonical document titled "Internal Linking Best
Practices" (the single-source authoritative doc), then replace the full sections
in internal-linker.md and all other agent docs that currently duplicate this
guidance with a short summary plus a pointer/link to that canonical doc using a
clear anchor (e.g., "Internal Linking Best Practices"); update any in-repo
references that referenced the removed inline text to point to the canonical
anchor and keep only a one-line progressive-disclosure summary in the original
files (e.g., "See Internal Linking Best Practices for detailed guidance on
anchor text, link frequency, and linking patterns").

Comment on lines +86 to +95
```bash
# Full analysis
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py analyze article.md \
--keyword "primary keyword" --secondary "kw1,kw2"

# Individual checks
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py readability article.md
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py keywords article.md --keyword "primary keyword"
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py quality article.md \
--keyword "primary keyword" --meta-title "Title" --meta-desc "Description"
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Locate and verify seo-content-analyzer.py

echo "=== Searching for seo-content-analyzer.py ==="
if script_path=$(fd -t f "seo-content-analyzer.py" . 2>/dev/null); then
  echo "Found at: $script_path"
  echo ""
  echo "=== Script content (first 150 lines) ==="
  head -150 "$script_path"
  echo ""
  echo "=== Checking for supported commands/functions ==="
  rg -i "(def|command|argparse|subcommand)" "$script_path" -A 2 | head -80
else
  echo "✗ Script not found in repository"
fi

echo ""
echo "=== Also check .agents directory structure ==="
find .agents -name "*analyzer*" -o -name "*seo*" | head -20
```

Repository: marcusquinn/aidevops

Length of output: 10739


🏁 Script executed:

```bash
#!/bin/bash
# Check main() function and command parsing

echo "=== Looking for main function and command parsing ==="
rg -A 40 "^def main" .agents/scripts/seo-content-analyzer.py

echo ""
echo "=== Check for argparse or sys.argv handling ==="
rg -B 5 -A 10 "(argparse|sys.argv\[1\])" .agents/scripts/seo-content-analyzer.py | head -100

echo ""
echo "=== Verify quality command parameter names ==="
rg -A 20 'quality|meta-title|meta-desc|meta_title|meta_description' .agents/scripts/seo-content-analyzer.py | head -50
```

Repository: marcusquinn/aidevops

Length of output: 3269


🏁 Script executed:

```bash
#!/bin/bash
# Check quality command implementation and parse_args function

echo "=== Quality command implementation ==="
rg -A 30 'elif cmd == "quality"' .agents/scripts/seo-content-analyzer.py

echo ""
echo "=== parse_args function to verify flag parsing ==="
rg -B 5 -A 50 "^def parse_args" .agents/scripts/seo-content-analyzer.py
```

Repository: marcusquinn/aidevops

Length of output: 2807


Update script paths to match repository location.

The commands (analyze, readability, keywords, quality) and their parameters are correctly supported. However, the path prefix ~/.aidevops/agents/scripts/seo-content-analyzer.py should be ./.agents/scripts/seo-content-analyzer.py (relative to repository root) or the appropriate relative path for deployment context.

🤖 Prompt for AI Agents
In @.agents/content/seo-writer.md around lines 86 - 95, The example commands use
an absolute home path for the analyzer script; update the command invocations
(the lines calling analyze, readability, keywords, quality) to use the
repository-relative script path (replace
"~/.aidevops/agents/scripts/seo-content-analyzer.py" with
"./.agents/scripts/seo-content-analyzer.py" or the correct relative path for
your deployment) so the commands for analyze, readability, keywords and quality
run from the repo root.

Comment on lines +15 to +17
```bash
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py intent "$ARGUMENTS"
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Step 1: Check if seo-write.md exists and view lines 15-17
echo "=== Checking seo-write.md ==="
if fd -t f "seo-write.md" .agents/; then
  echo "✓ File found"
  echo "--- Lines 15-17 ---"
  sed -n '15,17p' "$(fd -t f "seo-write.md" .agents/)" | cat -n
else
  echo "✗ seo-write.md not found"
fi

# Step 2: Check for seo-content-analyzer.py in .agents/
echo ""
echo "=== Checking seo-content-analyzer.py ==="
if fd -t f "seo-content-analyzer.py" .agents/; then
  echo "✓ Script found in .agents/"
  script_file=$(fd -t f "seo-content-analyzer.py" .agents/)
  echo "Location: $script_file"
else
  echo "✗ Script not found in .agents/"
fi

# Step 3: Check the broader repository
echo ""
echo "=== Broader search for seo-content-analyzer.py ==="
fd -t f "seo-content-analyzer.py"
```

Repository: marcusquinn/aidevops

Length of output: 538


🏁 Script executed:

```bash
#!/bin/bash
# Step 1: Check seo-content-analyzer.py structure and available commands
echo "=== seo-content-analyzer.py content ==="
wc -l .agents/scripts/seo-content-analyzer.py

# Step 2: Check for 'intent' command in the script
echo ""
echo "=== Searching for 'intent' command ==="
grep -n "intent" .agents/scripts/seo-content-analyzer.py | head -20

# Step 3: Check if script has argparse or command structure
echo ""
echo "=== Script structure (first 50 lines) ==="
head -50 .agents/scripts/seo-content-analyzer.py
```

Repository: marcusquinn/aidevops

Length of output: 2471


Fix script path reference to match repository structure.

The script IS included in this PR at .agents/scripts/seo-content-analyzer.py, and the 'intent' command is properly implemented. However, the command in seo-write.md (line 16) references ~/.aidevops/agents/scripts/seo-content-analyzer.py, which doesn't align with the actual repository structure at .agents/scripts/. This path should either use a relative path (e.g., ../seo-content-analyzer.py) or a consistent reference to the actual location if deploying within the repository context. The current path reference will cause execution failures unless ~/.aidevops is explicitly configured as a deployment target.

🤖 Prompt for AI Agents
In @.agents/scripts/commands/seo-write.md around lines 15 - 17, Update the
command example in seo-write.md to reference the repository script location
instead of the home-directory path: replace the invocation that points to
"~/.aidevops/agents/scripts/seo-content-analyzer.py" with a relative or
repo-root path to ".agents/scripts/seo-content-analyzer.py" (e.g.,
"../seo-content-analyzer.py" or ".agents/scripts/seo-content-analyzer.py") so
the documented call to run python3 with the "intent" argument matches the actual
script name seo-content-analyzer.py and will execute correctly within the repo
context.

```python
import re
import json
import os
from collections import Counter
```

⚠️ Potential issue | 🟡 Minor

Remove unused Counter import.

Flagged by static analysis — Counter from collections is imported but never used anywhere in the script.

-from collections import Counter
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
from collections import Counter
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis

[warning] 27-27: .agents/scripts/seo-content-analyzer.py#L27
'collections.Counter' imported but unused (F401)


[warning] 27-27: .agents/scripts/seo-content-analyzer.py#L27
Unused Counter imported from collections

🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py at line 27, The import "Counter"
from the collections module is unused; remove the unused symbol from the imports
(delete or modify the line containing "from collections import Counter") so the
script no longer imports Counter unnecessarily and static analysis warnings
stop; alternatively, if you intended to use Counter, implement its usage in the
relevant function instead of leaving it unused.

Comment on lines +248 to +252
```python
secondary_results = []
for kw in secondary_keywords:
    secondary_results.append(
        self._analyze_keyword(content, kw, word_count, sections, target_density * 0.5)
    )
```

⚠️ Potential issue | 🟠 Major

Bug: secondary keyword name always shows '?' in recommendations.

_analyze_keyword returns a dict without a 'keyword' key. On line 258, you wrap the primary result with {"keyword": primary_keyword, **primary}, but secondary results on line 250 are appended raw. So s.get('keyword', '?') on line 374 always falls back to '?'.

Fix: include keyword name in secondary results
         secondary_results = []
         for kw in secondary_keywords:
-            secondary_results.append(
-                self._analyze_keyword(content, kw, word_count, sections, target_density * 0.5)
-            )
+            result = self._analyze_keyword(content, kw, word_count, sections, target_density * 0.5)
+            secondary_results.append({"keyword": kw, **result})

Also applies to: 372-374

🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py around lines 248 - 252, The
secondary_results entries are missing the 'keyword' key because _analyze_keyword
returns a dict without it, so when recommendations later call
s.get('keyword','?') they get '?'; fix by wrapping each secondary result the
same way the primary result is wrapped—when appending to secondary_results
replace appending the raw dict from self._analyze_keyword(content, kw, ...) with
appending {"keyword": kw, **secondary} (use the same pattern used for the
primary result), ensuring every entry in secondary_results includes the
'keyword' field.
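The fix above can be sketched in isolation. This is a minimal stand-in, not the script itself: `analyze_keyword` mimics `_analyze_keyword` returning a dict without a `'keyword'` key, and the loop shows the wrapping pattern the review recommends.

```python
# Stand-in for the script's _analyze_keyword: returns metrics but no 'keyword' key.
def analyze_keyword(content, kw):
    count = content.lower().count(kw.lower())
    words = max(len(content.split()), 1)
    return {"count": count, "density": count / words * 100}

content = "SEO tools help with SEO audits."
secondary_keywords = ["seo", "audit"]

secondary_results = []
for kw in secondary_keywords:
    result = analyze_keyword(content, kw)
    # Mirror the primary-result pattern so the keyword name travels with its metrics.
    secondary_results.append({"keyword": kw, **result})

for s in secondary_results:
    print(s.get("keyword", "?"))  # prints the real keyword, never '?'
```

With the wrap in place, every entry carries its keyword name, so the recommendation formatter's `s.get('keyword', '?')` never hits the fallback.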

Comment on lines +283 to +287
def _analyze_keyword(self, content, keyword, word_count, sections, target) -> Dict[str, Any]:
cl = content.lower()
kl = keyword.lower()
count = cl.count(kl)
density = (count / word_count * 100) if word_count > 0 else 0

⚠️ Potential issue | 🟠 Major

Keyword counting uses substring match — inflates density for short keywords.

cl.count(kl) on line 286 is a plain substring search. A keyword like "art" would match inside "start", "article", "smart", etc., inflating the density count and potentially triggering false stuffing warnings.

Consider word-boundary matching:

Proposed fix using regex word boundaries
-        cl = content.lower()
-        kl = keyword.lower()
-        count = cl.count(kl)
+        kl = keyword.lower()
+        pattern = r'\b' + re.escape(kl) + r'\b'
+        count = len(re.findall(pattern, content.lower()))

The same substring issue exists in _detect_stuffing (line 342) and heading checks (lines 296, 300) — apply consistently.

🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis

[warning] 283-283: .agents/scripts/seo-content-analyzer.py#L283
Method _analyze_keyword has a cyclomatic complexity of 12 (limit is 8)

🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py around lines 283 - 287, The keyword
counting is using substring matches (cl.count(kl)) which inflates counts for
short keywords; change _analyze_keyword to perform word-boundary matching using
regex (use re.escape(keyword) with r'\b' boundaries and re.IGNORECASE or compile
once) and compute count from len(re.findall(...)) so density is correct; apply
the same word-boundary regex approach in _detect_stuffing and in the heading
checks (the functions/methods that inspect headings) to ensure matches only on
whole words and remain case-insensitive, and reuse or cache the compiled pattern
for each keyword to avoid repeated compilation.
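The inflation is easy to demonstrate. A small sketch (standalone, not the script's code) comparing plain substring counting against the proposed word-boundary regex:

```python
import re

text = "Start your article with smart art direction. Art matters."
keyword = "art"

# Plain substring match, as in the current cl.count(kl):
# matches inside "Start", "article", "smart" as well as the real hits.
substring_count = text.lower().count(keyword.lower())

# Word-boundary match: only whole-word occurrences, case-insensitive.
pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
boundary_count = len(pattern.findall(text))

print(substring_count)  # 5 -- three of these are false positives
print(boundary_count)   # 2 -- "art" and "Art" only
```

`re.escape` keeps multi-word or punctuated keywords safe inside the pattern, and compiling once per keyword (as the prompt suggests) avoids recompilation across `_analyze_keyword`, `_detect_stuffing`, and the heading checks.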

Comment on lines +682 to +683
secondary_str = parsed["flags"].get("secondary", "")
secondary = [s.strip() for s in secondary_str.split(",") if s.strip()] if secondary_str else []

⚠️ Potential issue | 🟠 Major

--secondary without a value causes AttributeError.

If a user passes --secondary --keyword "foo", parse_args sets secondary to True (a boolean). Line 683 then calls True.split(","), which raises an AttributeError.

Guard against non-string flag values
     secondary_str = parsed["flags"].get("secondary", "")
+    if not isinstance(secondary_str, str):
+        secondary_str = ""
     secondary = [s.strip() for s in secondary_str.split(",") if s.strip()] if secondary_str else []
🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py around lines 682 - 683, The code
assumes parsed["flags"].get("secondary") returns a string but parse_args can set
it to a boolean (True) when the flag is passed without a value, causing
AttributeError on secondary_str.split. Fix by guarding the value before
splitting: check the retrieved secondary_str from parsed["flags"] (the variable
named secondary_str) and only call .split(",") when isinstance(secondary_str,
str); otherwise treat it as empty (so secondary becomes []). Update the logic
that builds secondary (the secondary variable) to coerce non-string flag values
to an empty string before splitting.
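A self-contained sketch of the guard, assuming the parse_args-style behavior described above (a valueless flag stored as `True` in the flags dict):

```python
def parse_secondary(flags):
    """Return the list of secondary keywords from a parsed-flags dict."""
    secondary_str = flags.get("secondary", "")
    # A valueless --secondary flag may arrive as True; coerce to empty string.
    if not isinstance(secondary_str, str):
        secondary_str = ""
    return [s.strip() for s in secondary_str.split(",") if s.strip()]

print(parse_secondary({"secondary": "foo, bar"}))  # ['foo', 'bar']
print(parse_secondary({"secondary": True}))        # [] instead of AttributeError
print(parse_secondary({}))                         # []
```

The `if s.strip()` filter also drops empty segments from trailing commas, so `"foo,"` yields `['foo']` rather than `['foo', '']`.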

Comment on lines +304 to +309
1. **Research**: `/keyword-research` + `/autocomplete-research`
2. **Write**: Use `content/seo-writer.md` with keyword targets
3. **Analyze**: `seo-content-analyzer.py analyze` for quality score
4. **Optimize**: Address issues from `seo/seo-optimizer.md`
5. **Edit**: `content/editor.md` for human voice
6. **Publish**: Via WordPress or CMS

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

echo "=== Step 1: Check if seo.md exists and examine lines 304-309 ==="
if [ -f ".agents/seo.md" ]; then
  echo "seo.md found. Showing lines 304-309:"
  sed -n '304,309p' .agents/seo.md
else
  echo "seo.md not found"
fi

echo ""
echo "=== Step 2: Check if seo-write.md exists ==="
if [ -f ".agents/scripts/commands/seo-write.md" ]; then
  echo "seo-write.md found"
elif [ -f ".agents/seo-write.md" ]; then
  echo "seo-write.md found at .agents/seo-write.md"
else
  echo "Searching for seo-write.md..."
  find .agents -name "*seo-write*" -o -name "*seo*write*"
fi

echo ""
echo "=== Step 3: Search for 6-step workflow references ==="
rg -i "research.*write.*analyze.*optimize.*edit.*publish|Research.*Write.*Analyze.*Optimize.*Edit.*Publish" .agents/ --max-count 10

echo ""
echo "=== Step 4: Search for numbered workflow steps ==="
rg "^\d+\.\s+\*\*Research\*\*|^\d+\.\s+\*\*Write\*\*" .agents/ -g '*.md' -C 1

echo ""
echo "=== Step 5: List all markdown files in .agents/ ==="
find .agents -name "*.md" -type f | head -20

echo ""
echo "=== Step 6: Check for duplicate instruction patterns ==="
rg "Content analysis workflow|workflow.*steps|6.*step.*workflow" .agents/ -g '*.md'

Repository: marcusquinn/aidevops

Length of output: 2822


🏁 Script executed:

#!/bin/bash

echo "=== Reading full seo-write.md to check workflow steps ==="
cat -n .agents/scripts/commands/seo-write.md | head -100

echo ""
echo "=== Checking for all 6 steps in seo-write.md ==="
rg "Research|Write|Analyze|Optimize|Edit|Publish" .agents/scripts/commands/seo-write.md -n

Repository: marcusquinn/aidevops

Length of output: 2383


Align workflow definitions between seo.md and seo-write.md.

The Content analysis workflow at .agents/seo.md:304-309 (Research → Write → Analyze → Optimize → Edit → Publish) differs from the actual workflow in .agents/scripts/commands/seo-write.md:11-46 (Research → Context → Write → Analyze → Fix → Output). Align these workflows to ensure consistent documentation and prevent confusion across the automation framework.

🤖 Prompt for AI Agents
In @.agents/seo.md around lines 304 - 309, The workflow steps in .agents/seo.md
(Research → Write → Analyze → Optimize → Edit → Publish) do not match the actual
pipeline implemented in .agents/scripts/commands/seo-write.md (Research →
Context → Write → Analyze → Fix → Output); update .agents/seo.md to mirror the
real workflow names and order used in seo-write.md (replace "Write" with
"Context → Write", replace "Optimize" with "Fix", and replace "Publish" with
"Output"), or alternatively update seo-write.md to match the documented
sequence—ensure the step names (Research, Context, Write, Analyze, Fix, Output)
are used consistently across both files and adjust any referenced artifacts
(e.g., content/seo-writer.md, seo/seo-optimizer.md, content/editor.md) to
reflect the chosen canonical flow.

Comment on lines +85 to +91
## Revision Workflow

1. Run analysis
2. Fix critical placement gaps first
3. Adjust density if outside 1-2% range
4. Add LSI keywords for topical coverage
5. Re-run analysis to verify

⚠️ Potential issue | 🟡 Minor

Minor density range inconsistency in revision workflow.

Step 3 says "Adjust density if outside 1-2% range" but the density table above (lines 43–47) defines optimal as 1.2–1.8%. Consider aligning: "outside 1.2–1.8% range".

🤖 Prompt for AI Agents
In @.agents/seo/keyword-mapper.md around lines 85 - 91, Update the Revision
Workflow step 3 wording to match the density table by changing "Adjust density
if outside 1-2% range" to "Adjust density if outside 1.2–1.8% range" in the
"Revision Workflow" section (the list under the "## Revision Workflow" header)
so the density guidance is consistent with the optimal range defined earlier.
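The density check that step 3 implies can be sketched as follows. This is an illustrative helper, not the analyzer's actual code; the 1.2-1.8% band comes from the density table referenced above.

```python
def density_status(keyword_count, word_count, low=1.2, high=1.8):
    """Classify keyword density against the optimal 1.2-1.8% band."""
    density = keyword_count / word_count * 100 if word_count else 0.0
    if density < low:
        return density, "under-optimized"
    if density > high:
        return density, "possible stuffing"
    return density, "optimal"

print(density_status(15, 1000))  # (1.5, 'optimal')
print(density_status(25, 1000))  # (2.5, 'possible stuffing')
```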

Comment on lines +27 to +79
## On-Page SEO Checklist

### Title Tag

- [ ] Contains primary keyword (preferably near start)
- [ ] 50-60 characters
- [ ] Unique across site
- [ ] Compelling for click-through

### Meta Description

- [ ] Contains primary keyword
- [ ] 150-160 characters
- [ ] Includes call-to-action
- [ ] Unique and descriptive

### Headings

- [ ] Single H1 with primary keyword
- [ ] 4-6 H2 sections minimum
- [ ] 2-3 H2s contain keyword or variation
- [ ] Proper hierarchy (H1 > H2 > H3)
- [ ] Descriptive, not generic

### Content

- [ ] Primary keyword in first 100 words
- [ ] Keyword density 1-2%
- [ ] 2000+ words (check competitor benchmark)
- [ ] Natural keyword integration
- [ ] Secondary keywords included
- [ ] Answers search intent

### Links

- [ ] 3-5 internal links with descriptive anchors
- [ ] 2-3 external links to authority sources
- [ ] No broken links
- [ ] Links open appropriately (internal: same tab, external: new tab)

### Media

- [ ] Images have descriptive alt text with keyword where natural
- [ ] Images are compressed and properly sized
- [ ] At least 1 image per 500 words

### Technical

- [ ] URL contains keyword (short, descriptive)
- [ ] Schema markup where applicable
- [ ] Mobile-friendly layout
- [ ] Fast page load time


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, find these files and check their existence
echo "=== Locating files ==="
fd -e md "seo-optimizer|seo-writer" .agents/

echo ""
echo "=== Checking seo-optimizer.md content (lines 27-79) ==="
if [ -f ".agents/seo/seo-optimizer.md" ]; then
  sed -n '27,79p' .agents/seo/seo-optimizer.md
fi

echo ""
echo "=== Full seo-writer.md content ==="
if [ -f ".agents/content/seo-writer.md" ]; then
  cat .agents/content/seo-writer.md
fi

Repository: marcusquinn/aidevops

Length of output: 5318


Remove or reference the On-Page SEO Checklist to avoid duplication with seo-writer.md.

The checklist items (lines 27-79) substantially duplicate the "Content Requirements" table in .agents/content/seo-writer.md section 3—including identical specifications for:

  • Meta title (50-60 chars with keyword)
  • Meta description (150-160 chars with keyword)
  • Keyword density (1-2%)
  • H1 and H2 keyword placement
  • Word count (2000+ words)
  • Internal/external link counts and anchor text

Instead of repeating these requirements, reference seo-writer.md:Content Requirements and reserve seo-optimizer.md for optimization-specific guidance (e.g., improving existing pages, competitive analysis, ranking monitoring).

🤖 Prompt for AI Agents
In @.agents/seo/seo-optimizer.md around lines 27 - 79, Remove the duplicated
"On-Page SEO Checklist" in seo-optimizer.md and replace it with a concise
reference to .agents/content/seo-writer.md's "Content Requirements" (so all meta
title/description, H1/H2, keyword density, word count, and linking specs are
sourced there); then update seo-optimizer.md to focus on optimization-specific
guidance (e.g., improving existing pages, audit/checklist items for competitive
analysis, on-page A/B tests, performance monitoring, and ranking tracking) and
ensure any remaining checklist items are unique to optimization tasks and not
covered by seo-writer.md:Content Requirements.
