feat: integrate SEO Machine content analysis and writing workflows (t017) #599

marcusquinn merged 1 commit into main from
Conversation
…017) Add unified SEO content analyzer (Python) with readability scoring, keyword density analysis, search intent classification, and SEO quality rating. Adapted from TheCraigHewitt/seomachine (MIT License).

New subagents:
- content/: seo-writer, meta-creator, editor, internal-linker, context-templates
- seo/: content-analyzer, seo-optimizer, keyword-mapper

New commands: /seo-write, /seo-optimize, /seo-analyze-content

Updated seo.md, content.md, and subagent-index.toon with new capabilities.
Walkthrough

This PR implements a comprehensive SEO content creation and analysis system, introducing six new content subagents (seo-writer, meta-creator, editor, internal-linker, context-templates, content-analyzer), a multi-faceted Python analyzer tool for readability and keyword analysis, workflow documentation for SEO operations, and registry updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Content User
    participant Writer as SEO Writer<br/>(seo-writer.md)
    participant Analyzer as Content Analyzer<br/>(seo-content-analyzer.py)
    participant Optimizer as SEO Optimizer<br/>(seo-optimizer.md)
    participant Editor as Editor<br/>(editor.md)
    participant CMS as CMS/WordPress<br/>(Publish)
    User->>Writer: Provide keyword targets
    Writer->>Writer: Research & structure
    Writer->>Analyzer: Generate article (2000-3000+ words)
    Analyzer->>Analyzer: Readability score
    Analyzer->>Analyzer: Keyword density check
    Analyzer->>Analyzer: Intent classification
    Analyzer->>Analyzer: SEO quality rating
    Analyzer->>Optimizer: Return analysis report
    Optimizer->>Writer: Address critical issues
    Writer->>Editor: Apply fixes
    Editor->>Editor: Humanize & polish
    Editor->>CMS: Final content ready
    CMS->>User: Published
```
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics

Generated on: Sun Feb 8 06:38:46 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.agents/subagent-index.toon (1)
82-82: ⚠️ Potential issue | 🟡 Minor

TOON scripts count is off by one.

The header declares `scripts[44]` but there are now 45 entries (lines 83–127) after adding `seo-content-analyzer.py`. Update the count to `scripts[45]`.

```diff
-<!--TOON:scripts[44]{name,purpose}:
+<!--TOON:scripts[45]{name,purpose}:
```

Also applies to: 127-127
🤖 Fix all issues with AI agents
In @.agents/content/internal-linker.md:
- Around line 30-55: Extract the "Best Practices" and "Anchor Text Guidelines"
sections from internal-linker.md and consolidate them into one canonical
document titled "Internal Linking Best Practices" (the single-source
authoritative doc), then replace the full sections in internal-linker.md and all
other agent docs that currently duplicate this guidance with a short summary
plus a pointer/link to that canonical doc using a clear anchor (e.g., "Internal
Linking Best Practices"); update any in-repo references that referenced the
removed inline text to point to the canonical anchor and keep only a one-line
progressive-disclosure summary in the original files (e.g., "See Internal
Linking Best Practices for detailed guidance on anchor text, link frequency, and
linking patterns").
In @.agents/content/seo-writer.md:
- Around line 86-95: The example commands use an absolute home path for the
analyzer script; update the command invocations (the lines calling analyze,
readability, keywords, quality) to use the repository-relative script path
(replace "~/.aidevops/agents/scripts/seo-content-analyzer.py" with
"./.agents/scripts/seo-content-analyzer.py" or the correct relative path for
your deployment) so the commands for analyze, readability, keywords and quality
run from the repo root.
In @.agents/scripts/commands/seo-write.md:
- Around line 15-17: Update the command example in seo-write.md to reference the
repository script location instead of the home-directory path: replace the
invocation that points to "~/.aidevops/agents/scripts/seo-content-analyzer.py"
with a relative or repo-root path to ".agents/scripts/seo-content-analyzer.py"
(e.g., "../seo-content-analyzer.py" or
".agents/scripts/seo-content-analyzer.py") so the documented call to run python3
with the "intent" argument matches the actual script name
seo-content-analyzer.py and will execute correctly within the repo context.
In @.agents/scripts/seo-content-analyzer.py:
- Around line 248-252: The secondary_results entries are missing the 'keyword'
key because _analyze_keyword returns a dict without it, so when recommendations
later call s.get('keyword','?') they get '?'; fix by wrapping each secondary
result the same way the primary result is wrapped—when appending to
secondary_results replace appending the raw dict from
self._analyze_keyword(content, kw, ...) with appending {"keyword": kw,
**secondary} (use the same pattern used for the primary result), ensuring every
entry in secondary_results includes the 'keyword' field.
- Around line 682-683: The code assumes parsed["flags"].get("secondary") returns
a string but parse_args can set it to a boolean (True) when the flag is passed
without a value, causing AttributeError on secondary_str.split. Fix by guarding
the value before splitting: check the retrieved secondary_str from
parsed["flags"] (the variable named secondary_str) and only call .split(",")
when isinstance(secondary_str, str); otherwise treat it as empty (so secondary
becomes []). Update the logic that builds secondary (the secondary variable) to
coerce non-string flag values to an empty string before splitting.
- Line 27: The import "Counter" from the collections module is unused; remove
the unused symbol from the imports (delete or modify the line containing "from
collections import Counter") so the script no longer imports Counter
unnecessarily and static analysis warnings stop; alternatively, if you intended
to use Counter, implement its usage in the relevant function instead of leaving
it unused.
- Around line 283-287: The keyword counting is using substring matches
(cl.count(kl)) which inflates counts for short keywords; change _analyze_keyword
to perform word-boundary matching using regex (use re.escape(keyword) with r'\b'
boundaries and re.IGNORECASE or compile once) and compute count from
len(re.findall(...)) so density is correct; apply the same word-boundary regex
approach in _detect_stuffing and in the heading checks (the functions/methods
that inspect headings) to ensure matches only on whole words and remain
case-insensitive, and reuse or cache the compiled pattern for each keyword to
avoid repeated compilation.
In @.agents/seo.md:
- Around line 304-309: The workflow steps in .agents/seo.md (Research → Write →
Analyze → Optimize → Edit → Publish) do not match the actual pipeline
implemented in .agents/scripts/commands/seo-write.md (Research → Context → Write
→ Analyze → Fix → Output); update .agents/seo.md to mirror the real workflow
names and order used in seo-write.md (replace "Write" with "Context → Write",
replace "Optimize" with "Fix", and replace "Publish" with "Output"), or
alternatively update seo-write.md to match the documented sequence—ensure the
step names (Research, Context, Write, Analyze, Fix, Output) are used
consistently across both files and adjust any referenced artifacts (e.g.,
content/seo-writer.md, seo/seo-optimizer.md, content/editor.md) to reflect the
chosen canonical flow.
In @.agents/seo/keyword-mapper.md:
- Around line 85-91: Update the Revision Workflow step 3 wording to match the
density table by changing "Adjust density if outside 1-2% range" to "Adjust
density if outside 1.2–1.8% range" in the "Revision Workflow" section (the list
under the "## Revision Workflow" header) so the density guidance is consistent
with the optimal range defined earlier.
In @.agents/seo/seo-optimizer.md:
- Around line 27-79: Remove the duplicated "On-Page SEO Checklist" in
seo-optimizer.md and replace it with a concise reference to
.agents/content/seo-writer.md's "Content Requirements" (so all meta
title/description, H1/H2, keyword density, word count, and linking specs are
sourced there); then update seo-optimizer.md to focus on optimization-specific
guidance (e.g., improving existing pages, audit/checklist items for competitive
analysis, on-page A/B tests, performance monitoring, and ranking tracking) and
ensure any remaining checklist items are unique to optimization tasks and not
covered by seo-writer.md:Content Requirements.
🧹 Nitpick comments (7)
.agents/scripts/seo-content-analyzer.py (5)
66-71: Code-block stripping regex won't handle inline code or nested backticks.

Line 69: The regex ````r"```[^`]*```"```` strips fenced code blocks but misses:

- Inline code (`` `code` ``)
- Code blocks containing backtick characters inside

For SEO content analysis this is unlikely to be a real problem, but worth noting if accuracy matters on technical articles.

Optional: also strip inline code

````diff
 text = re.sub(r"```[^`]*```", "", text)
+text = re.sub(r"`[^`]+`", "", text)
````
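For illustration, the two regexes in sequence behave like this (a standalone sketch; `strip_code` and the sample text are hypothetical, not the analyzer's actual helper):

````python
import re

def strip_code(text: str) -> str:
    # Strip fenced blocks first so the inline-code pattern cannot
    # match across a triple-backtick fence; then drop inline spans.
    text = re.sub(r"```[^`]*```", "", text)
    text = re.sub(r"`[^`]+`", "", text)
    return text

fence = "`" * 3  # build the fence string to keep this sample readable
sample = f"Run `pip install x` first.\n{fence}\nprint('hi')\n{fence}\nDone."
cleaned = strip_code(sample)
print(cleaned)
````

The order matters: running the inline pattern first would eat the fence delimiters and leave block contents behind.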
385-397: Annotate class-level signal lists with `ClassVar`.

Ruff RUF012 flags these mutable class attributes. Since they're constants, annotating them communicates intent and silences the linter.

```diff
+from typing import ClassVar
 ...
-    INFO_SIGNALS = [
+    INFO_SIGNALS: ClassVar[List[str]] = [
```

Apply similarly to `NAV_SIGNALS`, `TRANS_SIGNALS`, `COMMERCIAL_SIGNALS`.
542-545: Readability is permanently hardcoded at +10 in the quality score.

The comment explains readability gets 0.10 weight but isn't computed here. Since the `analyze` command runs both `ReadabilityScorer` and `SEOQualityRater` independently, the quality score always assumes a fixed 10/10 for readability regardless of actual readability results. Consider wiring the actual readability score into `rate()` for a more accurate composite score, or document this as a known simplification.
462-563: `rate()` method has high cyclomatic complexity (24) and 86 lines.

Static analysis flags this consistently. The method handles 5 scoring categories inline. To maintain that A-grade zero-tech-debt standard, consider extracting each category into a private method (e.g., `_score_content`, `_score_structure`, `_score_keywords`, `_score_meta`, `_score_links`) that each return `(score, issues, warnings)`.
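One way to sketch that decomposition (the category methods, thresholds, and equal-weight average below are illustrative assumptions, not the script's actual scoring):

```python
from typing import List, Tuple

ScoreResult = Tuple[float, List[str], List[str]]  # (score, issues, warnings)

class QualityRaterSketch:
    """Each category scorer is a small private method; rate() only aggregates."""

    def _score_content(self, word_count: int) -> ScoreResult:
        # Hypothetical threshold, for illustration only
        if word_count >= 2000:
            return 10.0, [], []
        return 5.0, ["Content under 2000 words"], []

    def _score_links(self, internal_links: int) -> ScoreResult:
        if internal_links >= 3:
            return 10.0, [], []
        return 6.0, [], ["Fewer than 3 internal links"]

    def rate(self, word_count: int, internal_links: int) -> dict:
        scores, issues, warnings = [], [], []
        for score, cat_issues, cat_warnings in (
            self._score_content(word_count),
            self._score_links(internal_links),
        ):
            scores.append(score)
            issues.extend(cat_issues)
            warnings.extend(cat_warnings)
        return {
            "score": sum(scores) / len(scores),
            "issues": issues,
            "warnings": warnings,
        }
```

With each scorer returning the same `(score, issues, warnings)` shape, `rate()` collapses to a short aggregation loop and each category can be tested in isolation.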
130-159: Passive voice detection is a rough heuristic — document the limitation.

The check for passive indicators (`was`, `were`, `is`, `are`, etc.) combined with words ending in `-ed`/`-en` will produce false positives (e.g., "The team is experienced" flagged as passive). This is acceptable for an SEO heuristic, but a brief inline comment noting the approximation would help future maintainers understand the trade-off vs. installing `textstat` or a proper NLP library.

.agents/content/editor.md (1)
49-58: Consider reducing pattern list duplication.

Lines 51-56 list specific AI patterns (delve, tapestry, leverage, etc.) but then defer to `content/humanise.md` for the complete list. This creates potential duplication and maintenance burden.

Consider either:

- Removing the example list entirely and only referencing `content/humanise.md`
- Stating "See `content/humanise.md` for the complete pattern detection list" without examples

♻️ Streamlined approach

```diff
 ### 4. Robotic vs Human Patterns

-Detect and flag:
-
-- **AI vocabulary**: delve, tapestry, landscape, leverage, utilize, facilitate
-- **Filler phrases**: "It's worth noting that", "In today's digital age"
-- **Rule of three**: Excessive use of three-item lists
-- **Em dash overuse**: More than 2-3 per article
-- **Hedging**: "might", "could potentially", "it's possible that"
-- **Promotional language**: "game-changer", "revolutionary", "cutting-edge"
-
-See `content/humanise.md` for the complete pattern list.
+Detect and flag AI-generated patterns. See `content/humanise.md` for the complete detection patterns and removal strategies.
```

As per coding guidelines, use file references rather than duplicating content.
.agents/scripts/commands/seo-write.md (1)
28-34: Consider referencing seo-writer.md instead of duplicating guidelines.

Lines 28-34 list writing guidelines that appear to duplicate content from `content/seo-writer.md` (referenced on line 49). This creates maintenance burden if guidelines change.

♻️ Reference instead of duplicate

```diff
-3. **Write**: Follow `content/seo-writer.md` guidelines to create the article:
-   - 2,000-3,000+ words
-   - Primary keyword in H1, first 100 words, 2-3 H2s
-   - 1-2% keyword density
-   - 3-5 internal links, 2-3 external links
-   - Meta title (50-60 chars) and description (150-160 chars)
-   - Grade 8-10 reading level
+3. **Write**: Follow `content/seo-writer.md` guidelines to create the article
+   (see Content Requirements table in seo-writer.md for complete specs)
```

As per coding guidelines, use file:line references rather than duplicating authoritative content.
```markdown
| Type | Purpose | Example |
|------|---------|---------|
| **Contextual** | Natural in-content links | "Learn more about [keyword research](/seo/keyword-research)" |
| **Navigational** | Guide user journey | "Next step: [setting up analytics](/guides/analytics)" |
| **Hub/Spoke** | Connect pillar to cluster | Pillar page links to all subtopic pages |
| **Related** | Cross-reference similar content | "See also: [related topic](/blog/related)" |

### Best Practices

- **3-5 internal links** per 2000-word article
- **Descriptive anchor text** - never "click here" or "read more"
- **Keyword-rich anchors** - use target keyword of destination page
- **Vary anchor text** - don't use identical anchors for same destination
- **Link early** - place important links in first half of content
- **Deep links** - link to specific pages, not just homepage/categories
- **Bidirectional** - if A links to B, consider B linking back to A

### Anchor Text Guidelines

| Do | Don't |
|----|-------|
| "comprehensive keyword research guide" | "click here" |
| "podcast hosting comparison" | "this article" |
| "step-by-step SEO audit process" | "read more" |
| Natural sentence integration | Forced keyword stuffing |
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for potential duplicate internal linking instructions
echo "Searching for duplicate anchor text guidelines..."
rg -i "click here|read more|descriptive anchor" .agents/ -g '*.md' --no-filename -C 1
echo ""
echo "Searching for duplicate link density guidance..."
rg -i "3-5 (internal )?links" .agents/ -g '*.md' --no-filename -C 1
echo ""
echo "Searching for duplicate keyword-rich anchor guidance..."
rg -i "keyword.rich anchor" .agents/ -g '*.md' --no-filename -C 1
echo ""
echo "Searching for bidirectional linking references..."
rg -i "bidirectional" .agents/ -g '*.md' --no-filename -C 1
```

Repository: marcusquinn/aidevops
Length of output: 3900
Consolidate internal linking guidance - duplicated across multiple .agents/ files.
The Best Practices and Anchor Text Guidelines sections in internal-linker.md duplicate existing instructions scattered throughout .agents/ documentation. The "3-5 internal links," "descriptive anchor text," "click here/read more anti-patterns," and "keyword-rich anchors" guidance appears in multiple files (content checklists, SEO guides, and other agent documentation), violating the single-source-of-truth principle.
Move this internal linking guidance to a single authoritative location and update other files to reference it using progressive disclosure with pointers rather than inline duplication.
🤖 Prompt for AI Agents
In @.agents/content/internal-linker.md around lines 30 - 55, Extract the "Best
Practices" and "Anchor Text Guidelines" sections from internal-linker.md and
consolidate them into one canonical document titled "Internal Linking Best
Practices" (the single-source authoritative doc), then replace the full sections
in internal-linker.md and all other agent docs that currently duplicate this
guidance with a short summary plus a pointer/link to that canonical doc using a
clear anchor (e.g., "Internal Linking Best Practices"); update any in-repo
references that referenced the removed inline text to point to the canonical
anchor and keep only a one-line progressive-disclosure summary in the original
files (e.g., "See Internal Linking Best Practices for detailed guidance on
anchor text, link frequency, and linking patterns").
```bash
# Full analysis
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py analyze article.md \
  --keyword "primary keyword" --secondary "kw1,kw2"

# Individual checks
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py readability article.md
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py keywords article.md --keyword "primary keyword"
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py quality article.md \
  --keyword "primary keyword" --meta-title "Title" --meta-desc "Description"
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Locate and verify seo-content-analyzer.py
echo "=== Searching for seo-content-analyzer.py ==="
if script_path=$(fd -t f "seo-content-analyzer.py" . 2>/dev/null); then
  echo "Found at: $script_path"
  echo ""
  echo "=== Script content (first 150 lines) ==="
  head -150 "$script_path"
  echo ""
  echo "=== Checking for supported commands/functions ==="
  rg -i "(def|command|argparse|subcommand)" "$script_path" -A 2 | head -80
else
  echo "✗ Script not found in repository"
fi
echo ""
echo "=== Also check .agents directory structure ==="
find .agents -name "*analyzer*" -o -name "*seo*" | head -20
```

Repository: marcusquinn/aidevops
Length of output: 10739
🏁 Script executed:
```bash
#!/bin/bash
# Check main() function and command parsing
echo "=== Looking for main function and command parsing ==="
rg -A 40 "^def main" .agents/scripts/seo-content-analyzer.py
echo ""
echo "=== Check for argparse or sys.argv handling ==="
rg -B 5 -A 10 "(argparse|sys.argv\[1\])" .agents/scripts/seo-content-analyzer.py | head -100
echo ""
echo "=== Verify quality command parameter names ==="
rg -A 20 'quality|meta-title|meta-desc|meta_title|meta_description' .agents/scripts/seo-content-analyzer.py | head -50
```

Repository: marcusquinn/aidevops
Length of output: 3269
🏁 Script executed:
```bash
#!/bin/bash
# Check quality command implementation and parse_args function
echo "=== Quality command implementation ==="
rg -A 30 'elif cmd == "quality"' .agents/scripts/seo-content-analyzer.py
echo ""
echo "=== parse_args function to verify flag parsing ==="
rg -B 5 -A 50 "^def parse_args" .agents/scripts/seo-content-analyzer.py
```

Repository: marcusquinn/aidevops
Length of output: 2807
Update script paths to match repository location.
The commands (analyze, readability, keywords, quality) and their parameters are correctly supported. However, the path prefix ~/.aidevops/agents/scripts/seo-content-analyzer.py should be ./.agents/scripts/seo-content-analyzer.py (relative to repository root) or the appropriate relative path for deployment context.
🤖 Prompt for AI Agents
In @.agents/content/seo-writer.md around lines 86 - 95, The example commands use
an absolute home path for the analyzer script; update the command invocations
(the lines calling analyze, readability, keywords, quality) to use the
repository-relative script path (replace
"~/.aidevops/agents/scripts/seo-content-analyzer.py" with
"./.agents/scripts/seo-content-analyzer.py" or the correct relative path for
your deployment) so the commands for analyze, readability, keywords and quality
run from the repo root.
```bash
python3 ~/.aidevops/agents/scripts/seo-content-analyzer.py intent "$ARGUMENTS"
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Step 1: Check if seo-write.md exists and view lines 15-17
echo "=== Checking seo-write.md ==="
if fd -t f "seo-write.md" .agents/; then
  echo "✓ File found"
  echo "--- Lines 15-17 ---"
  sed -n '15,17p' "$(fd -t f "seo-write.md" .agents/)" | cat -n
else
  echo "✗ seo-write.md not found"
fi
# Step 2: Check for seo-content-analyzer.py in .agents/
echo ""
echo "=== Checking seo-content-analyzer.py ==="
if fd -t f "seo-content-analyzer.py" .agents/; then
  echo "✓ Script found in .agents/"
  script_file=$(fd -t f "seo-content-analyzer.py" .agents/)
  echo "Location: $script_file"
else
  echo "✗ Script not found in .agents/"
fi
# Step 3: Check the broader repository
echo ""
echo "=== Broader search for seo-content-analyzer.py ==="
fd -t f "seo-content-analyzer.py"
```
Length of output: 538
🏁 Script executed:
```bash
#!/bin/bash
# Step 1: Check seo-content-analyzer.py structure and available commands
echo "=== seo-content-analyzer.py content ==="
wc -l .agents/scripts/seo-content-analyzer.py
# Step 2: Check for 'intent' command in the script
echo ""
echo "=== Searching for 'intent' command ==="
grep -n "intent" .agents/scripts/seo-content-analyzer.py | head -20
# Step 3: Check if script has argparse or command structure
echo ""
echo "=== Script structure (first 50 lines) ==="
head -50 .agents/scripts/seo-content-analyzer.py
```

Repository: marcusquinn/aidevops
Length of output: 2471
Fix script path reference to match repository structure.
The script IS included in this PR at .agents/scripts/seo-content-analyzer.py, and the 'intent' command is properly implemented. However, the command in seo-write.md (line 16) references ~/.aidevops/agents/scripts/seo-content-analyzer.py, which doesn't align with the actual repository structure at .agents/scripts/. This path should either use a relative path (e.g., ../seo-content-analyzer.py) or a consistent reference to the actual location if deploying within the repository context. The current path reference will cause execution failures unless ~/.aidevops is explicitly configured as a deployment target.
🤖 Prompt for AI Agents
In @.agents/scripts/commands/seo-write.md around lines 15 - 17, Update the
command example in seo-write.md to reference the repository script location
instead of the home-directory path: replace the invocation that points to
"~/.aidevops/agents/scripts/seo-content-analyzer.py" with a relative or
repo-root path to ".agents/scripts/seo-content-analyzer.py" (e.g.,
"../seo-content-analyzer.py" or ".agents/scripts/seo-content-analyzer.py") so
the documented call to run python3 with the "intent" argument matches the actual
script name seo-content-analyzer.py and will execute correctly within the repo
context.
```python
import re
import json
import os
from collections import Counter
```
Remove unused Counter import.
Flagged by static analysis — Counter from collections is imported but never used anywhere in the script.
```diff
-from collections import Counter
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[warning] 27-27: .agents/scripts/seo-content-analyzer.py#L27
'collections.Counter' imported but unused (F401)
[warning] 27-27: .agents/scripts/seo-content-analyzer.py#L27
Unused Counter imported from collections
🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py at line 27, The import "Counter"
from the collections module is unused; remove the unused symbol from the imports
(delete or modify the line containing "from collections import Counter") so the
script no longer imports Counter unnecessarily and static analysis warnings
stop; alternatively, if you intended to use Counter, implement its usage in the
relevant function instead of leaving it unused.
```python
secondary_results = []
for kw in secondary_keywords:
    secondary_results.append(
        self._analyze_keyword(content, kw, word_count, sections, target_density * 0.5)
    )
```
Bug: secondary keyword name always shows '?' in recommendations.
_analyze_keyword returns a dict without a 'keyword' key. On line 258, you wrap the primary result with {"keyword": primary_keyword, **primary}, but secondary results on line 250 are appended raw. So s.get('keyword', '?') on line 374 always falls back to '?'.
Fix: include keyword name in secondary results
```diff
 secondary_results = []
 for kw in secondary_keywords:
-    secondary_results.append(
-        self._analyze_keyword(content, kw, word_count, sections, target_density * 0.5)
-    )
+    result = self._analyze_keyword(content, kw, word_count, sections, target_density * 0.5)
+    secondary_results.append({"keyword": kw, **result})
```

Also applies to: 372-374
🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py around lines 248 - 252, The
secondary_results entries are missing the 'keyword' key because _analyze_keyword
returns a dict without it, so when recommendations later call
s.get('keyword','?') they get '?'; fix by wrapping each secondary result the
same way the primary result is wrapped—when appending to secondary_results
replace appending the raw dict from self._analyze_keyword(content, kw, ...) with
appending {"keyword": kw, **secondary} (use the same pattern used for the
primary result), ensuring every entry in secondary_results includes the
'keyword' field.
```python
def _analyze_keyword(self, content, keyword, word_count, sections, target) -> Dict[str, Any]:
    cl = content.lower()
    kl = keyword.lower()
    count = cl.count(kl)
    density = (count / word_count * 100) if word_count > 0 else 0
```
Keyword counting uses substring match — inflates density for short keywords.
cl.count(kl) on line 286 is a plain substring search. A keyword like "art" would match inside "start", "article", "smart", etc., inflating the density count and potentially triggering false stuffing warnings.
Consider word-boundary matching:
Proposed fix using regex word boundaries
```diff
-        cl = content.lower()
-        kl = keyword.lower()
-        count = cl.count(kl)
+        kl = keyword.lower()
+        pattern = r'\b' + re.escape(kl) + r'\b'
+        count = len(re.findall(pattern, content.lower()))
```

The same substring issue exists in _detect_stuffing (line 342) and heading checks (lines 296, 300) — apply consistently.
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[warning] 283-283: .agents/scripts/seo-content-analyzer.py#L283
Method _analyze_keyword has a cyclomatic complexity of 12 (limit is 8)
🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py around lines 283 - 287, The keyword
counting is using substring matches (cl.count(kl)) which inflates counts for
short keywords; change _analyze_keyword to perform word-boundary matching using
regex (use re.escape(keyword) with r'\b' boundaries and re.IGNORECASE or compile
once) and compute count from len(re.findall(...)) so density is correct; apply
the same word-boundary regex approach in _detect_stuffing and in the heading
checks (the functions/methods that inspect headings) to ensure matches only on
whole words and remain case-insensitive, and reuse or cache the compiled pattern
for each keyword to avoid repeated compilation.
```python
secondary_str = parsed["flags"].get("secondary", "")
secondary = [s.strip() for s in secondary_str.split(",") if s.strip()] if secondary_str else []
```
--secondary without a value causes AttributeError.
If a user passes --secondary --keyword "foo", parse_args sets secondary to True (boolean). Line 683 then calls True.split(",") which raises AttributeError.
Guard against non-string flag values
```diff
 secondary_str = parsed["flags"].get("secondary", "")
+if not isinstance(secondary_str, str):
+    secondary_str = ""
 secondary = [s.strip() for s in secondary_str.split(",") if s.strip()] if secondary_str else []
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
secondary_str = parsed["flags"].get("secondary", "")
if not isinstance(secondary_str, str):
    secondary_str = ""
secondary = [s.strip() for s in secondary_str.split(",") if s.strip()] if secondary_str else []
```
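A quick sanity check of the guarded parsing (standalone sketch; the `flags` dicts mimic the values parse_args is described to produce, and `parse_secondary` is a hypothetical wrapper):

```python
def parse_secondary(flags: dict) -> list:
    secondary_str = flags.get("secondary", "")
    if not isinstance(secondary_str, str):
        secondary_str = ""  # e.g. a bare --secondary stored as True
    return [s.strip() for s in secondary_str.split(",") if s.strip()] if secondary_str else []

print(parse_secondary({"secondary": "kw1, kw2"}))  # ['kw1', 'kw2']
print(parse_secondary({"secondary": True}))        # [] rather than AttributeError
print(parse_secondary({}))                         # []
```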
🤖 Prompt for AI Agents
In @.agents/scripts/seo-content-analyzer.py around lines 682 - 683, The code
assumes parsed["flags"].get("secondary") returns a string but parse_args can set
it to a boolean (True) when the flag is passed without a value, causing
AttributeError on secondary_str.split. Fix by guarding the value before
splitting: check the retrieved secondary_str from parsed["flags"] (the variable
named secondary_str) and only call .split(",") when isinstance(secondary_str,
str); otherwise treat it as empty (so secondary becomes []). Update the logic
that builds secondary (the secondary variable) to coerce non-string flag values
to an empty string before splitting.
1. **Research**: `/keyword-research` + `/autocomplete-research`
2. **Write**: Use `content/seo-writer.md` with keyword targets
3. **Analyze**: `seo-content-analyzer.py analyze` for quality score
4. **Optimize**: Address issues from `seo/seo-optimizer.md`
5. **Edit**: `content/editor.md` for human voice
6. **Publish**: Via WordPress or CMS
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
echo "=== Step 1: Check if seo.md exists and examine lines 304-309 ==="
if [ -f ".agents/seo.md" ]; then
  echo "seo.md found. Showing lines 304-309:"
  sed -n '304,309p' .agents/seo.md
else
  echo "seo.md not found"
fi
echo ""
echo "=== Step 2: Check if seo-write.md exists ==="
if [ -f ".agents/scripts/commands/seo-write.md" ]; then
  echo "seo-write.md found"
elif [ -f ".agents/seo-write.md" ]; then
  echo "seo-write.md found at .agents/seo-write.md"
else
  echo "Searching for seo-write.md..."
  find .agents -name "*seo-write*" -o -name "*seo*write*"
fi
echo ""
echo "=== Step 3: Search for 6-step workflow references ==="
rg -i "research.*write.*analyze.*optimize.*edit.*publish|Research.*Write.*Analyze.*Optimize.*Edit.*Publish" .agents/ --max-count 10
echo ""
echo "=== Step 4: Search for numbered workflow steps ==="
rg "^\d+\.\s+\*\*Research\*\*|^\d+\.\s+\*\*Write\*\*" .agents/ -g '*.md' -C 1
echo ""
echo "=== Step 5: List all markdown files in .agents/ ==="
find .agents -name "*.md" -type f | head -20
echo ""
echo "=== Step 6: Check for duplicate instruction patterns ==="
rg "Content analysis workflow|workflow.*steps|6.*step.*workflow" .agents/ -g '*.md'
```
Length of output: 2822
🏁 Script executed:
#!/bin/bash
echo "=== Reading full seo-write.md to check workflow steps ==="
cat -n .agents/scripts/commands/seo-write.md | head -100
echo ""
echo "=== Checking for all 6 steps in seo-write.md ==="
rg "Research|Write|Analyze|Optimize|Edit|Publish" .agents/scripts/commands/seo-write.md -nRepository: marcusquinn/aidevops
Length of output: 2383
Align workflow definitions between seo.md and seo-write.md.
The Content analysis workflow at .agents/seo.md:304-309 (Research → Write → Analyze → Optimize → Edit → Publish) differs from the actual workflow in .agents/scripts/commands/seo-write.md:11-46 (Research → Context → Write → Analyze → Fix → Output). Align these workflows to ensure consistent documentation and prevent confusion across the automation framework.
🤖 Prompt for AI Agents
In @.agents/seo.md around lines 304-309: the workflow steps in .agents/seo.md
(Research → Write → Analyze → Optimize → Edit → Publish) do not match the actual
pipeline implemented in .agents/scripts/commands/seo-write.md (Research →
Context → Write → Analyze → Fix → Output); update .agents/seo.md to mirror the
real workflow names and order used in seo-write.md (replace "Write" with
"Context → Write", replace "Optimize" with "Fix", and replace "Publish" with
"Output"), or alternatively update seo-write.md to match the documented
sequence—ensure the step names (Research, Context, Write, Analyze, Fix, Output)
are used consistently across both files and adjust any referenced artifacts
(e.g., content/seo-writer.md, seo/seo-optimizer.md, content/editor.md) to
reflect the chosen canonical flow.
## Revision Workflow

1. Run analysis
2. Fix critical placement gaps first
3. Adjust density if outside 1-2% range
4. Add LSI keywords for topical coverage
5. Re-run analysis to verify
Minor density range inconsistency in revision workflow.
Step 3 says "Adjust density if outside 1-2% range" but the density table above (lines 43–47) defines optimal as 1.2–1.8%. Consider aligning: "outside 1.2–1.8% range".
🤖 Prompt for AI Agents
In @.agents/seo/keyword-mapper.md around lines 85-91: update the Revision
Workflow step 3 wording to match the density table by changing "Adjust density
if outside 1-2% range" to "Adjust density if outside 1.2–1.8% range" in the
"Revision Workflow" section (the list under the "## Revision Workflow" header)
so the density guidance is consistent with the optimal range defined earlier.
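The density check in steps 1-3 above reduces to a word-ratio calculation; a simplified illustration (not the analyzer's exact tokenizer, and it will overcount keywords that appear as substrings of other words):

```python
import re

def keyword_density(text, keyword):
    """Percentage of the text's words accounted for by occurrences of `keyword`."""
    words = re.findall(r"[A-Za-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = len(re.findall(re.escape(keyword.lower()), text.lower()))
    # Multi-word keywords consume len(keyword.split()) words per hit.
    return 100.0 * hits * len(keyword.split()) / len(words)
```

A 100-word draft mentioning the keyword twice scores 2.0%, which would trip the "outside the optimal range" warning under the stricter 1.2-1.8% table.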
## On-Page SEO Checklist

### Title Tag

- [ ] Contains primary keyword (preferably near start)
- [ ] 50-60 characters
- [ ] Unique across site
- [ ] Compelling for click-through

### Meta Description

- [ ] Contains primary keyword
- [ ] 150-160 characters
- [ ] Includes call-to-action
- [ ] Unique and descriptive

### Headings

- [ ] Single H1 with primary keyword
- [ ] 4-6 H2 sections minimum
- [ ] 2-3 H2s contain keyword or variation
- [ ] Proper hierarchy (H1 > H2 > H3)
- [ ] Descriptive, not generic

### Content

- [ ] Primary keyword in first 100 words
- [ ] Keyword density 1-2%
- [ ] 2000+ words (check competitor benchmark)
- [ ] Natural keyword integration
- [ ] Secondary keywords included
- [ ] Answers search intent

### Links

- [ ] 3-5 internal links with descriptive anchors
- [ ] 2-3 external links to authority sources
- [ ] No broken links
- [ ] Links open appropriately (internal: same tab, external: new tab)

### Media

- [ ] Images have descriptive alt text with keyword where natural
- [ ] Images are compressed and properly sized
- [ ] At least 1 image per 500 words

### Technical

- [ ] URL contains keyword (short, descriptive)
- [ ] Schema markup where applicable
- [ ] Mobile-friendly layout
- [ ] Fast page load time
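The Title Tag and Meta Description length rules above lend themselves to a quick automated check; a sketch with the ranges copied from the checklist (the function name is hypothetical):

```python
def check_meta_lengths(title, description):
    """Return checklist violations for title (50-60 chars) and meta description (150-160 chars)."""
    issues = []
    if not 50 <= len(title) <= 60:
        issues.append(f"title is {len(title)} chars (want 50-60)")
    if not 150 <= len(description) <= 160:
        issues.append(f"meta description is {len(description)} chars (want 150-160)")
    return issues
```

An empty return value means both fields pass; anything else lists the specific out-of-range fields for the revision loop.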
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# First, find these files and check their existence
echo "=== Locating files ==="
fd -e md "seo-optimizer|seo-writer" .agents/
echo ""
echo "=== Checking seo-optimizer.md content (lines 27-79) ==="
if [ -f ".agents/seo/seo-optimizer.md" ]; then
  sed -n '27,79p' .agents/seo/seo-optimizer.md
fi
echo ""
echo "=== Full seo-writer.md content ==="
if [ -f ".agents/content/seo-writer.md" ]; then
  cat .agents/content/seo-writer.md
fi
```

Repository: marcusquinn/aidevops
Length of output: 5318
Remove or reference the On-Page SEO Checklist to avoid duplication with seo-writer.md.
The checklist items (lines 27-79) substantially duplicate the "Content Requirements" table in .agents/content/seo-writer.md section 3—including identical specifications for:
- Meta title (50-60 chars with keyword)
- Meta description (150-160 chars with keyword)
- Keyword density (1-2%)
- H1 and H2 keyword placement
- Word count (2000+ words)
- Internal/external link counts and anchor text
Instead of repeating these requirements, reference seo-writer.md:Content Requirements and reserve seo-optimizer.md for optimization-specific guidance (e.g., improving existing pages, competitive analysis, ranking monitoring).
🤖 Prompt for AI Agents
In @.agents/seo/seo-optimizer.md around lines 27-79: remove the duplicated
"On-Page SEO Checklist" in seo-optimizer.md and replace it with a concise
reference to .agents/content/seo-writer.md's "Content Requirements" (so all meta
title/description, H1/H2, keyword density, word count, and linking specs are
sourced there); then update seo-optimizer.md to focus on optimization-specific
guidance (e.g., improving existing pages, audit/checklist items for competitive
analysis, on-page A/B tests, performance monitoring, and ranking tracking) and
ensure any remaining checklist items are unique to optimization tasks and not
covered by seo-writer.md:Content Requirements.



Summary
Files Changed (15 new files)
- .agents/scripts/seo-content-analyzer.py
- .agents/seo/content-analyzer.md
- .agents/seo/seo-optimizer.md
- .agents/seo/keyword-mapper.md
- .agents/content/seo-writer.md
- .agents/content/editor.md
- .agents/content/internal-linker.md
- .agents/content/meta-creator.md
- .agents/content/context-templates.md
- .agents/scripts/commands/seo-analyze-content.md
- .agents/scripts/commands/seo-optimize.md
- .agents/scripts/commands/seo-write.md
- .agents/seo.md
- .agents/content.md
- .agents/subagent-index.toon

Summary by CodeRabbit
Release Notes
New Features
Documentation