
chore: add a bunch of automations#1132

Merged
Crunchyman-ralph merged 4 commits into next from chore/add.claude.workflows
Aug 13, 2025

Conversation


@Crunchyman-ralph Crunchyman-ralph commented Aug 13, 2025

What type of PR is this?

  • 🐛 Bug fix
  • ✨ Feature
  • 🔌 Integration
  • 📝 Docs
  • 🧹 Refactor
  • Other:

Description

Related Issues

How to Test This

# Example commands or steps

Expected result:

Contributor Checklist

  • Created changeset: npm run changeset
  • Tests pass: npm test
  • Format check passes: npm run format-check (or npm run format to fix)
  • Addressed CodeRabbit comments (if any)
  • Linked related issues (if any)
  • Manually tested the changes

Changelog Entry


For Maintainers

  • PR title follows conventional commits
  • Target branch correct
  • Labels added
  • Milestone assigned (if applicable)

Summary by CodeRabbit

  • New Features

    • Automated detection and commenting of possible duplicate issues.
    • Daily auto-closing of confirmed duplicates with notifications.
    • Manual backfill workflow to process older issues.
    • Automatic issue triage and labeling on new issues.
    • On-demand Claude assistance when mentioned in issues/comments.
    • Logging of issue opened/closed events to analytics.
    • Weekly summary of issue and PR metrics posted to Discord.
  • Documentation

    • Added a command specification for deduplicating GitHub issues.
  • Chores

    • Added supporting automation scripts and GitHub Actions to enable the above.


changeset-bot bot commented Aug 13, 2025

⚠️ No Changeset found

Latest commit: 65e0be5

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.



coderabbitai bot commented Aug 13, 2025

Walkthrough

Adds a Claude command spec for deduping issues, two Node.js scripts to detect/backfill/auto-close duplicates, and multiple GitHub Actions workflows for dedupe runs, triage, telemetry (Statsig), backfill, scheduled auto-close, and weekly Discord metrics.

Changes

Cohort / File(s) Summary
Claude command spec
.claude/commands/dedupe.md
New Claude command specification describing a multi-agent dedupe workflow, pre-check rules, a strict comment template for reporting up to three duplicates, and usage notes (gh-only).
Automation scripts
.github/scripts/auto-close-duplicates.mjs, .github/scripts/backfill-duplicate-comments.mjs
New Node.js scripts: auto-close scans bot duplicate comments and closes issues after a 3-day cutoff (labels, state_reason, comment); backfill scans historical issues and optionally dispatches the dedupe workflow (DRY_RUN support, delays).
Workflows — Dedupe pipeline
.github/workflows/claude-dedupe-issues.yml, .github/workflows/auto-close-duplicates.yml, .github/workflows/backfill-duplicate-comments.yml
New workflows: Claude dedupe on issue open/dispatch, scheduled/manual auto-close running the auto-close script, and backfill workflow to run the backfill script (inputs: days_back, dry_run).
Workflows — Claude automations
.github/workflows/claude.yml, .github/workflows/claude-issue-triage.yml
New workflows: general Claude Code trigger when content mentions "@claude"; issue triage workflow using Claude Code + MCP to apply labels only (no comments).
Workflows — Telemetry & reporting
.github/workflows/log-issue-events.yml, .github/workflows/weekly-metrics-discord.yml
New workflows to log issue opened/closed events to Statsig (with enrichment for closures) and to compile/send weekly issue/PR metrics to Discord via webhook.
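The auto-close script's core decision is the 3-day cutoff on the bot's duplicate comment. A minimal sketch of that check, assuming hypothetical names (`isPastCutoff`, `CUTOFF_DAYS` are illustrative, not the script's actual API):

```javascript
// Illustrative sketch of the 3-day auto-close cutoff described above.
const CUTOFF_DAYS = 3;

function isPastCutoff(commentCreatedAt, now = new Date()) {
	// An issue becomes eligible for auto-close once the bot's duplicate
	// comment is at least CUTOFF_DAYS old.
	const cutoffMs = CUTOFF_DAYS * 24 * 60 * 60 * 1000;
	return now.getTime() - new Date(commentCreatedAt).getTime() >= cutoffMs;
}
```

In the real script this check would be combined with the other criteria (no objecting comment or 👎 reaction) before labeling and closing.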

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant GitHub
  participant DedupeWF as Claude Dedupe Workflow
  participant Claude as Claude Code
  participant AutoClose as Auto-close Script

  User->>GitHub: Open issue
  GitHub-->>DedupeWF: Trigger (issues.opened or workflow_dispatch)
  DedupeWF->>Claude: Run dedupe prompt/tools
  Claude-->>GitHub: Post "possible duplicate" bot comment
  Note over GitHub: Wait period (e.g., 3 days)
  AutoClose->>GitHub: Scheduled run scans issues/comments/reactions
  AutoClose->>GitHub: Close issue, add label, post auto-close comment (if criteria met)
sequenceDiagram
  participant Maintainer
  participant BackfillWF as Backfill Workflow
  participant BackfillScript as Backfill Script
  participant GitHub
  participant DedupeWF as Claude Dedupe Workflow

  Maintainer->>BackfillWF: workflow_dispatch (days_back, dry_run)
  BackfillWF->>BackfillScript: Run with inputs
  BackfillScript->>GitHub: Scan recent issues for missing dedupe comments
  alt DRY_RUN=false
    BackfillScript->>GitHub: Dispatch Claude Dedupe Workflow per candidate
    GitHub-->>DedupeWF: Run dedupe workflow
  else DRY_RUN=true
    BackfillScript-->>Maintainer: Log intended dispatches
  end
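The DRY_RUN branch in the backfill flow above can be sketched as a pure planning step. `planDispatches` is a hypothetical helper, not the script's actual function:

```javascript
// Illustrative sketch of the backfill dry-run guard: in dry-run mode the
// script only logs intended dispatches; otherwise it dispatches the
// Claude dedupe workflow per candidate issue.
function planDispatches(candidateIssues, dryRun) {
	return candidateIssues.map((issue) => ({
		issueNumber: issue.number,
		action: dryRun ? 'log-only' : 'dispatch-dedupe-workflow'
	}));
}
```

Keeping the decision separate from the side effect makes the dry-run path trivially testable.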

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • eyaltoledano




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

🔭 Outside diff range comments (1)
.github/workflows/claude.yml (1)

1-37: Pin GitHub Action to Specific Commit SHA for Security

To ensure immutability and prevent the beta tag from being repointed, please pin the anthropics/claude-code-action to a specific commit SHA. The action is confirmed as official (github.com/anthropics/claude-code-action), but tagging by SHA guarantees reproducible workflows and mitigates supply-chain risks.

Apply this change:

-        uses: anthropics/claude-code-action@beta
+        uses: anthropics/claude-code-action@<commit-sha>  # replace <commit-sha> with the latest stable SHA from https://github.com/anthropics/claude-code-action/commits/main
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3dee60d and 9e13e78.

📒 Files selected for processing (10)
  • .claude/commands/dedupe.md (1 hunks)
  • .github/scripts/auto-close-duplicates.mjs (1 hunks)
  • .github/scripts/backfill-duplicate-comments.mjs (1 hunks)
  • .github/workflows/auto-close-duplicates.yml (1 hunks)
  • .github/workflows/backfill-duplicate-comments.yml (1 hunks)
  • .github/workflows/claude-dedupe-issues.yml (1 hunks)
  • .github/workflows/claude-issue-triage.yml (1 hunks)
  • .github/workflows/claude.yml (1 hunks)
  • .github/workflows/log-issue-events.yml (1 hunks)
  • .github/workflows/weekly-metrics-discord.yml (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md
📚 Learning: 2025-08-06T21:13:05.240Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1091
File: assets/claude/agents/task-orchestrator.md:69-78
Timestamp: 2025-08-06T21:13:05.240Z
Learning: For Claude agent instruction files in .claude/agents/ and assets/claude/agents/ directories, ignore markdownlint issues (like MD031 blanks-around-fences, MD040 fenced-code-language, MD041 missing top-level heading) as these files follow Claude's specific agent format requirements rather than standard markdown documentation conventions.

Applied to files:

  • .claude/commands/dedupe.md
📚 Learning: 2025-08-11T12:30:23.843Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md

Applied to files:

  • .claude/commands/dedupe.md
🧬 Code Graph Analysis (1)
.github/scripts/backfill-duplicate-comments.mjs (1)
.github/scripts/auto-close-duplicates.mjs (2)
  • token (64-64)
  • response (4-13)
🪛 YAMLlint (1.37.1)
.github/workflows/weekly-metrics-discord.yml

[warning] 4-4: truthy value should be one of [false, true]

(truthy)


[warning] 6-6: too few spaces before comment: expected 2

(comments)


[error] 81-81: trailing spaces

(trailing-spaces)


[error] 85-85: trailing spaces

(trailing-spaces)


[error] 89-89: trailing spaces

(trailing-spaces)

🪛 LanguageTool
.claude/commands/dedupe.md

[grammar] ~6-~6: Use correct spacing
Context: ...plicate issues for a given GitHub issue. To do this, follow these steps precisely...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[style] ~8-~8: Consider a more expressive alternative.
Context: ...te issues for a given GitHub issue. To do this, follow these steps precisely: 1....

(DO_ACHIEVE)


[grammar] ~8-~8: Use correct spacing
Context: ...o do this, follow these steps precisely: 1. Use an agent to check if the Github issu...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~10-~10: Use proper capitalization
Context: ...isely: 1. Use an agent to check if the Github issue (a) is closed, (b) does not need ...

(QB_NEW_EN_OTHER_ERROR_IDS_6)


[grammar] ~10-~10: There might be a mistake here.
Context: ... (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedba...

(QB_NEW_EN_OTHER)


[grammar] ~10-~10: There might be a mistake here.
Context: ...losed, (b) does not need to be deduped (eg. because it is broad product feedback wi...

(QB_NEW_EN_OTHER)


[grammar] ~10-~10: Use commas correctly
Context: ...uct feedback without a specific solution, or positive feedback), or (c) already h...

(QB_NEW_EN_OTHER_ERROR_IDS_33)


[grammar] ~10-~10: Make sure to use plural and singular nouns correctly
Context: ...ositive feedback), or (c) already has a duplicates comment that you made earlier. If so, d...

(QB_NEW_EN_OTHER_ERROR_IDS_10)


[grammar] ~11-~11: Use proper capitalization
Context: ... not proceed. 2. Use an agent to view a Github issue, and ask the agent to return a su...

(QB_NEW_EN_OTHER_ERROR_IDS_6)


[grammar] ~11-~11: There might be a mistake here.
Context: ...e agent to return a summary of the issue 3. Then, launch 5 parallel agents to search...

(QB_NEW_EN_OTHER)


[grammar] ~12-~12: Use proper capitalization
Context: ...hen, launch 5 parallel agents to search Github for duplicates of this issue, using div...

(QB_NEW_EN_OTHER_ERROR_IDS_6)


[grammar] ~12-~12: There might be a mistake here.
Context: ...ch approaches, using the summary from #1 4. Next, feed the results from #1 and #2 in...

(QB_NEW_EN_OTHER)


[grammar] ~13-~13: Use commas correctly
Context: ...esults from #1 and #2 into another agent, so that it can filter out false positiv...

(QB_NEW_EN_OTHER_ERROR_IDS_33)


[grammar] ~13-~13: Use commas correctly
Context: ...o that it can filter out false positives, that are likely not actually duplicates...

(QB_NEW_EN_OTHER_ERROR_IDS_33)


[grammar] ~14-~14: There might be a mistake here.
Context: ...zero, if there are no likely duplicates) Notes (be sure to tell this to your agen...

(QB_NEW_EN_OTHER)


[grammar] ~16-~16: Use correct spacing
Context: ... sure to tell this to your agents, too): - Use gh to interact with Github, rather...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~18-~18: Use proper capitalization
Context: ...nts, too): - Use gh to interact with Github, rather than web fetch - Do not use othe...

(QB_NEW_EN_OTHER_ERROR_IDS_6)


[grammar] ~18-~18: There might be a mistake here.
Context: ...eract with Github, rather than web fetch - Do not use other tools, beyond gh (eg....

(QB_NEW_EN_OTHER)


[grammar] ~19-~19: Use commas correctly
Context: ... than web fetch - Do not use other tools, beyond gh (eg. don't use other MCP se...

(QB_NEW_EN_OTHER_ERROR_IDS_33)


[grammar] ~19-~19: There might be a mistake here.
Context: ... - Do not use other tools, beyond gh (eg. don't use other MCP servers, file edit,...

(QB_NEW_EN_OTHER)


[grammar] ~19-~19: There might be a mistake here.
Context: ... use other MCP servers, file edit, etc.) - Make a todo list first - For your commen...

(QB_NEW_EN_OTHER)


[grammar] ~20-~20: There might be a mistake here.
Context: ... MCP servers, file edit, etc.) - Make a todo list first - For your comment, follow t...

(QB_NEW_EN_OTHER)


[grammar] ~20-~20: There might be a mistake here.
Context: ...ile edit, etc.) - Make a todo list first - For your comment, follow the following f...

(QB_NEW_EN_OTHER)


[grammar] ~21-~21: Use correct spacing
Context: ... that you found 3 suspected duplicates): --- Found 3 possible duplicate issues: 1. <...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~25-~25: Use correct spacing
Context: ... --- Found 3 possible duplicate issues: 1. 2. 3. This issue will be automatically closed ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~31-~31: Use correct spacing
Context: ...tically closed as a duplicate in 3 days. - If your issue is a duplicate, please clo...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~34-~34: There might be a mistake here.
Context: ... it and 👍 the existing issue instead - To prevent auto-closure, add a comment or...

(QB_NEW_EN)


[grammar] ~35-~35: There might be a mistake here.
Context: ...sure, add a comment or 👎 this comment 🤖 Generated with [Claude Code](https://cl...

(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)
.claude/commands/dedupe.md

6-6: First line in a file should be a top-level heading

(MD041, first-line-heading, first-line-h1)


27-27: Inline HTML
Element: link

(MD033, no-inline-html)


28-28: Inline HTML
Element: link

(MD033, no-inline-html)


29-29: Inline HTML
Element: link

(MD033, no-inline-html)


38-38: Files should end with a single newline character

(MD047, single-trailing-newline)

🪛 actionlint (1.7.7)
.github/workflows/log-issue-events.yml

20-20: "github.event.issue.title" is potentially untrusted. avoid using it directly in inline scripts. instead, pass it through an environment variable. see https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions for more details

(expression)


84-84: "github.event.issue.title" is potentially untrusted. avoid using it directly in inline scripts. instead, pass it through an environment variable. see https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions for more details

(expression)

🔇 Additional comments (12)
.github/workflows/claude-issue-triage.yml (2)

99-105: No change needed for custom workflows
The workflow in .github/workflows/claude-issue-triage.yml is implementing a custom triage step—so using anthropics/claude-code-base-action@beta (the lower-level/base Action for direct prompts) is intentional and correct. The higher-level anthropics/claude-code-action@beta is only for out-of-the-box PR/issue integration.

Likely an incorrect or invalid review comment.


88-88: Pin to a full SHA-256 digest for immutability
We’ve confirmed that ghcr.io/github/github-mcp-server:sha-7aced2b is the official GitHub MCP Server image, but using the short tag still risks unpinned updates. Please fetch the complete SHA-256 digest for that tag (via the GHCR web UI or GitHub CLI) and update line 88 to:

- "ghcr.io/github/github-mcp-server:sha-7aced2b"
+ "ghcr.io/github/github-mcp-server@sha256:<FULL_DIGEST>"

• Obtain the digest from https://ghcr.io/github/github-mcp-server or
gh api /orgs/github/packages/container/github-mcp-server/versions | jq
• Verify <FULL_DIGEST> matches the intended image before merging.

.claude/commands/dedupe.md (1)

1-38: Review the Claude command specification structure

The dedupe command specification appears well-structured for Claude automation. The allowed-tools configuration properly restricts to GitHub CLI operations only, which is good for security. The multi-agent workflow with parallelization (5 agents) for searching duplicates is an efficient approach.

.github/workflows/auto-close-duplicates.yml (1)

1-32: LGTM! Well-structured workflow with appropriate permissions and scheduling.

The workflow configuration is solid with proper timeout settings, appropriate permissions, and sensible environment variable passing. The daily schedule at 9 AM UTC provides good coverage for automated duplicate management.

.github/workflows/log-issue-events.yml (1)

118-122: Verify state_reason values for duplicate detection

I ran a repository‐wide search for state_reason and found no additional references. Please confirm against GitHub’s official docs (REST API state_reason field or GraphQL IssueStateReason enum) that "duplicate" is indeed the only value used when an issue is closed as a duplicate. If there are other possible values (for example, from future API changes), update these lines in .github/workflows/log-issue-events.yml (118–122) to handle them accordingly.
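A minimal sketch of the filter under review, assuming `'duplicate'` is the `state_reason` GitHub sets when an issue is closed as a duplicate (other documented close reasons include `'completed'` and `'not_planned'`; verify against the REST API docs as the comment suggests):

```javascript
// Sketch of the duplicate-closure check the workflow relies on.
// The set of state_reason values is an assumption to verify, per the
// review comment above.
function wasClosedAsDuplicate(issue) {
	return issue.state === 'closed' && issue.state_reason === 'duplicate';
}
```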

.github/workflows/backfill-duplicate-comments.yml (1)

1-47: LGTM! Well-designed workflow with appropriate safety controls.

The workflow is well-structured with:

  • Proper input validation with reasonable defaults
  • Safety-first approach with dry_run defaulting to true
  • Appropriate permissions and timeout settings
  • Clear parameter descriptions for user guidance
.github/scripts/backfill-duplicate-comments.mjs (2)

3-22: LGTM! Consistent GitHub API helper function.

The githubRequest function follows the same pattern as in auto-close-duplicates.mjs with proper error handling and authentication.


38-39: Workflow file verified
The workflow file claude-dedupe-issues.yml exists at .github/workflows/claude-dedupe-issues.yml and matches the dispatch target in your script. No changes are needed.

.github/workflows/claude-dedupe-issues.yml (4)

26-33: LGTM! Proper Claude Code integration with secure token handling.

The Claude Code action is properly configured with the required API key and GitHub token passed through the secure environment.


29-29: Verify issue number handling for both trigger types.

The expression ${{ github.event.issue.number || inputs.issue_number }} correctly handles both automatic (issue opened) and manual (workflow_dispatch) triggers.


34-45: LGTM! Proper conditional logging with graceful degradation.

The if: always() ensures logging happens regardless of the Claude Code step outcome, and the graceful handling of missing STATSIG_API_KEY prevents workflow failures when logging is not configured.


51-51: Confirm GitHub Run ID for Statsig workflow_run_id

We searched all workflows and found this is the only usage of ${{ github.run_id }} for workflow_run_id. If Statsig expects the GitHub run ID as the identifier, no change is needed. Otherwise (for example, if you need to use github.run_number or another field), please update accordingly.

• File: .github/workflows/claude-dedupe-issues.yml
Line: 51, 60
Current code:

workflow_run_id: "${{ github.run_id }}"

• Action: Verify with the Statsig integration documentation or team that github.run_id is the intended value for workflow_run_id.


To do this, follow these steps precisely:

1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.

@coderabbitai coderabbitai bot Aug 13, 2025


🧹 Nitpick (assertive)

Clarify the agent's duplicate detection criteria

The instruction mentions checking if an issue "does not need to be deduped" with examples like broad product feedback, but doesn't provide clear criteria for what constitutes a valid duplicate candidate. Consider adding more specific criteria.

Apply this diff to improve clarity:

-1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
+1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (e.g., because it is broad product feedback without a specific solution, positive feedback, meta-discussions, or administrative issues), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
1. Use an agent to check if the Github issue (a) is closed, (b) does not need to be deduped (e.g., because it is broad product feedback without a specific solution, positive feedback, meta-discussions, or administrative issues), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
🧰 Tools
🪛 LanguageTool

[grammar] ~10-~10: Use proper capitalization
Context: ...isely: 1. Use an agent to check if the Github issue (a) is closed, (b) does not need ...

(QB_NEW_EN_OTHER_ERROR_IDS_6)


[grammar] ~10-~10: There might be a mistake here.
Context: ... (a) is closed, (b) does not need to be deduped (eg. because it is broad product feedba...

(QB_NEW_EN_OTHER)


[grammar] ~10-~10: There might be a mistake here.
Context: ...losed, (b) does not need to be deduped (eg. because it is broad product feedback wi...

(QB_NEW_EN_OTHER)


[grammar] ~10-~10: Use commas correctly
Context: ...uct feedback without a specific solution, or positive feedback), or (c) already h...

(QB_NEW_EN_OTHER_ERROR_IDS_33)


[grammar] ~10-~10: Make sure to use plural and singular nouns correctly
Context: ...ositive feedback), or (c) already has a duplicates comment that you made earlier. If so, d...

(QB_NEW_EN_OTHER_ERROR_IDS_10)

🤖 Prompt for AI Agents
.claude/commands/dedupe.md around line 10: the current guidance is vague about
when an issue "does not need to be deduped" and lacks concrete criteria for what
counts as a valid duplicate candidate; update the line to list clear, actionable
criteria — e.g., require same root-user problem or identical error messages,
overlapping steps-to-reproduce, same affected feature and same expected vs
actual behavior, or explicit reference to the original issue number — and add
explicit non-duplicate examples (broad product feedback, feature requests with
different scopes, praise) so the agent can deterministically decide whether to
proceed with deduping.

Collaborator Author


nah, I think the prompt is fine like that

Contributor


@Crunchyman-ralph, understood! I'll respect your preference to keep the duplicate detection criteria as they are currently written. The existing prompt provides sufficient flexibility for the agent to make those determinations.

    /|   /|  
   (  -.-  ) 
  o_("_,)_"_o

🧠 Learnings used
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1091
File: assets/claude/agents/task-orchestrator.md:69-78
Timestamp: 2025-08-06T21:13:05.240Z
Learning: For Claude agent instruction files in .claude/agents/ and assets/claude/agents/ directories, ignore markdownlint issues (like MD031 blanks-around-fences, MD040 fenced-code-language, MD041 missing top-level heading) as these files follow Claude's specific agent format requirements rather than standard markdown documentation conventions.

Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md

Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1091
File: .claude/agents/task-orchestrator.md:0-0
Timestamp: 2025-08-06T21:11:37.259Z
Learning: For files in .claude/agents/ and assets/claude/agents/ directories, ignore markdownlint issues like MD041 (missing top-level heading) as these are Claude Code agent instruction files that follow a specific format with YAML frontmatter and don't need to conform to standard markdown documentation conventions.

Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1091
File: .claude/agents/task-executor.md:1-6
Timestamp: 2025-08-06T21:11:57.597Z
Learning: For Claude agent instruction files located in .claude/agents or assets/claude/agents directories, Crunchyman-ralph wants to ignore MarkdownLint issues since these files follow Claude's specific format requirements rather than standard markdown documentation conventions.

Comment on lines +3 to +22
async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

@coderabbitai coderabbitai bot Aug 13, 2025


🛠️ Refactor suggestion

Add retry logic for GitHub API requests

The githubRequest function doesn't implement any retry logic for transient failures. GitHub API can have occasional hiccups, and without retries, the script might fail unnecessarily.

Apply this diff to add retry logic with exponential backoff:

-async function githubRequest(endpoint, token, method = 'GET', body) {
+async function githubRequest(endpoint, token, method = 'GET', body, retries = 3) {
+	let lastError;
+	
+	for (let attempt = 0; attempt < retries; attempt++) {
+		if (attempt > 0) {
+			// Exponential backoff: 1s, 2s, 4s
+			const delay = Math.pow(2, attempt - 1) * 1000;
+			await new Promise(resolve => setTimeout(resolve, delay));
+			console.log(`[DEBUG] Retry attempt ${attempt + 1} for ${endpoint}`);
+		}
+		
+		try {
 			const response = await fetch(`https://api.github.com${endpoint}`, {
 				method,
 				headers: {
 					Authorization: `Bearer ${token}`,
 					Accept: 'application/vnd.github.v3+json',
 					'User-Agent': 'auto-close-duplicates-script',
 					...(body && { 'Content-Type': 'application/json' })
 				},
 				...(body && { body: JSON.stringify(body) })
 			});
 
 			if (!response.ok) {
+				// Don't retry on 4xx errors (except 429)
+				if (response.status >= 400 && response.status < 500 && response.status !== 429) {
 					throw new Error(
 						`GitHub API request failed: ${response.status} ${response.statusText}`
 					);
+				}
+				// Retry on 429 (rate limit) and 5xx errors
+				lastError = new Error(
+					`GitHub API request failed: ${response.status} ${response.statusText}`
+				);
+				continue;
 			}
 
 			return response.json();
+		} catch (error) {
+			lastError = error;
+			// Network errors should be retried
+			if (error.name === 'FetchError' || error.name === 'TypeError') {
+				continue;
+			}
+			throw error;
+		}
+	}
+	
+	throw lastError || new Error('Request failed after retries');
 }
📝 Committable suggestion


Suggested change
async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: {
			Authorization: `Bearer ${token}`,
			Accept: 'application/vnd.github.v3+json',
			'User-Agent': 'auto-close-duplicates-script',
			...(body && { 'Content-Type': 'application/json' })
		},
		...(body && { body: JSON.stringify(body) })
	});

	if (!response.ok) {
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}`
		);
	}

	return response.json();
}

async function githubRequest(endpoint, token, method = 'GET', body, retries = 3) {
	let lastError;

	for (let attempt = 0; attempt < retries; attempt++) {
		if (attempt > 0) {
			// Exponential backoff: 1s, 2s, 4s
			const delay = Math.pow(2, attempt - 1) * 1000;
			await new Promise(resolve => setTimeout(resolve, delay));
			console.log(`[DEBUG] Retry attempt ${attempt + 1} for ${endpoint}`);
		}

		try {
			const response = await fetch(`https://api.github.com${endpoint}`, {
				method,
				headers: {
					Authorization: `Bearer ${token}`,
					Accept: 'application/vnd.github.v3+json',
					'User-Agent': 'auto-close-duplicates-script',
					...(body && { 'Content-Type': 'application/json' })
				},
				...(body && { body: JSON.stringify(body) })
			});

			if (!response.ok) {
				// Don't retry on 4xx errors (except 429)
				if (response.status >= 400 && response.status < 500 && response.status !== 429) {
					throw new Error(
						`GitHub API request failed: ${response.status} ${response.statusText}`
					);
				}
				// Retry on 429 (rate limit) and 5xx errors
				lastError = new Error(
					`GitHub API request failed: ${response.status} ${response.statusText}`
				);
				continue;
			}

			return response.json();
		} catch (error) {
			lastError = error;
			// Network errors should be retried
			if (error.name === 'FetchError' || error.name === 'TypeError') {
				continue;
			}
			throw error;
		}
	}

	throw lastError || new Error('Request failed after retries');
}
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 3 to 22, the
githubRequest function lacks retry logic for transient GitHub API failures; wrap
the fetch call in a retry loop (e.g., maxAttempts = 3-5) that catches
network/fetch errors and retries on server errors (5xx) and 429 rate-limit
responses, using exponential backoff with jitter between attempts (e.g.,
baseDelay * 2^(attempt-1) ± small random), and honor Retry-After header when
present; on a successful response return response.json(), and after exhausting
retries throw a descriptive error including last status and body; keep existing
headers/content-type behavior and ensure delays are awaited (use a small sleep
helper).
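For illustration, the backoff schedule the prompt describes (exponential delay with jitter, honoring `Retry-After`) could be sketched as a small pure helper. This is a hypothetical sketch, not code from the PR; the name `computeDelayMs` and the 10% jitter factor are assumptions.

```javascript
// Hypothetical helper sketching the suggested backoff schedule: exponential
// delay with jitter, overridden by a Retry-After header (in seconds) when the
// server sends one. Names and defaults here are illustrative.
function computeDelayMs(attempt, retryAfterHeader, baseMs = 1000) {
	// Honor Retry-After when present and parseable as a positive number.
	const retryAfter = Number(retryAfterHeader);
	if (Number.isFinite(retryAfter) && retryAfter > 0) {
		return retryAfter * 1000;
	}
	// Exponential backoff: baseMs * 2^(attempt - 1), plus up to 10% jitter.
	const exp = baseMs * Math.pow(2, attempt - 1);
	return exp + Math.random() * 0.1 * exp;
}
```

A retry loop would then `await new Promise((r) => setTimeout(r, computeDelayMs(attempt, response.headers.get('retry-after'))))` between attempts.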

Collaborator Author

I'm just trying to PoC right now; while your suggestion is valid, I'm skipping it for now

Contributor

@Crunchyman-ralph Understood! You're absolutely right to keep it simple during the POC phase. Adding retry logic can be a nice enhancement later once the core functionality is proven out. Feel free to ping me if you'd like to revisit this when you're ready to harden the production version.

Good luck with the POC! 🚀

Comment on lines +52 to +57
body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
}
Contributor

🧹 Nitpick (assertive)

Hardcoded bot reference in closure comment

The closure comment references "Task Master Bot" which is hardcoded and might not match the actual bot name being used. Consider making this configurable or more generic.

Apply this diff to make the bot reference more flexible:

 			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.
 
 If this is incorrect, please re-open this issue or create a new one.
 
-🤖 Generated with [Task Master Bot]`
+🤖 Auto-closed by duplicate detection workflow`
 		}
🤖 Prompt for AI Agents
.github/scripts/auto-close-duplicates.mjs around lines 52 to 57: the closing
comment body hardcodes "Task Master Bot"; change it to use a configurable bot
name (read from an environment variable or a config constant with a sensible
default) and inject that variable into the template string instead of the
literal. Add a top-level BOT_NAME constant (e.g., process.env.BOT_NAME || 'this
bot') or load from existing config, replace the hardcoded string in the body
with the variable, and ensure any README or deployment notes mention the new env
var.
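The configurable bot name the prompt describes could look like the sketch below. `BOT_NAME` and its default are hypothetical, matching the prompt's suggestion rather than anything merged in this PR.

```javascript
// Sketch of the configurable bot name the review proposes. BOT_NAME is a
// hypothetical environment variable; the default here is illustrative.
const BOT_NAME = process.env.BOT_NAME || 'duplicate detection workflow';

function closingComment(duplicateOfNumber) {
	return `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Auto-closed by ${BOT_NAME}`;
}
```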

Comment on lines +54 to +65
const token = process.env.GITHUB_TOKEN;
if (!token) {
throw new Error(`GITHUB_TOKEN environment variable is required

Usage:
node .github/scripts/backfill-duplicate-comments.mjs

Environment Variables:
GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
DAYS_BACK - How many days back to look for old issues (default: 90)`);
}
Contributor

🧹 Nitpick (assertive)

Improve error message formatting and add environment variable validation.

The error message spans multiple lines without proper indentation, and there's no validation for the DAYS_BACK parameter.

Apply this diff to improve the error handling:

 	const token = process.env.GITHUB_TOKEN;
 	if (!token) {
-		throw new Error(`GITHUB_TOKEN environment variable is required
-
-Usage:
-  node .github/scripts/backfill-duplicate-comments.mjs
-
-Environment Variables:
-  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)
-  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)
-  DAYS_BACK - How many days back to look for old issues (default: 90)`);
+		throw new Error([
+			'GITHUB_TOKEN environment variable is required',
+			'',
+			'Usage:',
+			'  node .github/scripts/backfill-duplicate-comments.mjs',
+			'',
+			'Environment Variables:',
+			'  GITHUB_TOKEN - GitHub personal access token with repo and actions permissions (required)',
+			'  DRY_RUN - Set to "false" to actually trigger workflows (default: true for safety)',
+			'  DAYS_BACK - How many days back to look for old issues (default: 90)'
+		].join('\n'));
 	}

Also add validation for the DAYS_BACK parameter:

 	const daysBack = parseInt(process.env.DAYS_BACK || '90', 10);
+	
+	if (isNaN(daysBack) || daysBack < 1) {
+		throw new Error('DAYS_BACK must be a positive integer');
+	}
🤖 Prompt for AI Agents
In .github/scripts/backfill-duplicate-comments.mjs around lines 54-65, the
thrown error message is hard to read and the script does not validate the
DAYS_BACK env var; update the error to a clearly formatted single template
string (or build a message variable) that shows usage and environment variables
with proper newlines/indentation, and add validation for process.env.DAYS_BACK:
parseInt it, fallback to 90 if undefined, and throw a descriptive Error if it's
present but not a positive integer (NaN or <= 0); ensure DRY_RUN stays a string
checked elsewhere but mention the default in the error text.
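The `DAYS_BACK` validation the prompt asks for could be factored into a small parser like this sketch. The function name is hypothetical; the 90-day default matches the script's documented behavior.

```javascript
// Sketch of the suggested DAYS_BACK validation: fall back to 90 when unset,
// reject anything that is not a positive integer.
function parseDaysBack(raw) {
	const daysBack = Number.parseInt(raw ?? '90', 10);
	if (Number.isNaN(daysBack) || daysBack < 1) {
		throw new Error('DAYS_BACK must be a positive integer');
	}
	return daysBack;
}
```

Called as `parseDaysBack(process.env.DAYS_BACK)` near the top of the script.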

run: |
  ISSUE_NUMBER=${{ github.event.issue.number }}
  REPO=${{ github.repository }}
  ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
Contributor

⚠️ Potential issue

Security vulnerability: Untrusted input directly used in shell script.

The github.event.issue.title is potentially untrusted user input that's being directly interpolated into the shell script, creating a potential command injection vulnerability.

Apply this fix to pass the title through environment variables instead:

     steps:
       - name: Log issue creation to Statsig
         env:
           STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
+          ISSUE_TITLE: ${{ github.event.issue.title }}
         run: |
           ISSUE_NUMBER=${{ github.event.issue.number }}
           REPO=${{ github.repository }}
-          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
+          # ISSUE_TITLE is now passed via environment variable (safer)
           AUTHOR="${{ github.event.issue.user.login }}"

And similarly for the log-issue-closed job:

       - name: Log issue closure to Statsig
         env:
           STATSIG_API_KEY: ${{ secrets.STATSIG_API_KEY }}
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          ISSUE_TITLE: ${{ github.event.issue.title }}
         run: |
           ISSUE_NUMBER=${{ github.event.issue.number }}
           REPO=${{ github.repository }}
-          ISSUE_TITLE=$(echo '${{ github.event.issue.title }}' | sed "s/'/'\\\\''/g")
+          # ISSUE_TITLE is now passed via environment variable (safer)
           CLOSED_BY="${{ github.event.issue.closed_by.login }}"

Also applies to: 87-87

🤖 Prompt for AI Agents
In .github/workflows/log-issue-events.yml around lines 23 and 87, the workflow
currently injects github.event.issue.title directly into the shell which allows
command injection; instead, add an environment variable (e.g., ISSUE_TITLE) at
the job or step level set to ${{ github.event.issue.title }} and reference that
variable inside the run step (quoted) so the value is not re-interpreted by the
shell, then perform any needed quote-escaping/sanitization on the variable value
with shell-safe operations (use printf/parameter expansion or sed on the
variable, not by embedding the raw GitHub expression into the script). Ensure
both the log-issue-opened and log-issue-closed jobs are updated the same way.

Comment on lines +97 to +111
# Get additional issue data via GitHub API
echo "Fetching additional issue data for #${ISSUE_NUMBER}"
ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")

COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')

# Get reactions data
REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")

REACTIONS_COUNT=$(echo "$REACTIONS_DATA" | jq '. | length')

Contributor

@coderabbitai coderabbitai bot Aug 13, 2025

🧹 Nitpick (assertive)

Consider error handling for GitHub API requests.

The workflow makes multiple GitHub API calls but doesn't handle potential failures. If the API is unavailable or rate-limited, the script will continue with potentially invalid data.

Add error handling around the API calls:

           # Get additional issue data via GitHub API
           echo "Fetching additional issue data for #${ISSUE_NUMBER}"
-          ISSUE_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
+          ISSUE_DATA=$(curl -s -f -H "Authorization: token ${GITHUB_TOKEN}" \
             -H "Accept: application/vnd.github.v3+json" \
-            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}")
+            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}") || {
+            echo "Failed to fetch issue data, skipping Statsig logging"
+            exit 0
+          }

-          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments')
+          COMMENTS_COUNT=$(echo "$ISSUE_DATA" | jq -r '.comments // 0')

           # Get reactions data
-          REACTIONS_DATA=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
+          REACTIONS_DATA=$(curl -s -f -H "Authorization: token ${GITHUB_TOKEN}" \
             -H "Accept: application/vnd.github.v3+json" \
-            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions")
+            "https://api.github.com/repos/${REPO}/issues/${ISSUE_NUMBER}/reactions") || {
+            echo "Failed to fetch reactions data, using 0"
+            REACTIONS_DATA="[]"
+          }
🤖 Prompt for AI Agents
In .github/workflows/log-issue-events.yml around lines 97-111, the GitHub API
curl calls for ISSUE_DATA and REACTIONS_DATA lack error handling; update the
workflow to (1) verify GITHUB_TOKEN is set before making requests, (2) capture
HTTP status codes from curl (e.g. via -w '%{http_code}') and treat non-2xx
responses as errors, (3) implement a small retry loop with exponential backoff
for transient failures/rate limits, (4) on persistent failure either fail the
job with a clear error message or fall back to safe defaults, and (5) validate
that jq outputs numeric counts (and handle empty/invalid JSON) before using
them. Ensure logs include the endpoint, status code and a short error message
for easier debugging.

Collaborator Author

Just trying things out right now. Good suggestion, but I'm passing on it for now.

Contributor

@Crunchyman-ralph No problem at all! I understand you're experimenting with the automations right now. The error handling can always be added later when you're ready to make the workflows more production-ready. Feel free to reach out if you need any help as you continue building out these automations.

Comment on lines +113 to +116
CLOSED_AUTOMATICALLY="false"
if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
CLOSED_AUTOMATICALLY="true"
fi
Contributor

🧹 Nitpick (assertive)

Logic issue: Bot detection pattern may miss some bots.

The current pattern *"[bot]"* only matches usernames containing the literal string "[bot]", which may miss other bot naming conventions or official GitHub Apps.

Consider using a more comprehensive bot detection pattern:

-          if [[ "$CLOSED_BY" == *"[bot]"* ]]; then
+          if [[ "$CLOSED_BY" == *"[bot]"* ]] || [[ "$CLOSED_BY" == *"-bot" ]] || [[ "$CLOSED_BY" == "bot-"* ]]; then
             CLOSED_AUTOMATICALLY="true"
           fi

Alternatively, you could check the type field from the GitHub API response, which more reliably identifies bots.

🤖 Prompt for AI Agents
.github/workflows/log-issue-events.yml around lines 113 to 116: the current
check only matches the literal string "[bot]" in the username and can miss other
bot naming conventions or GitHub Apps; replace the brittle string match with a
more robust detection by either (a) using a broader pattern/regex to detect
common bot naming (e.g., usernames that end with or contain the token "bot" or
"\[bot\]" in a case-insensitive way) or (b) preferably reading the actor/type
field from the GitHub event payload (or calling the API) and setting
CLOSED_AUTOMATICALLY="true" when the actor type is "Bot" or the app is a GitHub
App; update the conditional accordingly and add a fallback to the existing
"[bot]" check for backwards compatibility.
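The prompt's preferred approach, checking the actor's `type` field with the existing substring match as a fallback, could be sketched as a small predicate. The GitHub API does return `"type": "Bot"` for bot accounts; the function name here is hypothetical.

```javascript
// Hedged sketch of the more robust check: trust the GitHub API user object's
// `type` field ("Bot") first, and keep the existing "[bot]" substring match
// as a fallback for payloads where `type` is absent.
function closedByBot(user) {
	if (!user) return false;
	if (user.type === 'Bot') return true;
	return typeof user.login === 'string' && user.login.includes('[bot]');
}
```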

Comment on lines +49 to +71
if [ -f "issue_metrics.md" ]; then
# Extract key metrics using grep/awk
AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
fi

if [ -f "pr_metrics.md" ]; then
PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
fi

# Set outputs for Discord action
echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT

Contributor

🛠️ Refactor suggestion

Add error handling for missing metric files

The script assumes metric files exist but doesn't handle the case where the github/issue-metrics action might fail or not generate files. This could cause the workflow to fail silently or produce incorrect results.

Apply this diff to add proper error handling:

       - name: Parse metrics
         id: metrics
         run: |
+          # Initialize variables with defaults
+          NUM_ISSUES_CREATED="0"
+          NUM_ISSUES_CLOSED="0"
+          NUM_PRS_CREATED="0"
+          NUM_PRS_MERGED="0"
+          AVG_TIME_TO_FIRST_RESPONSE="N/A"
+          AVG_TIME_TO_CLOSE="N/A"
+          PR_AVG_TIME_TO_MERGE="N/A"
+          
           # Parse the metrics from the generated markdown files
           if [ -f "issue_metrics.md" ]; then
             # Extract key metrics using grep/awk
             AVG_TIME_TO_FIRST_RESPONSE=$(grep -A 1 "Average time to first response" issue_metrics.md | tail -1 | xargs || echo "N/A")
             AVG_TIME_TO_CLOSE=$(grep -A 1 "Average time to close" issue_metrics.md | tail -1 | xargs || echo "N/A")
             NUM_ISSUES_CREATED=$(grep -oP '\d+(?= issues created)' issue_metrics.md || echo "0")
             NUM_ISSUES_CLOSED=$(grep -oP '\d+(?= issues closed)' issue_metrics.md || echo "0")
+          else
+            echo "Warning: issue_metrics.md not found"
           fi
 
           if [ -f "pr_metrics.md" ]; then
             PR_AVG_TIME_TO_MERGE=$(grep -A 1 "Average time to close" pr_metrics.md | tail -1 | xargs || echo "N/A")
             NUM_PRS_CREATED=$(grep -oP '\d+(?= pull requests created)' pr_metrics.md || echo "0")
             NUM_PRS_MERGED=$(grep -oP '\d+(?= pull requests closed)' pr_metrics.md || echo "0")
+          else
+            echo "Warning: pr_metrics.md not found"
           fi
 
           # Set outputs for Discord action
-          echo "issues_created=${NUM_ISSUES_CREATED:-0}" >> $GITHUB_OUTPUT
-          echo "issues_closed=${NUM_ISSUES_CLOSED:-0}" >> $GITHUB_OUTPUT
-          echo "prs_created=${NUM_PRS_CREATED:-0}" >> $GITHUB_OUTPUT
-          echo "prs_merged=${NUM_PRS_MERGED:-0}" >> $GITHUB_OUTPUT
-          echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE:-N/A}" >> $GITHUB_OUTPUT
-          echo "avg_time_to_close=${AVG_TIME_TO_CLOSE:-N/A}" >> $GITHUB_OUTPUT
-          echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE:-N/A}" >> $GITHUB_OUTPUT
+          echo "issues_created=${NUM_ISSUES_CREATED}" >> $GITHUB_OUTPUT
+          echo "issues_closed=${NUM_ISSUES_CLOSED}" >> $GITHUB_OUTPUT
+          echo "prs_created=${NUM_PRS_CREATED}" >> $GITHUB_OUTPUT
+          echo "prs_merged=${NUM_PRS_MERGED}" >> $GITHUB_OUTPUT
+          echo "avg_first_response=${AVG_TIME_TO_FIRST_RESPONSE}" >> $GITHUB_OUTPUT
+          echo "avg_time_to_close=${AVG_TIME_TO_CLOSE}" >> $GITHUB_OUTPUT
+          echo "pr_avg_merge_time=${PR_AVG_TIME_TO_MERGE}" >> $GITHUB_OUTPUT
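The `grep -oP '\d+(?= issues created)'` extraction above can be mirrored as a tiny parser, which makes the "default to 0" behavior explicit. This is a hypothetical Node equivalent for illustration, not part of the workflow.

```javascript
// Hypothetical mirror of the grep extraction: pull counts like
// "12 issues created" out of a metrics markdown report, defaulting to 0
// when the phrase is absent.
function extractCount(markdown, phrase) {
	const match = markdown.match(new RegExp(`(\\d+) ${phrase}`));
	return match ? Number(match[1]) : 0;
}
```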

Comment on lines +81 to +93

**🎯 Issues**
• Created: ${{ steps.metrics.outputs.issues_created }}
• Closed: ${{ steps.metrics.outputs.issues_closed }}

**🔀 Pull Requests**
• Created: ${{ steps.metrics.outputs.prs_created }}
• Merged: ${{ steps.metrics.outputs.prs_merged }}

**⏱️ Response Times**
• First Response: ${{ steps.metrics.outputs.avg_first_response }}
• Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
• PR Merge Time: ${{ steps.metrics.outputs.pr_avg_merge_time }}
Contributor

@coderabbitai coderabbitai bot Aug 13, 2025

🧹 Nitpick (assertive)

Fix YAML formatting issues

The Discord message description has trailing spaces on lines 81, 85, and 89 which violates YAML best practices.

Apply this diff to fix the formatting:

           description: |
             **${{ env.week_of }}**
-            
+
             **🎯 Issues**
             • Created: ${{ steps.metrics.outputs.issues_created }}
             • Closed: ${{ steps.metrics.outputs.issues_closed }}
-            
+
             **🔀 Pull Requests**
             • Created: ${{ steps.metrics.outputs.prs_created }}
             • Merged: ${{ steps.metrics.outputs.prs_merged }}
-            
+
             **⏱️ Response Times**
             • First Response: ${{ steps.metrics.outputs.avg_first_response }}
             • Time to Close: ${{ steps.metrics.outputs.avg_time_to_close }}
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 81-81: trailing spaces

(trailing-spaces)


[error] 85-85: trailing spaces

(trailing-spaces)


[error] 89-89: trailing spaces

(trailing-spaces)

🤖 Prompt for AI Agents
In .github/workflows/weekly-metrics-discord.yml around lines 81 to 93, the
Discord message block contains trailing spaces on lines 81, 85, and 89 which can
break YAML parsing; remove the trailing whitespace from those specific lines
(the lines ending after "Issues", "Pull Requests", and "Response Times" list
items), ensure each line ends immediately after the text (no extra spaces), and
re-save the file (optionally run your YAML linter or git diff to confirm no
trailing spaces remain).

Collaborator Author

Which formatter from the project are you basing yourself on? Ignoring for now.

Contributor

🧩 Analysis chain

@Crunchyman-ralph I checked the project configuration and I don't see any specific YAML formatter or linter configured in this repository. My comment was based on the YAMLlint tool that CodeRabbit uses internally for static analysis, not a project-specific configuration.

Since there's no project-specific YAML formatting setup, you're absolutely right to ignore this nitpick. The trailing spaces don't affect functionality and without a configured formatter in your project, there's no established standard to follow.

🐰💭


🏁 Script executed:

#!/bin/bash
# Look for YAML formatting/linting configuration files
echo "=== Searching for YAML linting configuration ==="
fd -t f "\.(yaml|yml)lint" .
fd -t f "\.yamllint" .
fd -t f "yamllint" .

echo ""
echo "=== Searching for general linting config files ==="
fd -t f "\.editorconfig" .
fd -t f "\.prettierrc" .
fd -t f "prettier.config" .
fd -t f "\.eslintrc" .

echo ""
echo "=== Checking package.json for YAML-related dependencies ==="
if [ -f "package.json" ]; then
  cat package.json | jq -r '.devDependencies // {}, .dependencies // {}' | grep -i yaml || echo "No YAML-related dependencies found"
fi

echo ""
echo "=== Checking for GitHub Actions workflow linting ==="
fd -t f "\.github" . | head -5
ls -la .github/workflows/ | head -10

echo ""
echo "=== Looking for any mention of yamllint in CI/workflows ==="
rg -i "yamllint" .github/

Length of output: 1715


@Crunchyman-ralph You're absolutely right to question this. After checking the project configuration, I can confirm there are no YAML formatters or linters configured in this repository:

  • No .yamllint configuration files
  • No YAML-related dependencies in package.json
  • No YAML linting mentioned in any GitHub Actions workflows
  • No general formatting configs like .editorconfig or .prettierrc

My original comment about "YAML best practices" was based on CodeRabbit's internal YAMLlint tool, not any project-specific standard. Since you haven't established YAML formatting rules for this project, the trailing spaces are purely cosmetic and your decision to ignore them makes perfect sense.

Thanks for pointing this out - it's a good reminder to distinguish between general tooling suggestions and actual project requirements.

🐰💭


✏️ Learnings added
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1132
File: .github/workflows/weekly-metrics-discord.yml:81-93
Timestamp: 2025-08-13T22:10:46.890Z
Learning: Crunchyman-ralph ignores YAML formatting nitpicks about trailing spaces when there's no project-specific YAML formatter configured, preferring to focus on functionality over cosmetic formatting issues.


Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1069
File: .changeset/fix-tag-complexity-detection.md:0-0
Timestamp: 2025-08-02T15:33:22.656Z
Learning: For changeset files (.changeset/*.md), Crunchyman-ralph prefers to ignore formatting nitpicks about blank lines between frontmatter and descriptions, as he doesn't mind having them and wants to avoid such comments in future reviews.

Crunchyman-ralph and others added 2 commits August 14, 2025 00:06
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

♻️ Duplicate comments (2)
.github/scripts/auto-close-duplicates.mjs (2)

3-22: Add retry/backoff for transient failures and rate limits (429/5xx).

Repeatedly failing on transient network hiccups or rate limits will make this automation flaky. Add small, bounded retry with backoff and honor Retry-After when present.

Happy to re-share a compact version if/when you’re ready to harden this beyond POC.
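For illustration, a compact sketch of such a wrapper; the `computeDelay`/`withRetry` names, retry counts, and delay caps here are assumptions, not code from this PR:

```javascript
// Hypothetical backoff helper: exponential delay, capped, honoring a
// server-supplied Retry-After (in seconds) when present.
function computeDelay(attempt, retryAfterSec = 0) {
	if (retryAfterSec > 0) return retryAfterSec * 1000; // server hint wins
	return Math.min(1000 * 2 ** (attempt - 1), 8000); // 1s, 2s, 4s, capped at 8s
}

// Bounded retry around any async request; retries only 429 and 5xx
// (assumes the thrown error carries `status` and optional `retryAfterSec`).
async function withRetry(requestFn, retries = 3) {
	for (let attempt = 1; attempt <= retries + 1; attempt++) {
		try {
			return await requestFn();
		} catch (err) {
			const status = err.status ?? 0;
			const retryable = status === 429 || status >= 500;
			if (attempt > retries || !retryable) throw err;
			await new Promise((resolve) =>
				setTimeout(resolve, computeDelay(attempt, err.retryAfterSec ?? 0))
			);
		}
	}
}
```

The existing `githubRequest` could then be invoked as `withRetry(() => githubRequest(endpoint, token))`, provided it attaches `status` to the errors it throws.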


52-57: Avoid hardcoding the bot name in the closure comment.

Make the bot name configurable to better fit different deployments.

Apply this diff in-place:

-🤖 Generated with [Task Master Bot]`
+🤖 Auto-closed by ${BOT_NAME}`

Add this at the top of the file (outside the selected range):

// near the top, after the shebang
const BOT_NAME = process.env.BOT_NAME || 'duplicate detection workflow';
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e13e78 and 99242af.

📒 Files selected for processing (1)
  • .github/scripts/auto-close-duplicates.mjs (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (2)
.github/scripts/auto-close-duplicates.mjs (2)

85-118: Pagination safety limit and early-stop look good.

Bumping MAX_PAGES and guarding early termination to page 1 addresses the previous pagination risk.


259-259: Confirm desired failure behavior for the workflow.

autoCloseDuplicates().catch(console.error); logs but exits 0; scheduled runs will appear green even on hard failures. If you want the job to fail on unhandled errors, set process.exitCode = 1 in the catch.

Proposed change (outside selected range):

autoCloseDuplicates().catch((err) => {
  console.error(err);
  process.exitCode = 1; // fail the step on unhandled errors
});

Do you want the automation to fail the workflow on errors, or always succeed with logs?

Comment on lines +6 to +11
headers: {
Authorization: `Bearer ${token}`,
Accept: 'application/vnd.github.v3+json',
'User-Agent': 'auto-close-duplicates-script',
...(body && { 'Content-Type': 'application/json' })
},

🧹 Nitpick (assertive)

⚠️ Potential issue

Ensure Reactions API works: include the 'squirrel-girl' Accept header (and pin API version).

Listing reactions reliably requires the reactions media type; without it, reactions may be missing or incomplete. Also consider pinning the REST API version.

Apply this diff to broaden Accept and pin the API version:

-			Accept: 'application/vnd.github.v3+json',
+			Accept: 'application/vnd.github+json, application/vnd.github.squirrel-girl+json',
+			'X-GitHub-Api-Version': '2022-11-28',

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 6 to 11, the Accept
header must include the reactions media type and pin the REST API version so
reactions are listed reliably; update the headers object to set Accept to
include the squirrel-girl preview media type plus the pinned API version (for
example include "application/vnd.github.squirrel-girl-preview+json" and a pinned
v3 media type like "application/vnd.github.v3+json;version=2022-11-28"), keep
the existing conditional Content-Type spread, and ensure Authorization and
User-Agent remain unchanged.

Comment on lines +24 to +27
function extractDuplicateIssueNumber(commentBody) {
const match = commentBody.match(/#(\d+)/);
return match ? parseInt(match[1], 10) : null;
}

🧹 Nitpick (assertive)

Make duplicate issue extraction resilient to different formats.

Bot comments may link as "#123" or ".../issues/123", and may include multiple references; prefer specific patterns first, then fall back.

Apply this diff:

-function extractDuplicateIssueNumber(commentBody) {
-	const match = commentBody.match(/#(\d+)/);
-	return match ? parseInt(match[1], 10) : null;
-}
+function extractDuplicateIssueNumber(commentBody) {
+	const patterns = [
+		/duplicate(?:\s+of)?\s+#(\d+)/i,   // "duplicate of #123"
+		/duplicate(?:\s+of)?\s+.*?issues\/(\d+)/i, // "... issues/123"
+		/#(\d+)/,                          // generic "#123"
+		/issues\/(\d+)/                    // generic "issues/123"
+	];
+	for (const re of patterns) {
+		const m = String(commentBody || '').match(re);
+		if (m) return parseInt(m[1], 10);
+	}
+	return null;
+}
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 24 to 27, the
extractor only matches "#123" and does not handle full issue URLs or multiple
references; update the function to first search for issue-number patterns in
URLs (e.g. "/issues/123" or full GitHub issue URLs) and return the first numeric
ID found, then fall back to matching "#123" if no URL match exists, and ensure
it handles multiple matches by returning the first valid parseInt result or null
if none.

Comment on lines +36 to +59
await githubRequest(
`/repos/${owner}/${repo}/issues/${issueNumber}`,
token,
'PATCH',
{
state: 'closed',
state_reason: 'not_planned',
labels: ['duplicate']
}
);

await githubRequest(
`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
token,
'POST',
{
body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.

If this is incorrect, please re-open this issue or create a new one.

🤖 Generated with [Task Master Bot]`
}
);
}

⚠️ Potential issue

Do not clobber existing labels when closing; add the 'duplicate' label via the labels API.

PATCHing labels: ['duplicate'] replaces all labels, unintentionally discarding existing labels.

Apply this diff to avoid destructive label replacement:

 	await githubRequest(
 		`/repos/${owner}/${repo}/issues/${issueNumber}`,
 		token,
 		'PATCH',
 		{
 			state: 'closed',
-			state_reason: 'not_planned',
-			labels: ['duplicate']
+			state_reason: 'not_planned'
 		}
 	);
 
+	// Add 'duplicate' label without removing existing ones
+	await githubRequest(
+		`/repos/${owner}/${repo}/issues/${issueNumber}/labels`,
+		token,
+		'POST',
+		{ labels: ['duplicate'] }
+	);
+
 	await githubRequest(
 		`/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
 		token,
 		'POST',
 		{
 			body: `This issue has been automatically closed as a duplicate of #${duplicateOfNumber}.
 
 If this is incorrect, please re-open this issue or create a new one.
 
 🤖 Generated with [Task Master Bot]`
 		}
 	);
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 36 to 59, the current
PATCH request sets labels: ['duplicate'] which overwrites all existing issue
labels; remove the labels field from the PATCH payload so you only change state
and state_reason, and instead call the issues labels API to add the duplicate
label (POST /repos/{owner}/{repo}/issues/{issue_number}/labels with
['duplicate']) after closing the issue so you don't clobber existing labels.

Comment on lines +70 to +73
const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
console.log(`[DEBUG] Repository: ${owner}/${repo}`);


🛠️ Refactor suggestion

Derive owner/repo from GITHUB_REPOSITORY; avoid repo-specific defaults.

Defaulting to a specific repo is risky. Fall back to parsing GITHUB_REPOSITORY and fail fast if neither is present.

Apply this diff:

-	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
-	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
+	const repoSlug = process.env.GITHUB_REPOSITORY;
+	let owner = process.env.GITHUB_REPOSITORY_OWNER;
+	let repo = process.env.GITHUB_REPOSITORY_NAME;
+	if ((!owner || !repo) && repoSlug && repoSlug.includes('/')) {
+		[owner, repo] = repoSlug.split('/');
+	}
+	if (!owner || !repo) {
+		throw new Error('GITHUB_REPOSITORY or GITHUB_REPOSITORY_* env vars must be set');
+	}
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 70 to 73, the script
currently uses repo-specific hardcoded defaults for owner/repo; instead parse
GITHUB_REPOSITORY (format "owner/repo") into owner and repo, use
process.env.GITHUB_REPOSITORY_OWNER and GITHUB_REPOSITORY_NAME only if present,
and if neither a valid GITHUB_REPOSITORY nor the explicit env vars are
available, log an error and exit/fail fast; ensure trimming and simple
validation (split on '/' into two non-empty parts) before assigning owner/repo.

Comment on lines 86 to 87
let foundRecentIssue = false;


🧹 Nitpick (assertive)

Remove unused variable.

foundRecentIssue is assigned but never used.

Apply this diff:

-	let foundRecentIssue = false;
-	
+	// no-op

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 86 to 87, the variable
"foundRecentIssue" is declared but never used; remove the unused declaration
(and any trailing semicolon/blank line if desired) so the code no longer defines
an unused variable and passes linting.

Comment on lines +126 to +131
for (const issue of issues) {
processedCount++;
console.log(
`[DEBUG] Processing issue #${issue.number} (${processedCount}/${issues.length}): ${issue.title}`
);


⚠️ Potential issue

Skip pull requests; this endpoint returns both issues and PRs.

/issues returns both. Closing PRs as “duplicate” would be harmful.

Apply this diff to skip PRs:

 	for (const issue of issues) {
 		processedCount++;
+		if (issue.pull_request) {
+			console.log(`[DEBUG] Skipping PR #${issue.number}`);
+			continue;
+		}
🤖 Prompt for AI Agents
.github/scripts/auto-close-duplicates.mjs around lines 126 to 131: the script
iterates over issues returned by the /issues endpoint which includes pull
requests; modify the loop to detect and skip pull requests by checking for the
presence of the pull_request field on each item (or using issue.pull_request !==
undefined) and continue to the next item when present, ensuring PRs are not
processed or closed as duplicates and adjusting any counters or logs so skipped
PRs are not counted as processed.

Comment on lines +133 to +136
const comments = await githubRequest(
`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
token
);

🧹 Nitpick (assertive)

Fetch more comments to reliably find the latest duplicate comment.

Default is 30; large threads may miss the latest bot comment.

Apply this diff:

-		const comments = await githubRequest(
-			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
+		const comments = await githubRequest(
+			`/repos/${owner}/${repo}/issues/${issue.number}/comments?per_page=100`,
 			token
 		);

If you see issues in large threads, consider adding pagination for comments as a follow-up.

🤖 Prompt for AI Agents
.github/scripts/auto-close-duplicates.mjs around lines 133 to 136: the call to
githubRequest fetching issue comments only uses the default 30 comments which
can miss the latest bot duplicate comment in large threads; update the request
to fetch more comments by adding a per_page query (e.g. ?per_page=100) or
implement proper pagination to retrieve all comment pages and then search those
results for the latest duplicate comment so the bot reliably finds its most
recent comment.
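As a sketch of that follow-up (helper names and the short-page heuristic below are illustrative assumptions): since the script's `githubRequest` resolves to parsed JSON and does not expose response headers, pagination can either parse the `Link` header where headers are available, or simply stop when a page comes back shorter than `per_page`.

```javascript
// Hypothetical helper: page number of the rel="next" link in a GitHub
// `Link` response header, or null on the last page.
function nextPageFrom(linkHeader) {
	if (!linkHeader) return null;
	const match = linkHeader.match(/<[^>]*[?&]page=(\d+)[^>]*>;\s*rel="next"/);
	return match ? parseInt(match[1], 10) : null;
}

// Sketch of a full-pagination loop over a githubRequest-style helper that
// resolves to a parsed JSON array; stops when a page is shorter than per_page.
async function fetchAllComments(request, owner, repo, issueNumber) {
	const perPage = 100;
	const all = [];
	for (let page = 1; ; page++) {
		const batch = await request(
			`/repos/${owner}/${repo}/issues/${issueNumber}/comments?per_page=${perPage}&page=${page}`
		);
		all.push(...batch);
		if (batch.length < perPage) break; // short page => last page
	}
	return all;
}
```

The same loop shape would apply to the reactions endpoint discussed further down.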

Comment on lines +141 to +147
const dupeComments = comments.filter(
(comment) =>
comment.body.includes('Found') &&
comment.body.includes('possible duplicate') &&
comment.user.type === 'Bot'
);
console.log(

🧹 Nitpick (assertive)

Make duplicate-detection filtering more robust and case-insensitive.

Loosen casing and focus on “possible duplicate” to reduce false negatives.

Apply this diff:

-		const dupeComments = comments.filter(
-			(comment) =>
-				comment.body.includes('Found') &&
-				comment.body.includes('possible duplicate') &&
-				comment.user.type === 'Bot'
-		);
+		const dupeComments = comments.filter((comment) => {
+			const body = String(comment.body || '').toLowerCase();
+			return body.includes('possible duplicate') && comment.user?.type === 'Bot';
+		});
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 141 to 147, the
duplicate-detection filter is too strict and case-sensitive; update it to
perform a case-insensitive check focused on the phrase "possible duplicate".
Replace the current comment.body.includes checks with a case-insensitive match
(for example convert comment.body to lowercase and check for 'possible
duplicate' or use a case-insensitive regex) while keeping the existing
comment.user.type === 'Bot' guard; this will reduce false negatives by matching
variants like "Possible Duplicate" or mixed case.

Comment on lines +197 to +203
const reactions = await githubRequest(
`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
token
);
console.log(
`[DEBUG] Issue #${issue.number} - duplicate comment has ${reactions.length} reactions`
);

🧹 Nitpick (assertive)

Guard against missing author reactions due to paging.

Fetch more reactions to avoid missing the author’s thumbs-down on busy threads.

Apply this diff:

-		const reactions = await githubRequest(
-			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
+		const reactions = await githubRequest(
+			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions?per_page=100`,
 			token
 		);
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 197 to 203, the code
only fetches a single page of reactions which can miss the author's thumbs-down
on busy threads; update the fetch to paginate and accumulate all reaction pages
(e.g., request with per_page=100 and follow the Link header "next" until no next
page) so you gather the full reactions array before logging or checking for the
author reaction; ensure the accumulated reactions replace the single-page result
used downstream.

Comment on lines +220 to +228
const duplicateIssueNumber = extractDuplicateIssueNumber(
lastDupeComment.body
);
if (!duplicateIssueNumber) {
console.log(
`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
);
continue;
}

🛠️ Refactor suggestion

Avoid self-referential “duplicate of” closures.

Skip when the extracted duplicate number is the same as the current issue.

Apply this diff:

 		const duplicateIssueNumber = extractDuplicateIssueNumber(
 			lastDupeComment.body
 		);
 		if (!duplicateIssueNumber) {
 			console.log(
 				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
 			);
 			continue;
 		}
+		if (duplicateIssueNumber === issue.number) {
+			console.log(
+				`[DEBUG] Issue #${issue.number} - extracted duplicate points to itself, skipping`
+			);
+			continue;
+		}
🤖 Prompt for AI Agents
.github/scripts/auto-close-duplicates.mjs around lines 220 to 228: the code
currently skips when no duplicateIssueNumber is found but does not guard against
the case where the extracted duplicate number equals the current issue number
(self-referential duplicate); add a check after extraction that compares
duplicateIssueNumber to issue.number (ensuring both are same type or coerce to
Number) and if they are equal log a debug message and continue to skip closing
the issue.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (12)
.github/scripts/auto-close-duplicates.mjs (12)

6-11: Broaden Accept header and pin REST API version (required for Reactions API reliability).

Include the modern media type and pin the API version. This also enables reliable reactions listing.

-			Accept: 'application/vnd.github.v3+json',
+			Accept: 'application/vnd.github+json, application/vnd.github.squirrel-girl+json',
+			'X-GitHub-Api-Version': '2022-11-28',

3-22: Add retry logic with exponential backoff and Retry-After handling.

Transient network/5xx/429 failures will currently abort the whole run.

-async function githubRequest(endpoint, token, method = 'GET', body) {
-	const response = await fetch(`https://api.github.com${endpoint}`, {
-		method,
-		headers: {
-			Authorization: `Bearer ${token}`,
-			Accept: 'application/vnd.github.v3+json',
-			'User-Agent': 'auto-close-duplicates-script',
-			...(body && { 'Content-Type': 'application/json' })
-		},
-		...(body && { body: JSON.stringify(body) })
-	});
-
-	if (!response.ok) {
-		throw new Error(
-			`GitHub API request failed: ${response.status} ${response.statusText}`
-		);
-	}
-
-	return response.json();
+async function githubRequest(endpoint, token, method = 'GET', body, retries = 3) {
+	let lastError;
+	for (let attempt = 0; attempt <= retries; attempt++) {
+		if (attempt > 0) {
+			const base = Math.min(1000 * 2 ** (attempt - 1), 8000);
+			const jitter = Math.floor(Math.random() * 250);
+			const delay = (lastError?.retryAfterSec ? lastError.retryAfterSec * 1000 : base) + jitter;
+			console.log(`[DEBUG] Retry attempt ${attempt} for ${method} ${endpoint} in ${delay}ms`);
+			await new Promise((r) => setTimeout(r, delay));
+		}
+		try {
+			const response = await fetch(`https://api.github.com${endpoint}`, {
+				method,
+				headers: {
+					Authorization: `Bearer ${token}`,
+					Accept: 'application/vnd.github+json, application/vnd.github.squirrel-girl+json',
+					'X-GitHub-Api-Version': '2022-11-28',
+					'User-Agent': 'auto-close-duplicates-script',
+					...(body && { 'Content-Type': 'application/json' })
+				},
+				...(body && { body: JSON.stringify(body) })
+			});
+			if (!response.ok) {
+				const text = await response.text().catch(() => '');
+				const err = new Error(`GitHub API request failed: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`);
+				// Retry on 429 and 5xx
+				if (response.status === 429 || response.status >= 500) {
+					const ra = parseInt(response.headers.get('Retry-After') || '0', 10);
+					err.retryAfterSec = Number.isFinite(ra) ? ra : 0;
+					lastError = err;
+					continue;
+				}
+				throw err;
+			}
+			return await response.json();
+		} catch (e) {
+			// Retry network errors
+			if (e.name === 'FetchError' || e.name === 'TypeError') {
+				lastError = e;
+				continue;
+			}
+			throw e;
+		}
+	}
+	throw lastError || new Error('GitHub API request failed after retries');
 }

24-27: Make duplicate-issue extraction resilient to URLs and phrasing variants.

Handles “duplicate of #123”, full URLs, and generic references.

-function extractDuplicateIssueNumber(commentBody) {
-	const match = commentBody.match(/#(\d+)/);
-	return match ? parseInt(match[1], 10) : null;
-}
+function extractDuplicateIssueNumber(commentBody) {
+	const patterns = [
+		/duplicate(?:\s+of)?\s+#(\d+)/i,
+		/duplicate(?:\s+of)?\s+.*?issues\/(\d+)/i,
+		/#(\d+)/,
+		/issues\/(\d+)/
+	];
+	const body = String(commentBody || '');
+	for (const re of patterns) {
+		const m = body.match(re);
+		if (m) return parseInt(m[1], 10);
+	}
+	return null;
+}

41-45: Avoid clobbering existing labels when closing. Use the labels API to add ‘duplicate’.

PATCHing labels replaces all labels on the issue.

 		{
 			state: 'closed',
 			state_reason: 'not_planned',
-			labels: ['duplicate']
 		}
 	);
 
+	// Add 'duplicate' label without removing existing ones
+	await githubRequest(
+		`/repos/${owner}/${repo}/issues/${issueNumber}/labels`,
+		token,
+		'POST',
+		{ labels: ['duplicate'] }
+	);

52-57: Remove hardcoded bot branding in the closure comment.

Keep it generic to avoid mismatches across repos/environments.

-🤖 Generated with [Task Master Bot]`
+🤖 Auto-closed by duplicate detection workflow`

70-73: Derive owner/repo from GITHUB_REPOSITORY; remove repo-specific defaults.

Prevents accidental operation on the wrong repo.

-	const owner = process.env.GITHUB_REPOSITORY_OWNER || 'eyaltoledano';
-	const repo = process.env.GITHUB_REPOSITORY_NAME || 'claude-task-master';
-	console.log(`[DEBUG] Repository: ${owner}/${repo}`);
+	const repoSlug = process.env.GITHUB_REPOSITORY;
+	let owner = process.env.GITHUB_REPOSITORY_OWNER;
+	let repo = process.env.GITHUB_REPOSITORY_NAME;
+	if ((!owner || !repo) && repoSlug && repoSlug.includes('/')) {
+		[owner, repo] = repoSlug.split('/');
+	}
+	if (!owner || !repo) {
+		throw new Error('GITHUB_REPOSITORY or GITHUB_REPOSITORY_* env vars must be set');
+	}
+	console.log(`[DEBUG] Repository: ${owner}/${repo}`);

85-87: Remove unused variable.

foundRecentIssue is never read.

-	let foundRecentIssue = false;

126-131: Skip pull requests; /issues returns both issues and PRs.

Avoid closing PRs as duplicates.

 	for (const issue of issues) {
 		processedCount++;
+		if (issue.pull_request) {
+			console.log(`[DEBUG] Skipping PR #${issue.number}`);
+			continue;
+		}

133-136: Fetch more comments (or paginate) to find the latest duplicate comment.

The default 30 may miss the latest bot comment on busy threads.

-		const comments = await githubRequest(
-			`/repos/${owner}/${repo}/issues/${issue.number}/comments`,
+		const comments = await githubRequest(
+			`/repos/${owner}/${repo}/issues/${issue.number}/comments?per_page=100`,
 			token
 		);

141-147: Loosen and harden duplicate-detection filter; make it case-insensitive.

Reduce false negatives due to casing/wording.

-		const dupeComments = comments.filter(
-			(comment) =>
-				comment.body.includes('Found') &&
-				comment.body.includes('possible duplicate') &&
-				comment.user.type === 'Bot'
-		);
+		const dupeComments = comments.filter((comment) => {
+			const body = String(comment.body || '').toLowerCase();
+			return body.includes('possible duplicate') && comment.user?.type === 'Bot';
+		});

197-203: Guard against missing author reactions due to paging.

Request more reactions (or paginate) to avoid false positives.

-		const reactions = await githubRequest(
-			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions`,
+		const reactions = await githubRequest(
+			`/repos/${owner}/${repo}/issues/comments/${lastDupeComment.id}/reactions?per_page=100`,
 			token
 		);

220-228: Avoid self-referential “duplicate of” closures.

Skip when the extracted duplicate number equals the current issue.

 		const duplicateIssueNumber = extractDuplicateIssueNumber(
 			lastDupeComment.body
 		);
 		if (!duplicateIssueNumber) {
 			console.log(
 				`[DEBUG] Issue #${issue.number} - could not extract duplicate issue number from comment, skipping`
 			);
 			continue;
 		}
+		if (duplicateIssueNumber === issue.number) {
+			console.log(
+				`[DEBUG] Issue #${issue.number} - extracted duplicate points to itself, skipping`
+			);
+			continue;
+		}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 99242af and 65e0be5.

📒 Files selected for processing (1)
  • .github/scripts/auto-close-duplicates.mjs (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md
🧬 Code Graph Analysis (1)
.github/scripts/auto-close-duplicates.mjs (2)
.github/scripts/backfill-duplicate-comments.mjs (12)
  • token (54-54)
  • response (4-13)
  • owner (68-68)
  • repo (69-69)
  • allIssues (83-83)
  • page (84-84)
  • perPage (85-85)
  • pageIssues (88-91)
  • processedCount (109-109)
  • candidateCount (110-110)
  • comments (120-123)
  • issueUrl (149-149)
scripts/modules/ai-services-unified.js (1)
  • body (244-244)

Comment on lines +1 to +22
#!/usr/bin/env node

async function githubRequest(endpoint, token, method = 'GET', body) {
const response = await fetch(`https://api.github.com${endpoint}`, {
method,
headers: {
Authorization: `Bearer ${token}`,
Accept: 'application/vnd.github.v3+json',
'User-Agent': 'auto-close-duplicates-script',
...(body && { 'Content-Type': 'application/json' })
},
...(body && { body: JSON.stringify(body) })
});

if (!response.ok) {
throw new Error(
`GitHub API request failed: ${response.status} ${response.statusText}`
);
}

return response.json();
}
🧹 Nitpick (assertive)

Deduplicate shared logic across scripts (githubRequest, owner/repo resolution).

backfill-duplicate-comments.mjs duplicates request/header/repo logic. Extract a small shared helper (e.g., .github/scripts/lib/github.js) to keep headers/retry/versioning consistent across automations.

🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 1-22, the
githubRequest and repo/header logic is duplicated in
backfill-duplicate-comments.mjs; extract this shared logic into a new module
(e.g., .github/scripts/lib/github.js or .mjs) that exports a reusable
githubRequest (and helper to resolve owner/repo from env/args), centralize
headers (Authorization, Accept, User-Agent, Content-Type), and any
retry/versioning behavior; then update both scripts to import that helper and
remove the duplicated code so both use the single shared function with
consistent headers and error handling (use relative import paths, preserve
existing function signature, and ensure callers JSON.stringify body when
needed).
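A rough sketch of what that shared module could look like — `.github/scripts/lib/github.js` is the suggested path, not an existing file, and the helper names are assumptions. Splitting header construction into a pure function keeps it testable without touching the network:

```javascript
// Centralized GitHub API headers (sketch of the proposed shared module).
function buildHeaders(token, hasBody) {
	return {
		Authorization: `Bearer ${token}`,
		Accept: 'application/vnd.github.v3+json',
		'User-Agent': 'task-master-automation',
		...(hasBody && { 'Content-Type': 'application/json' })
	};
}

// Shared request helper preserving the existing signature.
async function githubRequest(endpoint, token, method = 'GET', body) {
	const response = await fetch(`https://api.github.com${endpoint}`, {
		method,
		headers: buildHeaders(token, Boolean(body)),
		...(body && { body: JSON.stringify(body) })
	});
	if (!response.ok) {
		const text = await response.text().catch(() => '');
		throw new Error(
			`GitHub API request failed: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`
		);
	}
	return response.json();
}

console.log(buildHeaders('token', true)['Content-Type']); // application/json
```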

Comment on lines +15 to +19
if (!response.ok) {
throw new Error(
`GitHub API request failed: ${response.status} ${response.statusText}`
);
}
🧹 Nitpick (assertive)

Improve error diagnostics by including response body.

Surface API error payloads to speed up debugging (rate limits, validation errors, etc.).

-	if (!response.ok) {
-		throw new Error(
-			`GitHub API request failed: ${response.status} ${response.statusText}`
-		);
-	}
+	if (!response.ok) {
+		const text = await response.text().catch(() => '');
+		throw new Error(
+			`GitHub API request failed: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`
+		);
+	}
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 15 to 19, the thrown
error for non-ok responses lacks the response body; read the response payload
(await response.text() or attempt JSON parse) before throwing and include that
payload in the Error message (optionally truncating long bodies) so the thrown
error contains status, statusText and the response body for better diagnostics.
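The truncation the prompt mentions can be a one-liner. A minimal sketch with a hypothetical `truncate` helper that could feed the error message:

```javascript
// Cap the response body included in error messages (sketch).
function truncate(text, max = 500) {
	return text.length > max ? `${text.slice(0, max)}…` : text;
}

console.log(truncate('rate limit exceeded')); // rate limit exceeded
console.log(truncate('x'.repeat(600)).length); // 501
```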

Comment on lines +103 to +107
// If all issues on this page are newer than 3 days, we can stop
if (oldEnoughIssues.length === 0 && page === 1) {
foundRecentIssue = true;
break;
}
🛠️ Refactor suggestion

⚠️ Potential issue

Fix pagination: early break can miss older issues beyond page 1.

If there are >100 issues newer than 3 days, page 1 contains only “too new” results; breaking here skips older issues on subsequent pages.

-		// If all issues on this page are newer than 3 days, we can stop
-		if (oldEnoughIssues.length === 0 && page === 1) {
-			foundRecentIssue = true;
-			break;
-		}
+		// Keep paginating; older issues may exist on subsequent pages
📝 Committable suggestion


Suggested change
// If all issues on this page are newer than 3 days, we can stop
if (oldEnoughIssues.length === 0 && page === 1) {
foundRecentIssue = true;
break;
}
// Filter for issues created more than 3 days ago
const oldEnoughIssues = pageIssues.filter(
(issue) => new Date(issue.created_at) <= threeDaysAgo
);
allIssues.push(...oldEnoughIssues);
// Keep paginating; older issues may exist on subsequent pages
page++;
// Safety limit to avoid infinite loops
if (page > MAX_PAGES) {
console.log(`[WARNING] Reached maximum page limit of ${MAX_PAGES}`);
break;
}
🤖 Prompt for AI Agents
In .github/scripts/auto-close-duplicates.mjs around lines 103-107, the current
early break when oldEnoughIssues.length === 0 && page === 1 skips older issues
on subsequent pages; change the logic so that if oldEnoughIssues.length === 0
you do NOT set foundRecentIssue or break — instead continue to the next page (or
stop only when there are no more pages/results). Only set foundRecentIssue and
break when you actually detect recent issues on the current page; otherwise keep
paginating until results are exhausted.
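The corrected loop can be sketched against a fake page source. `collectOldIssues` and `fetchPage` are hypothetical names, and `fetchPage` is synchronous here for illustration — the real script awaits `githubRequest` per page:

```javascript
// Keep paging until the API returns an empty page; filter each page for
// issues older than the cutoff instead of breaking early (sketch).
function collectOldIssues(fetchPage, cutoff, maxPages = 10) {
	const oldEnough = [];
	for (let page = 1; page <= maxPages; page++) {
		const pageIssues = fetchPage(page);
		if (pageIssues.length === 0) break; // results exhausted
		oldEnough.push(
			...pageIssues.filter((issue) => new Date(issue.created_at) <= cutoff)
		);
	}
	return oldEnough;
}

// Fake pages: page 1 is all "too new", page 2 holds an older issue.
const pages = {
	1: [{ number: 10, created_at: '2025-08-12T00:00:00Z' }],
	2: [{ number: 3, created_at: '2025-08-01T00:00:00Z' }]
};
const result = collectOldIssues(
	(page) => pages[page] || [],
	new Date('2025-08-10T00:00:00Z')
);
console.log(result.map((i) => i.number)); // [ 3 ]
```

With the original early break, page 1 (all too new) would have ended the scan and issue #3 on page 2 would never be considered.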

@Crunchyman-ralph Crunchyman-ralph merged commit 5d94f1b into next Aug 13, 2025
22 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Aug 13, 2025
16 tasks
stephanschielke pushed a commit to stephanschielke/cursor-task-master that referenced this pull request Aug 22, 2025
* chore: add a bunch of automations

* chore: run format

* Update .github/scripts/auto-close-duplicates.mjs

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: run format

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
davidgilady pushed a commit to davidgilady/claude-task-master that referenced this pull request Sep 23, 2025
sfc-gh-dflippo pushed a commit to sfc-gh-dflippo/task-master-ai that referenced this pull request Dec 4, 2025