AI README Antispam

A GitHub Action that uses AI to detect spammy README edits and protect your repository from spam pull requests.


Demo video: spammy-readme-edits.mp4

Features

  • AI-Powered Detection: Uses advanced AI models to analyze README changes
  • Smart Classification: Categorizes changes as spam, unknown, or legitimate
  • Flexible Integration: Use outputs to comment, label, or fail workflows
  • Fast & Reliable: Built on Vercel AI Action infrastructure

How It Works

  1. Detects README changes: Extracts README file modifications from the current PR
  2. Analyzes with AI: Uses AI to understand the context and intent of changes
  3. Returns results: Provides structured outputs for you to act upon

The action identifies spam as:

  • Promotional or irrelevant link additions
  • Trivial changes without real value
  • SEO link farming attempts

It recognizes legitimate changes as:

  • Typo fixes
  • Documentation improvements
  • Technical examples and guides
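The classification flow above can be sketched roughly as follows. This is an illustration only, not the action's actual implementation: `buildPrompt`, `classify`, and the stubbed model function are stand-ins for the real AI request the action makes.

```javascript
// Illustrative sketch of the detect -> analyze -> classify flow.
// Not the action's real code; the model call is stubbed out.

function buildPrompt(readmeDiff) {
  // Frame the README diff so the model can judge intent, using the
  // spam/legitimate criteria listed above.
  return [
    "Classify this README change as 'spam', 'legitimate', or 'unknown'.",
    "Spam: promotional or irrelevant links, trivial edits, SEO link farming.",
    "Legitimate: typo fixes, documentation improvements, technical examples.",
    "",
    "Diff:",
    readmeDiff,
  ].join("\n");
}

function classify(readmeDiff, askModel) {
  // Normalize whatever the model says to one of the three labels.
  const answer = askModel(buildPrompt(readmeDiff)).trim().toLowerCase();
  return ["spam", "legitimate", "unknown"].includes(answer) ? answer : "unknown";
}

// Stubbed model; a real run would call the configured AI model instead.
const fakeModel = () => "spam";
console.log(classify("+ totally unrelated promo link", fakeModel)); // prints "spam"
```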

Inputs

| Input | Required | Default | Description |
|---|---|---|---|
| `api-key` | Yes | - | API key for AI Gateway (get one at vercel.com/ai-gateway) |
| `github-token` | No | `${{ github.token }}` | `GITHUB_TOKEN` (`issues: write`, `pull-requests: write`) or a repo-scoped PAT |
| `model` | No | `openai/gpt-4o` | AI model to use (see supported models) |
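For reference, a step that sets the optional inputs explicitly might look like this. It is a sketch built from the table above; any model id supported by the AI Gateway should be valid for `model`:

```yaml
- uses: rbadillap/ai-readme-antispam@v1
  id: spam-check
  with:
    api-key: ${{ secrets.AI_GATEWAY_API_KEY }}
    github-token: ${{ secrets.GITHUB_TOKEN }}
    model: openai/gpt-4o
```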

Outputs

| Output | Description |
|---|---|
| `spam-type` | Type of spam detected: `spam`, `unknown`, or `none` |
| `analysis-reason` | Detailed reason for the classification |
| `raw-json` | Full JSON response from the AI analysis |
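As a sketch of how these outputs can be consumed, the illustrative step below routes `unknown` classifications to a human instead of acting automatically (the AI output is passed through `env:` rather than interpolated inline, so quotes or newlines in it cannot break the shell command):

```yaml
- name: Flag unclear changes for human review
  if: steps.spam-check.outputs.spam-type == 'unknown'
  env:
    ANALYSIS_REASON: ${{ steps.spam-check.outputs.analysis-reason }}
  run: echo "::notice::Needs human review: ${ANALYSIS_REASON}"
```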

Usage

Basic Example

```yaml
name: Spam Detection
on:
  pull_request:
    paths:
      - '**/README*'

jobs:
  detect-spam:
    runs-on: ubuntu-latest
    steps:
      - uses: rbadillap/ai-readme-antispam@v1
        id: spam-check
        with:
          api-key: ${{ secrets.AI_GATEWAY_API_KEY }}

      - run: |
          echo "Type: ${{ steps.spam-check.outputs.spam-type }}"
          echo "Reason: ${{ steps.spam-check.outputs.analysis-reason }}"
```

Advanced Examples

Auto-comment on spam detection

```yaml
- uses: rbadillap/ai-readme-antispam@v1
  id: spam-check
  with:
    api-key: ${{ secrets.AI_GATEWAY_API_KEY }}

- uses: actions/github-script@v7
  if: steps.spam-check.outputs.spam-type == 'spam'
  env:
    # Pass the AI-generated reason via the environment instead of template
    # interpolation, so quotes or newlines in it cannot break the script.
    ANALYSIS_REASON: ${{ steps.spam-check.outputs.analysis-reason }}
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      await github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: "The changes to the README in this PR may not align with the project documentation standards. Please review and ensure the modifications add meaningful value to the project.\n\n" +
              `Reason: ${process.env.ANALYSIS_REASON}\n\n` +
              "Feel free to update the PR if needed!"
      })
```

Auto-close PR on spam detection

```yaml
- uses: rbadillap/ai-readme-antispam@v1
  id: spam-check
  with:
    api-key: ${{ secrets.AI_GATEWAY_API_KEY }}

- uses: actions/github-script@v7
  if: steps.spam-check.outputs.spam-type == 'spam'
  env:
    # Pass the AI-generated reason via the environment instead of template
    # interpolation, so quotes or newlines in it cannot break the script.
    ANALYSIS_REASON: ${{ steps.spam-check.outputs.analysis-reason }}
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      await github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: "This PR may have been opened accidentally or contains changes that don't align with our documentation guidelines. I'm closing it now, but feel free to open a new PR with more substantial documentation improvements!\n\n" +
              `Reason: ${process.env.ANALYSIS_REASON}`
      })

      await github.rest.pulls.update({
        owner: context.repo.owner,
        repo: context.repo.repo,
        pull_number: context.issue.number,
        state: 'closed'
      })
```

Fail workflow on spam

```yaml
- uses: rbadillap/ai-readme-antispam@v1
  id: spam-check
  with:
    api-key: ${{ secrets.AI_GATEWAY_API_KEY }}

- name: Fail if spam detected
  if: steps.spam-check.outputs.spam-type == 'spam'
  env:
    # Avoid interpolating untrusted AI output directly into the shell command.
    ANALYSIS_REASON: ${{ steps.spam-check.outputs.analysis-reason }}
  run: |
    echo "::error::README changes may not meet documentation standards: ${ANALYSIS_REASON}"
    exit 1
```

Label PRs automatically

```yaml
- uses: rbadillap/ai-readme-antispam@v1
  id: spam-check
  with:
    api-key: ${{ secrets.AI_GATEWAY_API_KEY }}

- uses: actions/github-script@v7
  if: steps.spam-check.outputs.spam-type == 'spam'
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      await github.rest.issues.addLabels({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        labels: ['spam', 'needs-review']
      })
```

Live Example

See this action in action: rbadillap/test-readme-antispam, a live repository that uses this action to automatically detect and close spam PRs.

License

MIT © Ronny Badilla

Support
