
chore: add project AGENTS and quality command#1970

Merged
alexeyv merged 2 commits into main from chore/project-agents-quality on Mar 14, 2026

Conversation


@alexeyv alexeyv commented Mar 14, 2026

Summary

  • add a minimal project-level AGENTS.md for BMAD-METHOD
  • add npm run quality as a local aggregate quality gate
  • note that the quality workflow and local quality command should stay aligned

Testing

  • npm ci && npm run quality


augmentcode bot commented Mar 14, 2026

🤖 Augment PR Summary

Summary: Adds a lightweight project-level agent guide and a single local “quality gate” command to match CI.

Changes:

  • Introduced AGENTS.md with repo contribution/verification expectations
  • Added npm run quality aggregating the same checks run in .github/workflows/quality.yaml, and documented the alignment in the workflow



@augmentcode augmentcode bot left a comment


Review completed. No suggestions at this time.


@coderabbitai

coderabbitai bot commented Mar 14, 2026

📝 Walkthrough

This PR adds quality-assurance tooling: a "quality" npm script that chains multiple validation and testing commands, a new AGENTS.md documenting the BMAD-METHOD framework, and a comment in the quality workflow to keep the CI configuration and the npm script aligned.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Quality Tooling Setup: `.github/workflows/quality.yaml`, `package.json` | Added an alignment comment to the workflow and introduced a new "quality" npm script that chains format checking, linting, markdown linting, docs building, schema validation, schema testing, installation testing, and reference validation. |
| Documentation: `AGENTS.md` | New documentation file introducing the BMAD-METHOD framework, with guidelines for structured agent-assisted software delivery, including Conventional Commits usage and quality-check requirements. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes


Suggested reviewers

  • bmadcode
🚥 Pre-merge checks: ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately summarizes the main changes: adding AGENTS.md documentation and a quality npm command. |
| Description check | ✅ Passed | The description clearly outlines the three main objectives and provides testing instructions, all directly related to the changeset. |
| Docstring coverage | ✅ Passed | No functions found in the changed files to evaluate; docstring coverage check skipped. |





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
package.json (1)

42-44: Overlap between quality and test scripts creates ambiguity.

Both scripts run overlapping checks in different orders:

  • quality: format:check → lint → lint:md → docs:build → validate:schemas → test:schemas → test:install → validate:refs
  • test: test:schemas → test:refs → test:install → validate:schemas → lint → lint:md → format:check

With two similar-but-different aggregate scripts, contributors may be unsure which to run. AGENTS.md says to run quality, but npm test is a standard convention. Consider either:

  1. Documenting when to use each, or
  2. Making test delegate to quality (or vice versa)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` around lines 42 - 44, The package.json defines two overlapping
aggregate scripts ("quality" and "test") with different orders causing
ambiguity; pick one canonical workflow and have the other delegate to it to
avoid divergence: update the "test" script to simply run "npm run quality" (or
conversely make "quality" run "npm test") so both commands execute the exact
same tasks and order, and adjust AGENTS.md to reference the canonical script
name ("quality" or "test") accordingly; locate the scripts by their names
"quality" and "test" in package.json to make the change.
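The delegation option could be sketched like this (sub-script names are taken from the comment above; the real package.json may list additional steps or different versions of these):

```json
{
  "scripts": {
    "quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run validate:schemas && npm run test:schemas && npm run test:install && npm run validate:refs",
    "test": "npm run quality"
  }
}
```

With this shape, `npm test` and `npm run quality` always execute the same tasks in the same order, so the two aggregates cannot drift apart.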
AGENTS.md (2)

7-7: Conventional Commits rule lacks actionable guidance.

The rule references "Conventional Commits" but provides no link to the specification, allowed types (feat, fix, chore, etc.), or scope conventions used in this project. Contributors (human or AI) have no way to know what's valid.

Consider adding a link: [Conventional Commits](https://www.conventionalcommits.org/)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` at line 7, Update the "Use Conventional Commits for every commit."
rule in AGENTS.md to be actionable by adding a link to the Conventional Commits
spec and enumerating the allowed commit types and scope convention used in this
repo (e.g., list common types like feat, fix, docs, chore, refactor, test, perf
and the expected scope format such as <scope>: subject or module/component
names), and include an example commit message showing type(scope): short
description and a link to https://www.conventionalcommits.org/ for full details.

8-9: Behavioral difference between local quality and CI is undocumented.

The quality script runs checks sequentially (stops on first failure), whereas the CI workflow runs them in parallel jobs (all failures surface at once). This semantic difference affects the developer experience: locally you fix issues one-by-one; in CI you see everything at once.

This may be intentional, but the documentation implies they're equivalent ("mirrors the checks"). Consider noting this distinction or explaining the trade-off.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` around lines 8 - 9, Update AGENTS.md to note the behavioral
difference between the local "quality" script and CI: state that the documented
command `npm ci && npm run quality` runs checks sequentially (stops at first
failure) while the CI defined in ".github/workflows/quality.yaml" runs checks in
parallel jobs (reports all failures at once), and add a brief recommendation
(e.g., run the CI workflow locally or add a script option that runs all checks
and aggregates failures) so developers understand the trade-off and how to
reproduce CI behavior locally.
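One way to reproduce CI's all-failures-at-once feedback locally is a parallel aggregate script. This is a sketch using the npm-run-all package; the script name `quality:all` and the subset of checks shown are assumptions, not taken from this repo:

```json
{
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  },
  "scripts": {
    "quality:all": "npm-run-all --continue-on-error --parallel format:check lint lint:md"
  }
}
```

`--continue-on-error` keeps the remaining tasks running after one fails, so the final report surfaces every failure, similar to CI's parallel jobs.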
.github/workflows/quality.yaml (3)

13-16: No push trigger—quality checks won't run on direct commits to protected branches.

The workflow triggers on pull_request and workflow_dispatch only. If someone pushes directly to main (admins, automation, or if branch protection allows), quality checks won't run. This could allow regressions to land undetected.

Consider adding:

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: ["**"]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/quality.yaml around lines 13 - 16, The workflow currently
triggers only on the "pull_request" and "workflow_dispatch" events, so add a
"push" trigger to ensure checks run on direct commits: update the top-level "on"
block to include a "push" entry (e.g., push: branches: [main]) alongside the
existing pull_request and workflow_dispatch entries so the workflow triggers on
direct pushes to main as well as PRs and manual runs.

18-117: CI jobs run in parallel but npm script runs sequentially—feedback loops differ.

The five workflow jobs (prettier, eslint, markdownlint, docs, validate) run in parallel, surfacing all failures at once. The local npm run quality runs the same checks sequentially with &&, stopping at the first failure.

This is a valid design choice (fail-fast locally, comprehensive feedback in CI), but the alignment comment on line 11 implies equivalence. If this asymmetry is intentional, consider documenting it. If not, you could add needs: dependencies to serialize CI, or use a parallel runner locally (e.g., npm-run-all --parallel).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/quality.yaml around lines 18 - 117, The workflow runs the
jobs prettier, eslint, markdownlint, docs and validate in parallel while the
local npm run quality runs those checks sequentially, so update the workflow to
match the intended behavior: either (A) document the asymmetry by editing the
comment near line 11 to state that CI intentionally runs jobs in parallel while
local npm runs sequentially, or (B) make CI run sequentially by adding needs
dependencies (e.g., make eslint need: prettier, markdownlint need: eslint, docs
need: markdownlint, validate need: docs) to serialize execution, or (C) change
the npm script to run checks in parallel (e.g., use npm-run-all --parallel) so
CI and local tooling align; choose one approach and apply it consistently across
the jobs named prettier, eslint, markdownlint, docs and validate.
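Option (B) could look roughly like the sketch below. The job names come from the comment above, but the step contents are placeholders; a real change would add only the `needs:` keys to the existing job definitions:

```yaml
jobs:
  prettier:
    runs-on: ubuntu-latest
    steps:
      - run: npm run format:check   # existing steps unchanged (sketch)
  eslint:
    needs: prettier                 # runs only after prettier succeeds
    runs-on: ubuntu-latest
    steps:
      - run: npm run lint
  markdownlint:
    needs: eslint
    runs-on: ubuntu-latest
    steps:
      - run: npm run lint:md
```

The trade-off is slower CI wall-clock time, since jobs that previously ran concurrently now form a chain.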

11-11: Alignment comment has no enforcement mechanism.

The comment asks maintainers to "keep this workflow aligned" with the npm script, but there's no automated check to detect drift. If someone adds a step to the workflow and forgets to update package.json (or vice versa), the misalignment will go unnoticed.

Consider adding a CI step that programmatically verifies the workflow steps match the npm script, or at minimum add a checklist item to the PR template.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/quality.yaml at line 11, The comment notes there's no
enforcement that the .github/workflows/quality.yaml steps stay aligned with the
"quality" npm script; add an automated check job (e.g., "verify-quality-sync")
to that workflow which runs at PR/CI time and programmatically compares
package.json's "scripts.quality" content to the workflow's job/step list
(implement as a small Node or shell script invoked by the job that fails the
build on mismatch), or alternatively add a PR template checklist entry to remind
authors to update both; reference the workflow name/anchor in
.github/workflows/quality.yaml and the "quality" npm script in package.json when
implementing the check.
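The comparison at the heart of such a `verify-quality-sync` job could be sketched as below. The command lists are hardcoded literals assumed from the review comments; a real check would read them from package.json and parse the workflow YAML instead:

```javascript
// Split an npm script chained with "&&" into its individual commands.
function splitChain(script) {
  return script.split("&&").map((cmd) => cmd.trim()).filter(Boolean);
}

// Compare the two command lists as ordered sequences and report mismatches.
function findDrift(localCmds, ciCmds) {
  const drift = [];
  const len = Math.max(localCmds.length, ciCmds.length);
  for (let i = 0; i < len; i++) {
    if (localCmds[i] !== ciCmds[i]) {
      drift.push({ index: i, local: localCmds[i] ?? null, ci: ciCmds[i] ?? null });
    }
  }
  return drift;
}

// Hypothetical inputs; a real check would load these from the repo files.
const quality = "npm run format:check && npm run lint && npm run lint:md";
const ciSteps = ["npm run format:check", "npm run lint", "npm run lint:md"];

const drift = findDrift(splitChain(quality), ciSteps);
if (drift.length > 0) {
  console.error("quality script and CI workflow have drifted:", drift);
  process.exit(1);
}
console.log("quality script and CI workflow are aligned");
```

Wired into the workflow as a small `node scripts/verify-quality-sync.js` step, this fails the build whenever the script and workflow diverge.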
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/quality.yaml:
- Line 10: The comment states the workflow mentions "Bundle validation (web
bundle integrity)" but doesn't run any validation; either remove that comment or
add a step to run the existing rebundle/validation script. Fix by updating
.github/workflows/quality.yaml: either delete or adjust the descriptive comment
line mentioning "Bundle validation (web bundle integrity)" if you don't intend
to run validation, or add a job/step that invokes the package.json "rebundle"
script (e.g., a step that runs npm run rebundle or yarn rebundle) and ensure the
step name clearly references bundle validation so it matches the comment and the
quality pipeline actually executes the rebundle check.

In `@AGENTS.md`:
- Around line 8-9: Add a prerequisite note to AGENTS.md stating the required
Node.js version (node >= 20.0.0) before the npm steps so contributors run `npm
ci && npm run quality` with a compatible runtime; reference that this
requirement comes from package.json's "engines" field and suggest running `node
--version` or upgrading Node if older to avoid cryptic failures when executing
`npm ci` and `npm run quality`.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d694604d-ea81-4440-85e0-13ed6cc488ea

📥 Commits

Reviewing files that changed from the base of the PR and between d2f15ef and 8486fca.

📒 Files selected for processing (3)
  • .github/workflows/quality.yaml
  • AGENTS.md
  • package.json

alexeyv force-pushed the chore/project-agents-quality branch from a425de8 to 9f81bb3 on Mar 14, 2026 06:25
alexeyv merged commit 405fd93 into main on Mar 14, 2026
5 checks passed
alexeyv deleted the chore/project-agents-quality branch on Mar 14, 2026 06:26
alexeyv added a commit that referenced this pull request Mar 14, 2026
* chore: add project AGENTS and quality command

* chore: remove stale bundle validation note
