chore: add project AGENTS and quality command #1970
Conversation
🤖 Augment PR Summary
Summary: Adds a lightweight project-level agent guide and a single local "quality gate" command to match CI.
Changes:
📝 Walkthrough
This PR introduces quality assurance tooling by adding a "quality" npm script that chains multiple validation and testing commands, documenting the BMAD-METHOD framework in a new AGENTS.md file, and adding a comment to the quality workflow to maintain alignment between CI configuration and npm scripts.
Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
📝 Coding Plan
Actionable comments posted: 2
🧹 Nitpick comments (6)
package.json (1)
42-44: Overlap between `quality` and `test` scripts creates ambiguity. Both scripts run overlapping checks in different orders:

- `quality`: format:check → lint → lint:md → docs:build → validate:schemas → test:schemas → test:install → validate:refs
- `test`: test:schemas → test:refs → test:install → validate:schemas → lint → lint:md → format:check

With two similar-but-different aggregate scripts, contributors may be unsure which to run. AGENTS.md says to run `quality`, but `npm test` is a standard convention. Consider either:

- Documenting when to use each, or
- Making `test` delegate to `quality` (or vice versa)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 42-44, The package.json defines two overlapping aggregate scripts ("quality" and "test") with different orders causing ambiguity; pick one canonical workflow and have the other delegate to it to avoid divergence: update the "test" script to simply run "npm run quality" (or conversely make "quality" run "npm test") so both commands execute the exact same tasks and order, and adjust AGENTS.md to reference the canonical script name ("quality" or "test") accordingly; locate the scripts by their names "quality" and "test" in package.json to make the change.
AGENTS.md (2)
7-7: Conventional Commits rule lacks actionable guidance. The rule references "Conventional Commits" but provides no link to the specification, allowed types (`feat`, `fix`, `chore`, etc.), or scope conventions used in this project. Contributors (human or AI) have no way to know what's valid. Consider adding a link:

[Conventional Commits](https://www.conventionalcommits.org/)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` at line 7, Update the "Use Conventional Commits for every commit." rule in AGENTS.md to be actionable by adding a link to the Conventional Commits spec and enumerating the allowed commit types and scope convention used in this repo (e.g., list common types like feat, fix, docs, chore, refactor, test, perf and the expected scope format such as <scope>: subject or module/component names), and include an example commit message showing type(scope): short description and a link to https://www.conventionalcommits.org/ for full details.
8-9: Behavioral difference between local `quality` and CI is undocumented. The `quality` script runs checks sequentially (stops on first failure), whereas the CI workflow runs them in parallel jobs (all failures surface at once). This semantic difference affects the developer experience: locally you fix issues one by one; in CI you see everything at once. This may be intentional, but the documentation implies they're equivalent ("mirrors the checks"). Consider noting this distinction or explaining the trade-off.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 8-9, Update AGENTS.md to note the behavioral difference between the local "quality" script and CI: state that the documented command `npm ci && npm run quality` runs checks sequentially (stops at first failure) while the CI defined in ".github/workflows/quality.yaml" runs checks in parallel jobs (reports all failures at once), and add a brief recommendation (e.g., run the CI workflow locally or add a script option that runs all checks and aggregates failures) so developers understand the trade-off and how to reproduce CI behavior locally.
.github/workflows/quality.yaml (3)
13-16: No `push` trigger: quality checks won't run on direct commits to protected branches. The workflow triggers on `pull_request` and `workflow_dispatch` only. If someone pushes directly to `main` (admins, automation, or if branch protection allows), quality checks won't run. This could allow regressions to land undetected. Consider adding:

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: ["**"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/quality.yaml around lines 13 - 16, The workflow currently triggers only on the "pull_request" and "workflow_dispatch" events, so add a "push" trigger to ensure checks run on direct commits: update the top-level "on" block to include a "push" entry (e.g., push: branches: [main]) alongside the existing pull_request and workflow_dispatch entries so the workflow triggers on direct pushes to main as well as PRs and manual runs.
18-117: CI jobs run in parallel but the npm script runs sequentially; feedback loops differ. The five workflow jobs (`prettier`, `eslint`, `markdownlint`, `docs`, `validate`) run in parallel, surfacing all failures at once. The local `npm run quality` runs the same checks sequentially with `&&`, stopping at the first failure. This is a valid design choice (fail-fast locally, comprehensive feedback in CI), but the alignment comment on line 11 implies equivalence. If this asymmetry is intentional, consider documenting it. If not, you could add `needs:` dependencies to serialize CI, or use a parallel runner locally (e.g., `npm-run-all --parallel`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/quality.yaml around lines 18 - 117, The workflow runs the jobs prettier, eslint, markdownlint, docs and validate in parallel while the local npm run quality runs those checks sequentially, so update the workflow to match the intended behavior: either (A) document the asymmetry by editing the comment near line 11 to state that CI intentionally runs jobs in parallel while local npm runs sequentially, or (B) make CI run sequentially by adding needs dependencies (e.g., make eslint need: prettier, markdownlint need: eslint, docs need: markdownlint, validate need: docs) to serialize execution, or (C) change the npm script to run checks in parallel (e.g., use npm-run-all --parallel) so CI and local tooling align; choose one approach and apply it consistently across the jobs named prettier, eslint, markdownlint, docs and validate.
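For option C above, the local script could surface all failures at once the way CI does by using a parallel runner. A sketch (this assumes adding the npm-run-all package as a devDependency, and whether every step is actually safe to run concurrently, e.g. docs:build alongside the validators, would need checking):

```json
{
  "scripts": {
    "quality": "npm-run-all --parallel --continue-on-error format:check lint lint:md docs:build validate:schemas test:schemas test:install validate:refs"
  }
}
```

The `--continue-on-error` flag keeps the remaining tasks running after one fails, mirroring CI's all-failures-at-once feedback while still exiting non-zero overall.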
11-11: Alignment comment has no enforcement mechanism. The comment asks maintainers to "keep this workflow aligned" with the npm script, but there's no automated check to detect drift. If someone adds a step to the workflow and forgets to update `package.json` (or vice versa), the misalignment will go unnoticed. Consider adding a CI step that programmatically verifies the workflow steps match the npm script, or at minimum add a checklist item to the PR template.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/quality.yaml at line 11, The comment notes there's no enforcement that the .github/workflows/quality.yaml steps stay aligned with the "quality" npm script; add an automated check job (e.g., "verify-quality-sync") to that workflow which runs at PR/CI time and programmatically compares package.json's "scripts.quality" content to the workflow's job/step list (implement as a small Node or shell script invoked by the job that fails the build on mismatch), or alternatively add a PR template checklist entry to remind authors to update both; reference the workflow name/anchor in .github/workflows/quality.yaml and the "quality" npm script in package.json when implementing the check.
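One shape the comparison logic inside such a verify-quality-sync job could take, sketched with hypothetical names and run against inline sample data rather than the real package.json and workflow file:

```javascript
// Split an aggregate script like "npm run a && npm run b" into its
// ordered sub-command names: ["a", "b"].
function scriptSteps(script) {
  return script
    .split("&&")
    .map((part) => part.trim().replace(/^npm run\s+/, ""));
}

// Report steps present in the local "quality" script but missing from
// CI, and vice versa. Two empty arrays mean the two are in sync.
function qualityDrift(qualityScript, ciCommands) {
  const local = scriptSteps(qualityScript);
  const ciSet = new Set(ciCommands);
  const localSet = new Set(local);
  return {
    missingInCi: local.filter((step) => !ciSet.has(step)),
    missingLocally: ciCommands.filter((step) => !localSet.has(step)),
  };
}

// Sample data standing in for package.json's scripts.quality and the
// run commands extracted from the workflow's jobs.
const quality = "npm run format:check && npm run lint && npm run lint:md";
const ciSteps = ["format:check", "lint", "lint:md"];

const drift = qualityDrift(quality, ciSteps);
if (drift.missingInCi.length || drift.missingLocally.length) {
  console.error("quality script and CI workflow have drifted:", drift);
  process.exitCode = 1;
} else {
  console.log("quality script and CI workflow are in sync");
}
```

In a real check, `qualityScript` would come from reading package.json and `ciCommands` from parsing the workflow's `run:` lines; both extraction steps are left out of this sketch.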
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/quality.yaml:
- Line 10: The comment states the workflow mentions "Bundle validation (web
bundle integrity)" but doesn't run any validation; either remove that comment or
add a step to run the existing rebundle/validation script. Fix by updating
.github/workflows/quality.yaml: either delete or adjust the descriptive comment
line mentioning "Bundle validation (web bundle integrity)" if you don't intend
to run validation, or add a job/step that invokes the package.json "rebundle"
script (e.g., a step that runs npm run rebundle or yarn rebundle) and ensure the
step name clearly references bundle validation so it matches the comment and the
quality pipeline actually executes the rebundle check.
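If the validation route is chosen, the added job might look roughly like this (the step name and the `rebundle` script come from the comment above; the runner and action versions are assumptions):

```yaml
bundle:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
    - run: npm ci
    - name: Bundle validation (web bundle integrity)
      run: npm run rebundle
```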
In `@AGENTS.md`:
- Around line 8-9: Add a prerequisite note to AGENTS.md stating the required
Node.js version (node >= 20.0.0) before the npm steps so contributors run `npm
ci && npm run quality` with a compatible runtime; reference that this
requirement comes from package.json's "engines" field and suggest running `node
--version` or upgrading Node if older to avoid cryptic failures when executing
`npm ci` and `npm run quality`.
---
Nitpick comments:
In @.github/workflows/quality.yaml:
- Around line 13-16: The workflow currently triggers only on the "pull_request"
and "workflow_dispatch" events, so add a "push" trigger to ensure checks run on
direct commits: update the top-level "on" block to include a "push" entry (e.g.,
push: branches: [main]) alongside the existing pull_request and
workflow_dispatch entries so the workflow triggers on direct pushes to main as
well as PRs and manual runs.
- Around line 18-117: The workflow runs the jobs prettier, eslint, markdownlint,
docs and validate in parallel while the local npm run quality runs those checks
sequentially, so update the workflow to match the intended behavior: either (A)
document the asymmetry by editing the comment near line 11 to state that CI
intentionally runs jobs in parallel while local npm runs sequentially, or (B)
make CI run sequentially by adding needs dependencies (e.g., make eslint need:
prettier, markdownlint need: eslint, docs need: markdownlint, validate need:
docs) to serialize execution, or (C) change the npm script to run checks in
parallel (e.g., use npm-run-all --parallel) so CI and local tooling align;
choose one approach and apply it consistently across the jobs named prettier,
eslint, markdownlint, docs and validate.
- Line 11: The comment notes there's no enforcement that the
.github/workflows/quality.yaml steps stay aligned with the "quality" npm script;
add an automated check job (e.g., "verify-quality-sync") to that workflow which
runs at PR/CI time and programmatically compares package.json's
"scripts.quality" content to the workflow's job/step list (implement as a small
Node or shell script invoked by the job that fails the build on mismatch), or
alternatively add a PR template checklist entry to remind authors to update
both; reference the workflow name/anchor in .github/workflows/quality.yaml and
the "quality" npm script in package.json when implementing the check.
In `@AGENTS.md`:
- Line 7: Update the "Use Conventional Commits for every commit." rule in
AGENTS.md to be actionable by adding a link to the Conventional Commits spec and
enumerating the allowed commit types and scope convention used in this repo
(e.g., list common types like feat, fix, docs, chore, refactor, test, perf and
the expected scope format such as <scope>: subject or module/component names),
and include an example commit message showing type(scope): short description and
a link to https://www.conventionalcommits.org/ for full details.
- Around line 8-9: Update AGENTS.md to note the behavioral difference between
the local "quality" script and CI: state that the documented command `npm ci &&
npm run quality` runs checks sequentially (stops at first failure) while the CI
defined in ".github/workflows/quality.yaml" runs checks in parallel jobs
(reports all failures at once), and add a brief recommendation (e.g., run the CI
workflow locally or add a script option that runs all checks and aggregates
failures) so developers understand the trade-off and how to reproduce CI
behavior locally.
In `@package.json`:
- Around line 42-44: The package.json defines two overlapping aggregate scripts
("quality" and "test") with different orders causing ambiguity; pick one
canonical workflow and have the other delegate to it to avoid divergence: update
the "test" script to simply run "npm run quality" (or conversely make "quality"
run "npm test") so both commands execute the exact same tasks and order, and
adjust AGENTS.md to reference the canonical script name ("quality" or "test")
accordingly; locate the scripts by their names "quality" and "test" in
package.json to make the change.
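The delegation option could be sketched as follows. Making `quality` canonical is one hypothetical choice; the chain shown is the order listed in the review comment above:

```json
{
  "scripts": {
    "quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run validate:schemas && npm run test:schemas && npm run test:install && npm run validate:refs",
    "test": "npm run quality"
  }
}
```

With `test` as a thin wrapper, `npm test` keeps working as the conventional entry point while only one task list has to be maintained.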
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d694604d-ea81-4440-85e0-13ed6cc488ea
📒 Files selected for processing (3)
.github/workflows/quality.yaml
AGENTS.md
package.json
a425de8 to 9f81bb3
* chore: add project AGENTS and quality command
* chore: remove stale bundle validation note
Summary
Testing