This repository enforces atomic Markdown rules against course stage descriptions (e.g. `stage_descriptions/**/*.md`). It can be run:
- Locally: for quick checks while editing stage descriptions.
- In CI/CD: where it integrates with other repos to lint only changed files in pull requests.
## Features

- Atomic Markdown rules: each rule lives in `rules/*.md` with front-matter and examples.
- LLM evaluation: rules are checked by OpenAI models (default: `gpt-5`).
- Deterministic output: the runner enforces a strict JSON schema for each rule result (see the sketch after this list).
- CI integration: checks only changed `stage_descriptions/**/*.md` files in PRs.
- PR feedback: posts a sticky comment with pass/fail results and suggested fixes.
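The exact schema is defined by the runner; the sketch below is only illustrative, and every field name in it is an assumption rather than the repository's actual schema:

```json
{
  "rule": "no-passive-voice",
  "file": "stage_descriptions/02-blpop-timeout.md",
  "status": "fail",
  "rationale": "The second paragraph uses passive voice.",
  "suggestedFix": "Rewrite 'a timeout is passed' as 'the client passes a timeout'."
}
```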
## Setup

Install dependencies:

```sh
bun install
```

Create a set of stage descriptions to lint in `stage_descriptions/`, then run:

```sh
bun run dev
```

You can also lint specific files:

```sh
bun run dev path/to/file1.md path/to/file2.md
```

## Environment variables

- `LLM_RULE_EVALUATOR_OPENAI_API_KEY` (required)
- `MODEL` (optional; overrides the default model, `gpt-5`)
- `REPORT_PATH` (optional; write the JSON report to this path)
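The API key must be set in the environment before any run; a minimal sketch, with a placeholder key value:

```sh
export LLM_RULE_EVALUATOR_OPENAI_API_KEY="sk-placeholder"
```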
Example:
```sh
MODEL=gpt-5 bun run dev stage_descriptions/02-blpop-timeout.md
```

## Adding rules

- Add new rules in `rules/`, each self-contained and unambiguous.
- Include "Good" and "Bad" examples and a "How to fix" section.
- Run locally before pushing:

  ```sh
  bun run dev stage_descriptions/example.md
  ```
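For illustration, a rule file might look like the sketch below; the front-matter keys (`id`, `severity`) and the rule itself are hypothetical, not taken from the repository:

```md
---
id: no-passive-voice
severity: error
---

# No passive voice

Stage descriptions must name the actor; do not use passive voice.

## Good

The tester sends a `PING` command to your server.

## Bad

A `PING` command is sent to your server.

## How to fix

Rewrite each passive sentence so that the actor is the subject.
```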
## CLI options

```
--only <glob>               limit scanned files, example: "stage_descriptions/**/base-*.md"
--model <name>              default: gpt-5
--report <path>             default: reports/lint.json
--no-report                 skip writing the JSON summary
--format <md|html|pdf|all>  default: md
--out <path>                base path for pretty reports, default: reports/lint
--show-pass-details         include rationale for passed rules in console and reports
--include-source            embed source text in pretty reports
--expand-source             expand embedded source by default in HTML
```
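For example, a run combining several of these flags (each flag is documented above; the combination itself is just a sketch):

```sh
bun run dev --only "stage_descriptions/**/base-*.md" --format html --out reports/lint --include-source
```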