Conversation
- Configure Vitest to output lcov coverage to `coverage/unit/`
- Add vite-plugin-istanbul for Playwright CT code instrumentation (behind `COVERAGE` env var)
- Create coverage collection fixture (`tests/utils/coverage.ts`) that extracts `window.__coverage__` from Playwright pages and writes to `.nyc_output`
- Add nyc for merging Istanbul coverage data into lcov reports
- Add coverage scripts: `test:coverage:unit`, `test:coverage:ct`, `test:coverage:all`
- Update CI workflow to run tests with coverage and upload to Codecov with separate flags (`frontend-unit`, `frontend-component`)
- Add codecov.yml with flag configuration and carryforward enabled

https://claude.ai/code/session_01MvrkzFX6F21ZtGwSsiF6vq
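The write path of a fixture like the one described above can be sketched roughly as follows. This is a hedged sketch, not the PR's actual code: `saveCoverage` and `COVERAGE_DIR` are illustrative names, and the real fixture additionally extends Playwright's `page` fixture and reads `window.__coverage__` via `page.evaluate()` during teardown.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as crypto from "node:crypto";

// Illustrative names; the real fixture lives in tests/utils/coverage.ts.
const COVERAGE_DIR = path.join(process.cwd(), ".nyc_output");

// Persist one page's Istanbul coverage map to disk. UUID-based filenames
// avoid collisions when parallel Playwright workers write concurrently.
function saveCoverage(coverage: Record<string, unknown> | undefined): string | null {
  if (!coverage) return null; // page was never instrumented (COVERAGE off)
  fs.mkdirSync(COVERAGE_DIR, { recursive: true });
  const filePath = path.join(COVERAGE_DIR, `coverage-${crypto.randomUUID()}.json`);
  fs.writeFileSync(filePath, JSON.stringify(coverage));
  return filePath;
}
```

Writing one JSON file per page into `.nyc_output` is what lets `nyc report` later aggregate all workers' data into a single LCOV report.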
Replace all `@playwright/experimental-ct-react` imports in 137 test files with the `./utils/coverage` fixture. This enables automatic Istanbul coverage collection across all component tests when COVERAGE=true, with zero overhead when disabled. Also update coverage fixture to re-export MountResult type for tests that depend on it. https://claude.ai/code/session_01MvrkzFX6F21ZtGwSsiF6vq
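A bulk rewrite across that many files can be approximated with a grep/sed one-liner. This is a sketch only: it assumes GNU sed and spec files living directly under `tests/` — specs in nested directories would need a relative path such as `../utils/coverage` instead, so a naive global replace may not match what the PR actually did.

```shell
# Approximate codemod: swap the Playwright CT import for the coverage fixture.
# Assumes GNU sed and a flat tests/ layout (nested specs need ../utils/coverage).
grep -rl '@playwright/experimental-ct-react' tests --include='*.spec.tsx' \
  | xargs -r sed -i 's#@playwright/experimental-ct-react#./utils/coverage#g'
```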
PR Review: Add code coverage reporting for frontend tests

Overall this is a clean, well-structured approach to adding coverage collection. The architecture is sound - Istanbul instrumentation for Playwright CT, v8 for Vitest, gated behind a `COVERAGE` env var.

Critical: Existing CT test files do not import from the new fixture

The coverage fixture in `tests/utils/coverage.ts` is never used by the existing tests:

```typescript
// Existing test files still use:
import { test, expect } from "@playwright/experimental-ct-react";
// They need to use:
import { test, expect } from "./utils/coverage";
```

This migration either needs to happen in this PR, or the CI step should not upload coverage until the migration is complete - otherwise Codecov will show misleadingly low component-test numbers.

Issues
The merge script copies two separate lcov files into a directory rather than merging them:

```json
"test:coverage:merge": "mkdir -p coverage/merged && cp coverage/unit/lcov.info coverage/merged/lcov-unit.info && cp coverage/ct/lcov.info coverage/merged/lcov-ct.info"
```

Codecov handles merging by flag server-side, so this works functionally - but the script name is misleading. Consider renaming to `test:coverage:collect`.

`forceBuildInstrument: true` instruments code even during production builds if `COVERAGE` is set, so make sure production builds never run with that variable in the environment.

Minor / Suggestions

80% patch target may block early PRs:

```yaml
patch:
  default:
    target: 80%
```

With no established baseline, consider `target: auto` on patches until you have baseline metrics.

Both flags point to the same frontend source paths, so Codecov distinguishes them only by flag name.

What is Good
- Delete duplicate codecov.yml that would have overridden the existing .codecov.yml (Codecov reads codecov.yml first), breaking backend config
- Add frontend-unit, frontend-component, and backend flags to .codecov.yml alongside existing backend settings (precision, range, ignore patterns)
- Add frontend test file ignores to .codecov.yml
- Upgrade frontend codecov-action from v5 to v6 to match backend workflow
- Pass token via `token:` param (v6 style) instead of env var

https://claude.ai/code/session_01MvrkzFX6F21ZtGwSsiF6vq
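For reference, a flag section like the one described above plausibly resembles this sketch. Only the flag names and the carryforward setting come from this PR; the `paths` values and exact layout are assumptions following Codecov's YAML reference.

```yaml
# Sketch only - the actual .codecov.yml in the PR may differ.
flags:
  frontend-unit:
    paths:
      - frontend/src
    carryforward: true
  frontend-component:
    paths:
      - frontend/src
    carryforward: true
```

`carryforward: true` makes Codecov reuse the last uploaded report for a flag when a run does not upload that flag, which prevents spurious coverage drops when only one suite runs.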
Code Review: Add code coverage reporting for frontend tests

Overall this is a well-structured implementation. The opt-in design avoids runtime overhead in normal test runs and the coverage fixture design is clean. A few issues worth addressing before merge.

Issue 1 - test:coverage:unit duplicates test:coverage (DRY violation)

These are identical commands in package.json: both run `vitest run --coverage --watch=false`. The old `test:coverage` should be removed or aliased. Having both is dead code per CLAUDE.md.

Issue 2 - test:coverage:merge does not actually merge coverage

The script copies two LCOV files into a directory but does not combine them. True merging requires `lcov --add-tracefile` or istanbul-merge. The name implies unified coverage, which is misleading. Either fix the implementation or rename to `test:coverage:collect`.

Issue 3 - Yarn cache still not re-enabled

The PR removes comments that said "Temporarily disable cache to debug phantom dependency issue" without re-enabling caching or explaining why. If the issue is resolved, re-enable caching for better CI performance. If not, deleting the comments loses that context.

Issue 4 - Coverage upload silently skipped when CT tests fail

`test:coverage:ct` uses `&&`, so if any test fails, `nyc report` will not run and `coverage/ct/lcov.info` will not exist. The CI upload step has `if: always()`, so Codecov will try to upload a missing file. With `fail_ci_if_error: false` this will not block CI, but Codecov sees no data for that run. Consider using `;` instead of `&&` before `nyc report` if coverage from failing runs is desired.

Minor observations:

- `requireEnv: true` is redundant since the plugin is only conditionally included when `process.env.COVERAGE` is already truthy. Harmless but unnecessary.
- The 80 percent patch target may be aggressive for a first run with no established baseline. Consider `target: auto` on patches until you have baseline metrics.
- .gitignore already covers `coverage/`, so no changes are needed there.
Coverage fixture design is correct: extending the page fixture for teardown collection is the right approach. forceBuildInstrument: true is required since Playwright CT runs in build mode. UUID-based filenames prevent collisions across parallel workers.
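If a true merge is ever wanted instead of the copy approach, one option is the `lcov` CLI's `--add-tracefile` mode (this assumes `lcov` is installed in CI; the script body below is a sketch, not the PR's code):

```json
{
  "scripts": {
    "test:coverage:merge": "mkdir -p coverage/merged && lcov --add-tracefile coverage/unit/lcov.info -a coverage/ct/lcov.info -o coverage/merged/lcov.info"
  }
}
```

Since Codecov already merges per-flag uploads server-side, this is only needed if a single local LCOV file is required.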
Code Review

Overall, this is a well-structured addition of frontend coverage infrastructure. The opt-in design keeps normal test runs fast.

Issues

1. Duplicate coverage scripts

`test:coverage` and `test:coverage:unit` are exactly the same command. The old `test:coverage` should be removed or turned into an alias.

2. test:coverage:merge does not merge

This just copies two separate LCOV files into a folder — it does not produce a merged single report. A proper merge requires `lcov --add-tracefile` or istanbul-merge.

3. Orphaned backend flag

A backend flag is defined in the Codecov config, but neither this PR nor the existing backend CI workflow uploads coverage with that flag.

4. Metadata tests run without coverage after the main coverage report is generated

The Metadata tests run after the nyc report has already been generated, so their coverage is excluded from the LCOV upload. Probably acceptable given their scope, but worth a comment in CI.

Minor Notes

5. The PR does not include a .gitignore entry for the new coverage output directories; confirm they are already covered.

6. `requireEnv: true` on the Istanbul plugin is redundant, since the plugin is only loaded when `COVERAGE` is already set.

What's Good
Codecov Report

✅ All modified and coverable lines are covered by tests.

📢 Thoughts on this report? Let us know!
CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

OpenContracts is an AGPL-3.0 enterprise document analytics platform for PDFs and text-based formats. It features a Django/GraphQL backend with PostgreSQL + pgvector, a React/TypeScript frontend with Jotai state management, and pluggable document processing pipelines powered by machine learning models.

Baseline Commit Rules
Essential Commands

Backend (Django)

```bash
# Run backend tests (sequential, use --keepdb to speed up subsequent runs)
docker compose -f test.yml run django python manage.py test --keepdb

# Run backend tests in PARALLEL (recommended - ~4x faster)
# Uses pytest-xdist with 4 workers, --dist loadscope keeps class tests together
docker compose -f test.yml run django pytest -n 4 --dist loadscope

# Run parallel tests with auto-detected worker count (uses all CPU cores)
docker compose -f test.yml run django pytest -n auto --dist loadscope

# Run parallel tests with fresh databases (first run or after schema changes)
docker compose -f test.yml run django pytest -n 4 --dist loadscope --create-db

# Run specific test file
docker compose -f test.yml run django python manage.py test opencontractserver.tests.test_notifications --keepdb

# Run specific test file in parallel
docker compose -f test.yml run django pytest opencontractserver/tests/test_notifications.py -n 4 --dist loadscope

# Run specific test class/method
docker compose -f test.yml run django python manage.py test opencontractserver.tests.test_notifications.TestNotificationModel.test_create_notification --keepdb

# Apply database migrations
docker compose -f local.yml run django python manage.py migrate

# Create new migration
docker compose -f local.yml run django python manage.py makemigrations

# Django shell
docker compose -f local.yml run django python manage.py shell

# Code quality (runs automatically via pre-commit hooks)
pre-commit run --all-files
```

Frontend (React/TypeScript)

```bash
cd frontend

# Start development server (proxies to Django on :8000)
yarn start

# Run unit tests (Vitest) - watches by default
yarn test:unit

# Run component tests (Playwright) - CRITICAL: Use --reporter=list to prevent hanging
yarn test:ct --reporter=list

# Run component tests with grep filter
yarn test:ct --reporter=list -g "test name pattern"

# Run E2E tests
yarn test:e2e

# Coverage report
yarn test:coverage

# Linting and formatting
yarn lint
yarn fix-styles

# Build for production
yarn build

# Preview production build locally
yarn serve
```

Production Deployment

```bash
# CRITICAL: Always run migrations FIRST in production
docker compose -f production.yml --profile migrate up migrate

# Then start main services
docker compose -f production.yml up
```

High-Level Architecture

Backend Architecture

Stack: Django 4.x + GraphQL (Graphene) + PostgreSQL + pgvector + Celery

Key Patterns:
Frontend Architecture

Stack: React 18 + TypeScript + Apollo Client + Jotai (atoms) + PDF.js + Vite

Key Patterns:

Data Flow Architecture

Document Processing:
GraphQL Permission Flow:
Critical Security Patterns
Critical Concepts
Testing Patterns

Manual Test Scripts

When performing manual testing (e.g., testing migrations, verifying database state, testing API endpoints interactively), always document the test steps in a markdown file under the designated test-scripts location.

Format:

# Test: [Brief description]

## Purpose
What this test verifies.

## Prerequisites
- Required state (e.g., "migration at 0058")
- Required data (e.g., "at least one document exists")

## Steps
1. Step one with exact command

```bash
docker compose -f local.yml run --rm django python manage.py shell -c "..."
```

## Expected Results
## Cleanup
Commands to restore original state if needed.

Automated Documentation Screenshots

Screenshots for documentation are automatically captured during Playwright component tests and committed back to the PR branch by a dedicated CI workflow.

How it works:

Naming convention: at least 2 segments required, 3 recommended. All lowercase alphanumeric with single hyphens.

Example:

```typescript
import { docScreenshot } from "./utils/docScreenshot";

// After the component renders and assertions pass:
await docScreenshot(page, "badges--celebration-modal--auto-award");
```

Rules:

Release Screenshots (Point-in-Time)

For release notes, use releaseScreenshot:

```typescript
import { releaseScreenshot } from "./utils/docScreenshot";

await releaseScreenshot(page, "v3.0.0.b3", "landing-page", { fullPage: true });
```

Key differences from docScreenshot:

When to use which:
Authenticated Playwright Testing (Live Frontend Debugging)

When you need to interact with the running frontend as an authenticated user (e.g., debugging why a query returns empty results), use Django admin session cookies to authenticate GraphQL requests.

Architecture context: The frontend uses Auth0 for authentication, but the Django backend also accepts session cookie auth. Apollo Client sends GraphQL requests directly to the Django backend on :8000.

Step 1: Set a password for the superuser (one-time setup):

```bash
docker compose -f local.yml exec django python manage.py shell -c "
from django.contrib.auth import get_user_model
User = get_user_model()
u = User.objects.filter(is_superuser=True).first()
u.set_password('testpass123')
u.save()
print(f'Password set for {u.username}')
"
```

Step 2: Playwright script pattern:

```javascript
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext();
  const page = await context.newPage();

  // Collect console messages for debugging
  const consoleMsgs = [];
  page.on('console', msg => consoleMsgs.push('[' + msg.type() + '] ' + msg.text()));

  // 1. Login to Django admin to get session cookie
  await page.goto('http://localhost:8000/admin/login/');
  await page.fill('#id_username', '<superuser-username>');
  await page.fill('#id_password', 'testpass123');
  await page.click('input[type=submit]');
  await page.waitForTimeout(2000);

  // 2. Extract the session cookie
  const cookies = await context.cookies();
  const sessionCookie = cookies.find(c => c.name === 'sessionid');

  // 3. Intercept GraphQL requests to inject the session cookie
  // (needed because Apollo sends cross-origin requests to :8000)
  await page.route('**/graphql/**', async (route) => {
    const headers = {
      ...route.request().headers(),
      'Cookie': 'sessionid=' + sessionCookie.value,
    };
    await route.continue({ headers });
  });

  // 4. Navigate to the frontend page under test
  await page.goto('http://localhost:5173/extracts');
  await page.waitForTimeout(5000);

  // 5. Inspect results
  const bodyText = await page.textContent('body');
  console.log(bodyText);

  await browser.close();
})();
```

Run from the frontend directory (where Playwright is installed):

```bash
cd frontend && node /path/to/script.js
```

Key details:
Alternative — create a session programmatically (no admin login needed): docker compose -f local.yml exec django python manage.py shell -c "
from django.contrib.sessions.backends.db import SessionStore
from django.contrib.auth import get_user_model
User = get_user_model()
user = User.objects.filter(is_superuser=True).first()
session = SessionStore()
session['_auth_user_id'] = str(user.pk)
session['_auth_user_backend'] = 'django.contrib.auth.backends.ModelBackend'
session['_auth_user_hash'] = user.get_session_auth_hash()
session.save()
print(f'Session key: {session.session_key}')
"Then use the printed session key directly in curl or Playwright route interception. Documentation Locations
Branch Strategy

This project follows trunk-based development:

Changelog Maintenance

IMPORTANT: Always update the changelog when making changes.

The changelog follows Keep a Changelog format:

```markdown
## [Unreleased] - YYYY-MM-DD

### Added
- New features

### Fixed
- Bug fixes with file locations and line numbers

### Changed
- Changes to existing functionality

### Technical Details
- Implementation specifics, architectural notes
```

When to update:

What to include:

Pre-commit Hooks

Automatically run on commit:

Run manually: `pre-commit run --all-files`

Common Pitfalls
Code Review: Frontend Coverage Reporting

This is a well-structured PR that adds a genuinely useful capability. The implementation is clean overall - a few items worth addressing before merge.

Issues

1. Silent failure in coverage fixture - no error handling around page.evaluate()

File: tests/utils/coverage.ts

If the page context has been closed, navigated, or crashed, page.evaluate() will throw during teardown and fail the test. Wrap the collection in a try/catch:

```typescript
try {
  const coverage = await page.evaluate(() => (window as any).__coverage__);
  if (coverage) {
    fs.mkdirSync(COVERAGE_DIR, { recursive: true });
    const filePath = path.join(COVERAGE_DIR, `coverage-${crypto.randomUUID()}.json`);
    fs.writeFileSync(filePath, JSON.stringify(coverage));
  }
} catch {
  // Coverage collection is best-effort; do not fail the test
}
```

2. Misleading script name: test:coverage:merge doesn't actually merge

File: package.json

The script copies two LCOV files into a directory; it does not produce a merged report. Rename it or implement a real merge.

3. The backend flag in the Codecov config has no corresponding upload step.

Minor Notes

4. Both scripts run the exact same command (test:coverage and test:coverage:unit); remove one.

5. Yarn cache - consider re-enabling

The commented-out cache config was removed (correct - no dead code). But if the phantom dependency issue was resolved, re-enabling caching would speed up CI.

6. Coverage output directories likely need .gitignore entries.
What is Done Well
vite-plugin-istanbul v8 is ESM-only (exports.require is null), so the static top-level import fails when esbuild bundles vite.config.ts as CJS. Replace with a dynamic import() inside an async helper that short-circuits when COVERAGE is unset - this avoids the ESM resolution entirely for normal builds, unit tests, and Docker image builds.

Also addresses Claude review comments:
- Remove duplicate test:coverage script (identical to test:coverage:unit)
- Rename test:coverage:merge to test:coverage:collect (it copies files for Codecov, not a true merge)
- Remove orphaned backend flag from .codecov.yml (no upload step uses it)
…ov-reporting-DLWLL
- Wrap page.evaluate() in coverage fixture with try/catch so coverage collection failures don't poison test outcomes
- Rename test:coverage:collect to test:coverage:copy (it copies, not merges)
- Add test:coverage alias for backward compat with CLAUDE.md docs
Code Review: Add code coverage reporting for frontend tests

Overall this is a well-structured addition. The architecture is clean - the custom Playwright fixture is a proper drop-in replacement that stays transparent to test files, coverage collection is correctly opt-in, and the ESM dynamic import workaround for vite-plugin-istanbul is the right call given its ESM-only nature. A few things worth addressing before merge:

Issues

1. test:coverage:ct chains nyc report with &&, so a failing Playwright run skips report generation entirely.

2. The script copies two LCOV files into a directory without producing a single merged report.

3. The Istanbul plugin's requireEnv option reads Vite's env (VITE_COVERAGE), while CI only sets COVERAGE in the shell - this can silently disable instrumentation.

4. Confirm generated artifacts are gitignored. The coverage output directories must not end up in commits.

5. Codecov flags have no coverage targets. The new frontend flags upload data but define no per-flag targets, so a regression in one suite will not fail status checks.
What is Working Well
The vite-plugin-istanbul requireEnv option checked for VITE_COVERAGE in Vite's env system, but the CI script only sets COVERAGE=true in the shell. Since loadIstanbulPlugins() already gates plugin loading on process.env.COVERAGE, requireEnv is redundant and was silently disabling instrumentation. Also ensure the .nyc_output directory exists before running nyc report so the script doesn't fail when no coverage data has been written yet.
…ov-reporting-DLWLL
The test:coverage:ct script used && to chain Playwright and nyc report, so if Playwright exited non-zero (e.g., due to global timeout), the nyc report step was skipped entirely and no lcov.info was generated. Changed && to ; so nyc report always runs and generates coverage from whatever tests completed. Also increased global timeout from 15 to 25 minutes when COVERAGE=true since Istanbul instrumentation adds overhead.
- Remove duplicate test:coverage script (identical to test:coverage:unit)
- Add comment explaining why forceBuildInstrument is required (Playwright CT runs in build mode)
- Add comment noting Metadata tests run without coverage instrumentation
Code Review: Add code coverage reporting for frontend tests

Overall this is a clean, well-structured implementation. The design choices (opt-in via COVERAGE, dynamic import of the ESM-only plugin, best-effort fixture teardown) are sound. Remaining items:

Issues

1. Semicolon chaining hides test failures

```json
"test:coverage:ct": "COVERAGE=true playwright test ...; mkdir -p coverage/ct/.nyc_output && nyc report ..."
```

The `;` means the script's exit status comes from the nyc report step rather than from Playwright, so CI can go green even when component tests fail. Make sure a separate step still fails the job on test failures.

2. "nyc": "^18.0.0"

The pinned nyc major should be double-checked against the versions actually published on npm.

3. Metadata tests are excluded from coverage with no path to include them

The CI comment acknowledges this:

```
# Metadata tests run separately without coverage instrumentation.
# Their coverage is not included in the LCOV upload above.
```

This is fine for now, but consider whether those tests should eventually be folded into the instrumented run.

Minor observations

4. The old test:coverage script now exists only as an alias; keep CLAUDE.md in sync.

5. Copy script fails when an input is missing

```json
"test:coverage:copy": "mkdir -p coverage/merged && cp coverage/unit/lcov.info coverage/merged/lcov-unit.info && cp coverage/ct/lcov.info coverage/merged/lcov-ct.info"
```

If either lcov.info file is missing (e.g., a suite was skipped), cp fails and the whole script errors out.

6. Empty catch in coverage fixture

```typescript
} catch {
  // Coverage collection is best-effort; do not fail the test
}
```

Swallowing all errors makes debugging silent failures difficult. A console.warn would preserve visibility:

```typescript
} catch (err) {
  console.warn("[coverage] Failed to collect coverage data:", err);
}
```

Positives worth noting
Summary
This PR adds comprehensive code coverage collection and reporting for both unit and component tests in the frontend, with integration to Codecov for tracking coverage metrics over time.
Key Changes
Coverage Collection Infrastructure
- Add vite-plugin-istanbul for Playwright component tests
- Add a coverage fixture (frontend/tests/utils/coverage.ts) to collect browser-side coverage data
- Add nyc to generate LCOV reports from collected coverage data

CI/CD Pipeline Updates
- test:coverage:unit: Runs vitest with coverage
- test:coverage:ct: Runs Playwright CT with Istanbul instrumentation and generates LCOV report
- Uploads to Codecov with separate flags (frontend-unit and frontend-component) for granular tracking

Configuration Files
- codecov.yml with coverage targets (80% for patches, auto for projects) and flag definitions
- vite.config.ts to conditionally apply Istanbul instrumentation when COVERAGE=true
- vitest coverage configuration to use the v8 provider and output LCOV format to coverage/unit
- nyc configuration in package.json for coverage report generation

New Test Utilities
- Coverage fixture extracts the __coverage__ object from browser context after each test
- Gated behind the COVERAGE environment variable to avoid performance impact during normal test runs

Implementation Details
The component test coverage collection works by:
1. Instrumenting source code when the COVERAGE=true environment variable is set
2. Extracting coverage data via page.evaluate() after each test
3. Writing the coverage JSON to the .nyc_output directory
4. Running the nyc CLI to generate LCOV reports from the collected data

https://claude.ai/code/session_01MvrkzFX6F21ZtGwSsiF6vq
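On the unit-test side, the Vitest coverage section described in the summary might look roughly like this. Option names follow Vitest's documented coverage configuration; the provider and output directory come from this PR, while the reporter list is an assumption.

```typescript
// Sketch of the Vitest coverage config described above (not the PR's exact file).
const coverage = {
  provider: "v8" as const,      // native V8 coverage - no instrumentation pass needed
  reporter: ["lcov", "text"],   // lcov for Codecov upload; text reporter is assumed
  reportsDirectory: "coverage/unit",
};
```

In a real `vite.config.ts` this object would sit under `test.coverage`, which is why unit coverage needs no Istanbul instrumentation at all.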