Effect cluster deployment on staging #1165
Conversation
Walkthrough
Adds a new apps/web-cluster service (Docker build/runtime, ECS metadata, health server, runner, shard manager), refactors tracing and RPC wiring in the web app, consolidates domain errors and the Loom import workflow, removes the legacy workflow apps, updates infra (SST/Pulumi, Vercel OIDC, ECS services), and adds tsdown build/publish configs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant U as User
    participant Next as Next.js API Route
    participant BE as Web Backend (HttpLive)
    participant RPC as Workflow RPC Client
    participant Run as Web-Cluster Runner (RPC Server)
    participant DB as MySQL
    participant S3 as S3/CloudFront
    U->>Next: HTTP request (e.g., POST /api/loom/video)
    Next->>BE: apiToHandler(HttpLive)
    BE->>RPC: Invoke Workflows.ImportVideo
    RPC->>Run: RPC call (HTTP :42169)
    Run->>DB: Create video record / store metadata
    Run-->>RPC: { videoId }
    RPC-->>BE: { videoId }
    BE-->>Next: HTTP 200 { videoId }
    Next-->>U: Response
    note over BE,S3: Signed URLs/credentials resolved via S3Buckets (bucketUrl or CAP creds)
```
```mermaid
sequenceDiagram
    autonumber
    participant ECS as ECS Task
    participant Meta as ECS Metadata URI
    participant Config as Config
    participant HealthSrv as Health Server
    participant Check as post-deploy/health-check
    ECS->>Meta: GET /task (if ECS_CONTAINER_METADATA_URI_V4 set)
    Meta-->>ECS: Task JSON (contains IPv4)
    ECS->>Config: Read PORT (default 42069)
    ECS->>HealthSrv: Start server on PORT
    Check->>HealthSrv: GET /health
    HealthSrv-->>Check: 200 "ok"
```
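The health-check handshake in this diagram can be sketched with plain `node:http` standing in for the real `@effect/platform` server; the port default and the `/health` route come from the diagram, everything else below is illustrative:

```typescript
import http from "node:http";
import { once } from "node:events";

// Minimal stand-in for the web-cluster health server: respond 200 "ok" on /health.
// Listening on port 0 picks an ephemeral port so the sketch is self-contained.
async function startHealthServer(): Promise<{ port: number; close: () => void }> {
  const server = http.createServer((req, res) => {
    if (req.url === "/health") {
      res.writeHead(200, { "content-type": "text/plain" });
      res.end("ok");
    } else {
      res.writeHead(404);
      res.end();
    }
  });
  server.listen(0);
  await once(server, "listening");
  const address = server.address() as { port: number };
  return { port: address.port, close: () => server.close() };
}

// Mirror of the post-deploy/health-check step: GET /health and expect "ok".
async function checkHealth(port: number): Promise<string> {
  const res = await fetch(`http://127.0.0.1:${port}/health`);
  if (res.status !== 200) throw new Error(`unhealthy: ${res.status}`);
  return res.text();
}
```

In the real service the port would come from the PORT config (default 42069) rather than an ephemeral bind.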
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45–75 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/web-backend/src/S3Buckets/S3BucketAccess.ts (1)
185-189: Use PutObjectCommandInput for getPresignedPutUrl args
Replace `S3.PutObjectRequest` with `S3.PutObjectCommandInput` (omitting `"Key"` and `"Bucket"`) so the method signature aligns with what `new PutObjectCommand(...)` expects and avoids subtle Body/ACL typing gaps.
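For illustration, the suggested signature change looks roughly like this; the interface below is a pared-down structural stand-in for `PutObjectCommandInput` from `@aws-sdk/client-s3`, and the function body is a stub rather than real presigning:

```typescript
// Hypothetical subset of S3.PutObjectCommandInput, just enough to show the Omit.
interface PutObjectCommandInput {
  Bucket?: string;
  Key?: string;
  Body?: unknown;
  ContentType?: string;
  ACL?: string;
}

// Callers may pass any PutObjectCommand option except Bucket/Key,
// which the bucket-access layer supplies itself.
type PresignedPutArgs = Omit<PutObjectCommandInput, "Bucket" | "Key">;

function getPresignedPutUrl(key: string, args: PresignedPutArgs = {}): string {
  const input: PutObjectCommandInput = { ...args, Bucket: "example-bucket", Key: key };
  // Real code would presign `new PutObjectCommand(input)`; a stub URL stands in here.
  return `https://example-bucket.s3.amazonaws.com/${input.Key}`;
}
```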
🧹 Nitpick comments (6)
packages/web-backend/src/Spaces/SpacesRepo.ts (1)
13-18: Prefer `Array.get(0)` for consistency with existing methods.
The implementation is correct, but other methods in this file use `Array.get(0)` (lines 36, 53), which is the more idiomatic Effect pattern and maintains consistency. Apply this diff to align with the established pattern:
```diff
 getById: (spaceId: string) =>
   db
     .execute((db) =>
       db.select().from(Db.spaces).where(Dz.eq(Db.spaces.id, spaceId)),
     )
-    .pipe(Effect.map((r) => Option.fromNullable(r[0]))),
+    .pipe(Effect.map(Array.get(0))),
```

apps/web-cluster/package.json (2)
5-5: Dev script uses Deno with permissive flags.
The `dev` script runs Deno with `--allow-all`, which grants all permissions. While acceptable for local development, ensure this permissive mode is documented and understood by the team. For production builds, the Dockerfile should use stricter permission flags based on actual requirements.
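For the production image, a narrower permission set might look like the following; the specific flags here are assumptions to be adjusted to what the runner actually touches:

```shell
# Hypothetical tightened invocation — grant only network, env, and
# read access instead of --allow-all.
deno run \
  --allow-net \
  --allow-env \
  --allow-read=. \
  src/runner/index.ts
```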
23-25: Multiple build tools declared—clarify usage.
The package includes three build tools: `rolldown`, `tsdown`, and `tsup`. While each may serve different purposes (bundling, TypeScript building, and packaging), consider documenting which tool is used for what purpose to avoid confusion.
packages/web-domain/scripts/generate-openapi.ts (1)
12-14: Consider validating the output file path.
The script accepts an arbitrary file path from `process.argv[2]` without validation. Consider validating that the path is within an expected directory or has an allowed extension. Example validation:
```typescript
const outputPath = process.argv[2] ?? "openapi.json";
if (!outputPath.endsWith(".json")) {
  throw new Error("Output file must have .json extension");
}
```

apps/web-cluster/src/runner/health-server.ts (1)
6-20: Remove inline comments to follow repo guidelines
Repository guidance forbids inline/block comments in TypeScript modules; this file adds several `//` comment lines. Please drop the explanatory comments and let the code speak for itself to stay consistent with the shared style.

```diff
-// Define the router with a single route for the root URL
 const router = HttpRouter.empty.pipe(
   HttpRouter.get("/health", HttpServerResponse.text("ok")),
 );

-// Set up the application server
 const app = router.pipe(HttpServer.serve());

-// Specify the port
 const port = process.env.HEALTH_CHECK_PORT
   ? parseInt(process.env.HEALTH_CHECK_PORT)
   : 3000;

-// Create a server layer with the specified port
 const ServerLive = NodeHttpServer.layer(() => createServer(), { port });
```

packages/web-domain/src/Loom.ts (1)
67-90: Consider removing `orgId` duplication in the HTTP payload.
The endpoint payload includes `orgId` in both `cap` (line 72) and `loom` (line 73, via `ImportVideoLoomData`). If `cap.orgId` is derived from authenticated user context (via `HttpAuthMiddleware`), consider removing it from the `loom` payload to eliminate redundancy and potential inconsistencies. Alternatively, add validation to ensure `cap.orgId === loom.orgId` before processing the request.
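The validation variant can be sketched as follows; the types are hypothetical stand-ins for the real payload schemas in `Loom.ts`:

```typescript
// Hypothetical shapes standing in for the cap/loom payload schemas.
interface CapPayload { orgId: string }
interface LoomPayload { orgId: string }

// Reject the request early when the authenticated org and the Loom
// payload disagree, instead of silently trusting either copy.
function assertOrgIdsMatch(cap: CapPayload, loom: LoomPayload): void {
  if (cap.orgId !== loom.orgId) {
    throw new Error(`orgId mismatch: cap=${cap.orgId} loom=${loom.orgId}`);
  }
}
```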
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (67)
- apps/web-cluster/Dockerfile (1 hunks)
- apps/web-cluster/package.json (1 hunks)
- apps/web-cluster/scripts/post-deploy.ts (1 hunks)
- apps/web-cluster/src/cluster/container-metadata.ts (1 hunks)
- apps/web-cluster/src/health-check.ts (1 hunks)
- apps/web-cluster/src/runner/health-server.ts (1 hunks)
- apps/web-cluster/src/runner/index.ts (1 hunks)
- apps/web-cluster/src/shard-manager.ts (1 hunks)
- apps/web-cluster/src/shared/database.ts (1 hunks)
- apps/web-cluster/tsconfig.json (1 hunks)
- apps/web/app/api/[[...route]]/route.ts (1 hunks)
- apps/web/app/layout.tsx (0 hunks)
- apps/web/instrumentation.node.ts (0 hunks)
- apps/web/instrumentation.ts (1 hunks)
- apps/web/lib/EffectRuntime.ts (1 hunks)
- apps/web/lib/server.ts (3 hunks)
- apps/web/lib/tracing-server.ts (0 hunks)
- apps/web/lib/tracing.ts (1 hunks)
- apps/web/next.config.mjs (2 hunks)
- apps/web/package.json (2 hunks)
- apps/workflow-manager/package.json (0 hunks)
- apps/workflow-manager/src/index.ts (0 hunks)
- apps/workflow-runner/package.json (0 hunks)
- apps/workflow-runner/src/index.ts (0 hunks)
- infra/package.json (1 hunks)
- infra/sst-env.d.ts (1 hunks)
- infra/sst.config.ts (6 hunks)
- package.json (1 hunks)
- packages/config/base.tsconfig.json (1 hunks)
- packages/database/package.json (2 hunks)
- packages/database/tsdown.config.ts (1 hunks)
- packages/database/types/index.ts (1 hunks)
- packages/env/build.ts (1 hunks)
- packages/env/package.json (1 hunks)
- packages/env/tsdown.config.ts (1 hunks)
- packages/utils/package.json (1 hunks)
- packages/utils/src/constants/s3.ts (0 hunks)
- packages/utils/src/index.ts (0 hunks)
- packages/utils/tsdown.config.ts (1 hunks)
- packages/web-backend/package.json (1 hunks)
- packages/web-backend/src/Auth.ts (1 hunks)
- packages/web-backend/src/Database.ts (1 hunks)
- packages/web-backend/src/Folders/index.ts (1 hunks)
- packages/web-backend/src/Http/Errors.ts (1 hunks)
- packages/web-backend/src/Http/Live.ts (1 hunks)
- packages/web-backend/src/Loom/Http.ts (1 hunks)
- packages/web-backend/src/Loom/ImportVideo.ts (5 hunks)
- packages/web-backend/src/Loom/index.ts (1 hunks)
- packages/web-backend/src/S3Buckets/S3BucketAccess.ts (2 hunks)
- packages/web-backend/src/S3Buckets/index.ts (4 hunks)
- packages/web-backend/src/Spaces/SpacesRepo.ts (2 hunks)
- packages/web-backend/src/Workflows.ts (1 hunks)
- packages/web-backend/src/index.ts (1 hunks)
- packages/web-backend/tsdown.config.ts (1 hunks)
- packages/web-domain/package.json (1 hunks)
- packages/web-domain/scripts/generate-openapi.ts (1 hunks)
- packages/web-domain/src/Database.ts (1 hunks)
- packages/web-domain/src/Http/Api.ts (1 hunks)
- packages/web-domain/src/Http/Errors.ts (1 hunks)
- packages/web-domain/src/Http/index.ts (1 hunks)
- packages/web-domain/src/Loom.ts (1 hunks)
- packages/web-domain/src/Policy.ts (2 hunks)
- packages/web-domain/src/S3Bucket.ts (1 hunks)
- packages/web-domain/src/Video.ts (2 hunks)
- packages/web-domain/src/index.ts (1 hunks)
- packages/web-domain/tsconfig.json (1 hunks)
- packages/web-domain/tsdown.config.ts (1 hunks)
💤 Files with no reviewable changes (9)
- packages/utils/src/index.ts
- apps/web/lib/tracing-server.ts
- apps/web/instrumentation.node.ts
- apps/web/app/layout.tsx
- packages/utils/src/constants/s3.ts
- apps/workflow-runner/package.json
- apps/workflow-manager/package.json
- apps/workflow-manager/src/index.ts
- apps/workflow-runner/src/index.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (CLAUDE.md)
Do not add inline, block, or docstring comments in any language; code must be self-explanatory
Files:
packages/web-domain/src/S3Bucket.ts, apps/web-cluster/src/cluster/container-metadata.ts, packages/web-domain/src/Http/Errors.ts, packages/web-domain/src/Http/Api.ts, packages/web-domain/src/index.ts, packages/database/tsdown.config.ts, apps/web/instrumentation.ts, packages/web-backend/src/index.ts, infra/sst.config.ts, packages/web-domain/src/Database.ts, packages/web-backend/src/Loom/index.ts, apps/web/lib/tracing.ts, packages/web-backend/tsdown.config.ts, infra/sst-env.d.ts, packages/web-domain/src/Http/index.ts, packages/env/tsdown.config.ts, packages/web-backend/src/Http/Errors.ts, packages/web-backend/src/Http/Live.ts, apps/web-cluster/scripts/post-deploy.ts, packages/web-backend/src/Workflows.ts, packages/database/types/index.ts, apps/web/lib/server.ts, packages/env/build.ts, packages/utils/tsdown.config.ts, apps/web-cluster/src/runner/health-server.ts, packages/web-domain/src/Policy.ts, packages/web-backend/src/Auth.ts, packages/web-backend/src/S3Buckets/index.ts, apps/web/lib/EffectRuntime.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-backend/src/Folders/index.ts, apps/web-cluster/src/health-check.ts, packages/web-domain/scripts/generate-openapi.ts, packages/web-backend/src/Spaces/SpacesRepo.ts, apps/web-cluster/src/shared/database.ts, apps/web-cluster/src/shard-manager.ts, packages/web-backend/src/Loom/Http.ts, apps/web-cluster/src/runner/index.ts, packages/web-domain/tsdown.config.ts, apps/web/app/api/[[...route]]/route.ts, packages/web-domain/src/Video.ts, packages/web-domain/src/Loom.ts, packages/web-backend/src/S3Buckets/S3BucketAccess.ts, packages/web-backend/src/Database.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; leverage shared types from packages
**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running `pnpm format`.
Files:
packages/web-domain/src/S3Bucket.ts, apps/web-cluster/src/cluster/container-metadata.ts, packages/web-domain/src/Http/Errors.ts, packages/web-domain/src/Http/Api.ts, packages/web-domain/src/index.ts, packages/database/tsdown.config.ts, apps/web/instrumentation.ts, packages/web-backend/src/index.ts, infra/sst.config.ts, packages/web-domain/src/Database.ts, packages/web-backend/src/Loom/index.ts, apps/web/lib/tracing.ts, packages/web-backend/tsdown.config.ts, infra/sst-env.d.ts, packages/web-domain/src/Http/index.ts, packages/env/tsdown.config.ts, packages/web-backend/src/Http/Errors.ts, packages/web-backend/src/Http/Live.ts, apps/web-cluster/scripts/post-deploy.ts, packages/web-backend/src/Workflows.ts, packages/database/types/index.ts, apps/web/lib/server.ts, packages/env/build.ts, packages/utils/tsdown.config.ts, apps/web-cluster/src/runner/health-server.ts, packages/web-domain/src/Policy.ts, packages/web-backend/src/Auth.ts, packages/web-backend/src/S3Buckets/index.ts, apps/web/lib/EffectRuntime.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-backend/src/Folders/index.ts, apps/web-cluster/src/health-check.ts, packages/web-domain/scripts/generate-openapi.ts, packages/web-backend/src/Spaces/SpacesRepo.ts, apps/web-cluster/src/shared/database.ts, apps/web-cluster/src/shard-manager.ts, packages/web-backend/src/Loom/Http.ts, apps/web-cluster/src/runner/index.ts, packages/web-domain/tsdown.config.ts, apps/web/app/api/[[...route]]/route.ts, packages/web-domain/src/Video.ts, packages/web-domain/src/Loom.ts, packages/web-backend/src/S3Buckets/S3BucketAccess.ts, packages/web-backend/src/Database.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., `user-menu.tsx`).
Use PascalCase for React/Solid components.
Files:
packages/web-domain/src/S3Bucket.ts, apps/web-cluster/src/cluster/container-metadata.ts, packages/web-domain/src/Http/Errors.ts, packages/web-domain/src/Http/Api.ts, packages/web-domain/src/index.ts, packages/database/tsdown.config.ts, apps/web/instrumentation.ts, packages/web-backend/src/index.ts, infra/sst.config.ts, packages/web-domain/src/Database.ts, packages/web-backend/src/Loom/index.ts, apps/web/lib/tracing.ts, packages/web-backend/tsdown.config.ts, infra/sst-env.d.ts, packages/web-domain/src/Http/index.ts, packages/env/tsdown.config.ts, packages/web-backend/src/Http/Errors.ts, packages/web-backend/src/Http/Live.ts, apps/web-cluster/scripts/post-deploy.ts, packages/web-backend/src/Workflows.ts, packages/database/types/index.ts, apps/web/lib/server.ts, packages/env/build.ts, packages/utils/tsdown.config.ts, apps/web-cluster/src/runner/health-server.ts, packages/web-domain/src/Policy.ts, packages/web-backend/src/Auth.ts, packages/web-backend/src/S3Buckets/index.ts, apps/web/lib/EffectRuntime.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-backend/src/Folders/index.ts, apps/web-cluster/src/health-check.ts, packages/web-domain/scripts/generate-openapi.ts, packages/web-backend/src/Spaces/SpacesRepo.ts, apps/web-cluster/src/shared/database.ts, apps/web-cluster/src/shard-manager.ts, packages/web-backend/src/Loom/Http.ts, apps/web-cluster/src/runner/index.ts, packages/web-domain/tsdown.config.ts, apps/web/app/api/[[...route]]/route.ts, packages/web-domain/src/Video.ts, packages/web-domain/src/Loom.ts, packages/web-backend/src/S3Buckets/S3BucketAccess.ts, packages/web-backend/src/Database.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and data fetching in the web app
Web mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData rather than broad invalidations
Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components
Files:
apps/web/instrumentation.ts, apps/web/lib/tracing.ts, apps/web/lib/server.ts, apps/web/lib/EffectRuntime.ts, apps/web/app/api/[[...route]]/route.ts
apps/web/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
On the client, always use `useEffectQuery` or `useEffectMutation` from `@/lib/EffectRuntime`; never call `EffectRuntime.run*` directly in components.
Files:
apps/web/instrumentation.ts, apps/web/lib/tracing.ts, apps/web/lib/server.ts, apps/web/lib/EffectRuntime.ts, apps/web/app/api/[[...route]]/route.ts
apps/web/app/api/**/route.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/app/api/**/route.ts: Place API routes only under apps/web/app/api and implement each route in a route.ts file
Construct API routes with @effect/platform HttpApi/HttpApiBuilder and export only the handler from apiToHandler(ApiLive)
Map domain errors to transport errors with HttpApiError.* and keep error translation exhaustive
Use HttpAuthMiddleware for required auth and provideOptionalAuth for guest routes; avoid duplicate session lookups
Provide dependencies via Layer.provide in API routes instead of manual provideService calls
Files:
apps/web/app/api/[[...route]]/route.ts
apps/web/app/**/*.{tsx,ts}
📄 CodeRabbit inference engine (CLAUDE.md)
Prefer Server Components for initial data in the Next.js App Router and pass initialData to client components
Files:
apps/web/app/api/[[...route]]/route.ts
🧠 Learnings (7)
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Map domain errors to transport errors with HttpApiError.* and keep error translation exhaustive
Applied to files:
packages/web-backend/src/Http/Errors.ts, apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Construct API routes with effect/platform HttpApi/HttpApiBuilder and export only the handler from apiToHandler(ApiLive)
Applied to files:
packages/web-backend/src/Http/Live.ts, apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Place API routes only under apps/web/app/api and implement each route in a route.ts file
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Provide dependencies via Layer.provide in API routes instead of manual provideService calls
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:19:56.010Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:56.010Z
Learning: Applies to apps/web/app/api/* : Convert effectful APIs to Next.js handlers using `apiToHandler(ApiLive)` and export only the returned `handler`; avoid calling `runPromise` directly in route files.
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:19:56.010Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:56.010Z
Learning: Applies to apps/web/app/api/* : API routes in Next.js (`apps/web/app/api/*`) must use `effect/platform`'s `HttpApi` builder and follow the existing class/group/endpoint pattern; avoid ad-hoc handlers.
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:19:56.010Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:56.010Z
Learning: Applies to apps/web/app/api/* : Acquire backend services inside `Effect.gen` and wire them using `Layer.provide`/`HttpApiBuilder.group`, translating errors to `HttpApiError` as appropriate.
Applied to files:
apps/web/app/api/[[...route]]/route.ts
🧬 Code graph analysis (16)
packages/web-domain/src/S3Bucket.ts (2)
  packages/web-backend/src/Workflows.ts (1)
    Workflows (9-9)
  packages/web-domain/src/index.ts (1)
    S3Error (11-11)
packages/web-domain/src/Http/Api.ts (1)
  packages/web-domain/src/Loom.ts (1)
    LoomHttpApi (67-90)
packages/web-domain/src/index.ts (1)
  packages/web-domain/src/S3Bucket.ts (1)
    S3Bucket (6-14)
apps/web/instrumentation.ts (1)
  apps/web/instrumentation.node.ts (1)
    register (16-41)
packages/web-backend/src/Http/Errors.ts (3)
  packages/web-domain/src/Database.ts (1)
    DatabaseError (3-6)
  packages/web-domain/src/S3Bucket.ts (1)
    S3Error (20-22)
  packages/web-domain/src/index.ts (1)
    S3Error (11-11)
packages/web-backend/src/Http/Live.ts (2)
  packages/web-backend/src/index.ts (1)
    HttpLive (4-4)
  packages/web-backend/src/Loom/Http.ts (1)
    LoomHttpLive (9-40)
apps/web-cluster/scripts/post-deploy.ts (1)
  apps/web/tools/compress-images.js (1)
    fs (12-12)
apps/web/lib/server.ts (2)
  packages/web-backend/src/Workflows.ts (2)
    Workflows (9-9)
    RpcClient (13-19)
  apps/web/lib/tracing.ts (1)
    layerTracer (5-32)
apps/web/lib/EffectRuntime.ts (1)
  apps/web/lib/Rpcs.ts (1)
    Rpc (11-14)
packages/web-backend/src/Loom/ImportVideo.ts (6)
  packages/web-backend/src/Videos/index.ts (1)
    Videos (11-171)
  packages/web-backend/src/S3Buckets/index.ts (1)
    S3Buckets (13-204)
  packages/web-backend/src/Workflows.ts (1)
    HttpClient (24-27)
  packages/web-domain/src/Loom.ts (2)
    VideoURLInvalidError (23-27)
    ExternalLoomError (17-21)
  packages/web-domain/src/Database.ts (1)
    DatabaseError (3-6)
  packages/web-domain/src/Video.ts (1)
    Video (14-59)
packages/web-domain/scripts/generate-openapi.ts (1)
  packages/web-domain/src/Http/Api.ts (1)
    ApiContract (4-12)
apps/web-cluster/src/shard-manager.ts (1)
  apps/web-cluster/src/shared/database.ts (2)
    ShardDatabaseLive (12-29)
    DatabaseLive (4-10)
packages/web-backend/src/Loom/Http.ts (4)
  packages/web-backend/src/Workflows.ts (1)
    Workflows (9-9)
  packages/web-domain/src/Authentication.ts (1)
    CurrentUser (7-10)
  packages/web-domain/src/Policy.ts (1)
    Policy (8-12)
  packages/web-backend/src/Http/Errors.ts (1)
    handleDomainError (4-14)
apps/web-cluster/src/runner/index.ts (4)
  apps/web-cluster/src/cluster/container-metadata.ts (1)
    ContainerMetadata (38-66)
  apps/web-cluster/src/shared/database.ts (2)
    ShardDatabaseLive (12-29)
    DatabaseLive (4-10)
  packages/web-backend/src/Rpcs.ts (1)
    RpcsLive (13-13)
  apps/web-cluster/src/runner/health-server.ts (1)
    HealthServerLive (22-22)
apps/web/app/api/[[...route]]/route.ts (3)
  apps/web/lib/server.ts (1)
    apiToHandler (117-136)
  packages/web-backend/src/Http/Live.ts (1)
    HttpLive (7-9)
  packages/web-backend/src/index.ts (1)
    HttpLive (4-4)
packages/web-domain/src/Loom.ts (6)
  packages/web-domain/src/Video.ts (1)
    Video (14-59)
  packages/web-domain/src/Database.ts (1)
    DatabaseError (3-6)
  packages/web-domain/src/S3Bucket.ts (1)
    S3Error (20-22)
  packages/web-domain/src/Http/Errors.ts (1)
    InternalServerError (4-8)
  packages/web-domain/src/Policy.ts (1)
    PolicyDeniedError (20-24)
  packages/web-domain/src/Authentication.ts (1)
    HttpAuthMiddleware (12-22)
🪛 Gitleaks (8.28.0)
infra/sst.config.ts
[high] 11-11: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (rust)
🔇 Additional comments (31)
infra/sst-env.d.ts (1)
7-53: Resource map looks consistent. AuroraDB, secret entries, services, and VPC metadata line up with the expected SST resource shapes, so the generated types remain coherent.
packages/web-backend/src/Spaces/SpacesRepo.ts (1)
4-4: LGTM! The `Option` import is correctly added to support the new `getById` method.
packages/web-domain/src/Policy.ts (2)
3-3: LGTM: Import properly added. The `HttpApiSchema` import from `@effect/platform` is correctly added and used on line 23.
20-24: HTTP endpoints correctly return 403 for PolicyDeniedError. All API groups include `.addError(PolicyDeniedError)`, and the error carries `HttpApiSchema.annotations({ status: 403 })`, ensuring HTTP 403 responses.
apps/web/next.config.mjs (2)
31-38: Clarify the inconsistent package optimization strategy.
There are inconsistencies in the package configuration:
- `@cap/web-api-contract` and `@cap/web-domain` remain in `transpilePackages` (lines 18-19) but are commented out in `optimizePackageImports` (lines 34-35).
- `@cap/env` is added to `transpilePackages` but not to `optimizePackageImports`, while `@cap/database` is added to both.
- `@cap/web-backend` appears commented out on line 36 but was never in the original list.

Please clarify:
- Is the exclusion from `optimizePackageImports` intentional for these packages? If so, consider adding inline comments explaining why certain packages need transpilation but not import optimization.
- Should `@cap/env` also be added to `optimizePackageImports` for consistency with `@cap/database`?
- Should the commented packages be removed if they're not needed, or uncommented if they should be included?
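One way to remove the inconsistency is to drive both settings from a single array. A sketch of the aligned shape, assuming every `@cap` package in question should appear in both lists:

```javascript
// Hypothetical aligned next.config.mjs fragment; the real file has more settings.
const capPackages = [
  "@cap/env",
  "@cap/database",
  "@cap/web-api-contract",
  "@cap/web-domain",
];

const nextConfig = {
  transpilePackages: capPackages,
  experimental: {
    optimizePackageImports: capPackages,
  },
};

export default nextConfig;
```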
15-23: Align optimizePackageImports with transpilePackages
- Add `@cap/env` and `@cap/database` to `optimizePackageImports` or document why they’re excluded.
- Uncomment or remove the commented entries (`@cap/web-api-contract`, `@cap/web-domain`, `@cap/web-backend`) per the intended optimization strategy.
packages/database/types/index.ts (1)
3-3: Type-only import retains module augmentations; change is safe
TypeScript’s `import type` still picks up and applies the ambient declarations in `next-auth.d.ts`, removing runtime overhead without breaking your next-auth type extensions.
7-12: LGTM: Build configuration aligns with monorepo patterns.The addition of
publishConfigandtsdownbuild script is consistent with the tsdown-based build strategy adopted across multiple packages in this PR. The dualmainsetup (source for development, dist for publishing) is appropriate.packages/web-domain/tsconfig.json (1)
4-4: LGTM: Includes scripts directory for type-checking.Adding
"scripts/*"to the include array appropriately enables TypeScript processing for the scripts directory introduced in this PR.packages/config/base.tsconfig.json (1)
18-18: LGTM: Incremental compilation improves build performance.Enabling incremental compilation is beneficial for build times, especially in a monorepo with multiple packages and projects.
packages/env/tsdown.config.ts (1)
1-8: LGTM: Standard tsdown configuration.The configuration follows the consistent tsdown pattern adopted across the monorepo. The
platform: "node"andtreeshake: truesettings are appropriate for a server-side package.Note:
dts: falsemeans type declarations won't be generated. If this package is consumed by other workspace packages that need type information, consider enabling declaration generation.packages/env/package.json (1)
7-12: LGTM: Consistent build and publish configuration.The build script and
publishConfigare consistent with the standardized tsdown-based build pattern used across the monorepo packages.packages/web-domain/src/Database.ts (1)
1-6: LGTM: Well-structured domain error type.The
DatabaseErrorclass follows the Effect-TS tagged error pattern correctly. UsingSchema.Unknownfor thecausefield appropriately allows wrapping arbitrary underlying errors while maintaining type safety.infra/package.json (2)
4-4: Verify SST upgrade compatibility. No security advisories found for SST between v3.10.13 and v3.17.14; review SST’s changelog for any breaking changes introduced in intermediate minor releases and ensure your infrastructure code handles them.
11-11: Pulumi version compatibility confirmed.
@pulumi/pulumi@^3.201.0is the current stable release with no known critical issues.packages/web-backend/src/Loom/index.ts (1)
1-1: Removal ofLoomImportVideoexport is safe — no remaining references found.packages/web-domain/src/Video.ts (1)
1-1: LGTM! HTTP status annotation added correctly.The addition of
HttpApiSchema.annotations({ status: 404 })toNotFoundErrorproperly exposes HTTP semantics for this domain error, aligning with the broader HTTP API integration patterns in this PR.Also applies to: 138-138
package.json (1)
33-33: LGTM! tsdown added as build tooling.The addition of
tsdownas a devDependency aligns with the new per-package tsdown configurations introduced across the monorepo (e.g.,packages/database/tsdown.config.ts,apps/web-cluster/tsconfig.json).packages/web-backend/src/index.ts (1)
4-4: LGTM! HttpLive exported for public API surface.The new export makes
HttpLiveaccessible to consumers of the web-backend package, complementing the HTTP API surface introduced in this PR.packages/web-domain/src/Http/index.ts (1)
1-2: LGTM! Clean barrel export for HTTP module.The barrel export pattern properly aggregates HTTP API and error types, providing a clean public interface for the Http module.
packages/web-backend/src/Workflows.ts (1)
1-1: LGTM! Refactored to use namespaced Loom.ImportVideo.The change consolidates the Loom import workflow under the
Loomnamespace, improving code organization. The refactoring fromLoomImportVideotoLoom.ImportVideoaligns with the updated Loom module structure introduced in this PR.Also applies to: 9-9
apps/web-cluster/package.json (1)
9-26: LGTM! Effect ecosystem dependencies properly declared.The package correctly declares dependencies from the Effect ecosystem (@effect/cluster, @effect/platform, @effect/workflow, etc.) and workspace packages (@cap/web-backend, @cap/web-domain). The version ranges appear compatible based on the Effect 3.x ecosystem.
Based on learnings about @effect/platform-node 0.96.1, @effect/sql-mysql2 0.45.1, and related packages.
packages/web-backend/src/Folders/index.ts (1)
3-8: LGTM! Error type centralized to web-domain.The refactoring correctly imports
DatabaseErroras a type from@cap/web-domain, aligning with the broader PR pattern of centralizing error types in the domain package. The type-only import is appropriate and maintains the same type safety while improving code organization.Also applies to: 11-11
packages/web-backend/src/Http/Live.ts (1)
7-9: LGTM!The HTTP layer construction correctly follows the Effect HttpApiBuilder pattern and properly composes the Loom HTTP implementation.
Based on learnings.
packages/web-backend/src/Database.ts (1)
2-2: LGTM!The refactor to use the shared
DatabaseErrorfrom@cap/web-domainpromotes consistency and reduces code duplication across the codebase.Also applies to: 11-11
packages/web-domain/src/index.ts (1)
2-2: API surface expansion looks good; verify S3Error export. The new barrel exports are fine. Please confirm S3Error is exported from S3Bucket.ts and doesn’t collide with any existing Errors exports.
Also applies to: 5-6, 11-11
apps/web/app/api/[[...route]]/route.ts (2)
6-8: Good centralized handler wiring via apiToHandler. Composition with HttpApiScalar and HttpLive looks correct for the catch‑all route.
Confirm that "/api" here matches your Next.js route base so the mount path isn’t doubled in production.
10-13: OK to map all methods to a single handler. This is consistent with the HttpApi handler style.
apps/web-cluster/scripts/post-deploy.ts (1)
18-20: Confirm publishConfig merge is safe for runtime. Merging publishConfig into package.json may alter "type", "exports", or "main" at runtime. Ensure this is intended for all @cap packages and won’t break resolution.
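The merge being flagged can be illustrated with a hypothetical manifest — `publishConfig` fields overwrite their top-level counterparts, which is exactly why `type`/`exports`/`main` deserve a check per package:

```typescript
// Hypothetical package manifest; the real script reads package.json from disk.
const pkg = {
  name: "@cap/example",
  type: "module",
  main: "src/index.ts",
  publishConfig: { main: "dist/index.js", types: "dist/index.d.ts" },
};

// The merge under review: publishConfig fields overwrite top-level ones.
const { publishConfig, ...rest } = pkg;
const published = { ...rest, ...publishConfig };
// After merging, "main" resolves to the dist output — the field whose
// runtime resolution the comment asks to verify for every @cap package.
```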
packages/web-backend/src/S3Buckets/S3BucketAccess.ts (1)
8-8: Centralized S3Error and error logging look good. Importing S3Error from the domain and logging the inner promise failures improves consistency and observability.
Also applies to: 30-31
packages/web-domain/src/Loom.ts (1)
23-27: Clarify the purpose of the `status` field.
The `VideoURLInvalidError` has a dynamic `status: Schema.Number` field, but the HTTP annotation specifies a fixed `status: 404`. If the `status` field is intended to store the external API's status code for debugging purposes, consider renaming it to `externalStatus` or `loomStatus` for clarity. If the field is not needed, remove it to avoid confusion.
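A rename along those lines could look like this; a plain TypeScript class stands in for the real `Schema.TaggedError` definition in `Loom.ts`, so only the field naming is meant literally:

```typescript
// Stand-in for the Schema.TaggedError-based definition. The HTTP-facing
// status stays fixed at 404 via the endpoint annotation, while the status
// code returned by Loom's API is kept separately for debugging only.
class VideoURLInvalidError extends Error {
  readonly _tag = "VideoURLInvalidError";
  constructor(readonly externalStatus: number) {
    super(`Loom rejected the video URL (upstream status ${externalStatus})`);
  }
}
```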
```dockerfile
USER deno

ENTRYPOINT ["deno", "run", "--allow-all"]
```
Container cannot start: ENTRYPOINT lacks a script.
deno run --allow-all without a target module exits immediately with “Missing script URL” when the container starts, so the image never boots. Please bake the actual runner module into the ENTRYPOINT (or provide a CMD here) so the image can launch on its own.
🤖 Prompt for AI Agents
In apps/web-cluster/Dockerfile around line 24, the ENTRYPOINT currently is just
["deno", "run", "--allow-all"] which causes the container to exit because no
script/module is supplied; update the Dockerfile to include the actual runner
module in the ENTRYPOINT (e.g. ["deno","run","--allow-all","/path/to/app.ts"])
or keep the ENTRYPOINT arguments and add a CMD with the default module path so
the image can start standalone; ensure the path matches where the build copies
the app into the image (use an absolute container path) and keep the exec JSON
form.
@coderabbitai this is done so we can choose which script to launch when running the container
@Brendonovich, thank you for clarifying! That makes perfect sense—leaving the ENTRYPOINT flexible allows you to specify which script to run at container launch time (e.g., docker run ... src/shard-manager.ts vs. src/runner/index.ts), which is ideal for a cluster deployment where the same image serves multiple roles. I appreciate the explanation!
✏️ Learnings added
Learnt from: Brendonovich
PR: CapSoftware/Cap#1165
File: apps/web-cluster/Dockerfile:24-24
Timestamp: 2025-10-10T02:06:26.386Z
Learning: In apps/web-cluster/Dockerfile, the ENTRYPOINT ["deno", "run", "--allow-all"] without a script path is intentional to allow choosing which script to launch when running the container, enabling the same image to be used for different services.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
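For reference, an image that keeps the flexible ENTRYPOINT but can still boot standalone could pair it with a default CMD, overridden per service at `docker run` time. This is a hedged sketch — the script paths below are assumptions, not taken from the repo:

```dockerfile
ENTRYPOINT ["deno", "run", "--allow-all"]
# Hypothetical default module; `docker run <image> src/shard-manager.ts`
# replaces CMD while keeping the ENTRYPOINT intact.
CMD ["src/runner/index.ts"]
```

With this shape, `docker run <image>` launches the runner, while passing any other module path after the image name selects a different role from the same image.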
```ts
import { FileSystem } from "@effect/platform";
import { NodeContext, NodeRuntime } from "@effect/platform-node";
import { Effect } from "effect";
```
Provide NodeFileSystem; otherwise FileSystem service is missing at runtime
The script acquires FileSystem but never provides a Node implementation. Add NodeFileSystem.layer.
Apply:
-import { NodeContext, NodeRuntime } from "@effect/platform-node";
+import { NodeContext, NodeRuntime, NodeFileSystem } from "@effect/platform-node";
@@
-}).pipe(Effect.provide(NodeContext.layer), NodeRuntime.runMain);
+}).pipe(
+ Effect.provide(NodeFileSystem.layer),
+ Effect.provide(NodeContext.layer),
+ NodeRuntime.runMain,
+);
Also applies to: 24-24
🤖 Prompt for AI Agents
In apps/web-cluster/scripts/post-deploy.ts around lines 1-3 (and also update
line 24), the FileSystem service is acquired but no Node implementation is
provided; add the NodeFileSystem.layer to the runtime/environment composition so
the FileSystem dependency is satisfied at runtime. Modify the
provision/composition of NodeContext/NodeRuntime to include NodeFileSystem.layer
(import it from @effect/platform-node as needed) alongside existing layers so
the script runs with a concrete FileSystem implementation.
@coderabbitai NodeContext.layer provides FileSystem, among other things
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for NodeContext usage and documentation in the codebase
rg -n "NodeContext\.layer" -A 5 -B 5
# Look for any FileSystem usage patterns with NodeContext
rg -n "FileSystem.*NodeContext|NodeContext.*FileSystem" -A 3 -B 3
# Check if there are other scripts using similar patterns
fd -e ts -e js | xargs rg -l "NodeContext.*FileSystem" | head -10
Length of output: 929
🏁 Script executed:
#!/bin/bash
# Check if NodeFileSystem is used anywhere in the codebase
rg -n "NodeFileSystem" --type ts
# Look at the generate-openapi.ts script to see full pattern
cat packages/web-domain/scripts/generate-openapi.ts
# Search for Effect documentation or comments about NodeContext
rg -n "NodeContext" -B 2 -A 2 | head -30
Length of output: 1688
@Brendonovich, you're absolutely right! I can see from the codebase that packages/web-domain/scripts/generate-openapi.ts uses the exact same pattern—acquiring FileSystem and providing only NodeContext.layer—confirming that NodeContext.layer includes the FileSystem service. My original comment was incorrect. Thank you for the correction!
✏️ Learnings added
Learnt from: Brendonovich
PR: CapSoftware/Cap#1165
File: apps/web-cluster/scripts/post-deploy.ts:1-3
Timestamp: 2025-10-10T02:04:45.547Z
Learning: In Effect's effect/platform-node library, NodeContext.layer provides the FileSystem service (among other Node.js platform services), so there's no need to explicitly provide NodeFileSystem.layer when using FileSystem in Node.js scripts that already provide NodeContext.layer.
```ts
const deps = yield* fs.readDirectory(dotPnpm);
const capDeps = deps.filter((dep) => dep.startsWith("@cap"));
```
🛠️ Refactor suggestion | 🟠 Major
Harden pnpm entry detection and path derivation; pretty-print JSON
- startsWith("@cap") will miss common pnpm folder names (e.g., registry.npmjs.org+@cap+pkg@…). Use a regex test.
- Replace all "+" when deriving the package path.
- Pretty-print JSON for readability.
-const deps = yield* fs.readDirectory(dotPnpm);
-const capDeps = deps.filter((dep) => dep.startsWith("@cap"));
+const deps = yield* fs.readDirectory(dotPnpm);
+const capDeps = deps.filter((dep) => /(^|[+])@cap[+]/.test(dep));
@@
-const pkgName = key.split("@file")[0].replace("+", "/");
+const pkgName = key.split("@file")[0].replace(/\+/g, "/");
@@
-yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson));
+yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson, null, 2) + "\n");
Optionally, skip missing paths to avoid hard-fail:
// before readFileString
if (!(yield* fs.exists(pkgJsonPath))) continue;
Also applies to: 13-15, 22-22
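To see why the regex matters, here is a standalone sketch of the two suggested fixes. The folder names are made-up examples of pnpm virtual-store entries, not taken from an actual `.pnpm` directory:

```javascript
// Hypothetical pnpm virtual-store entries (illustrative names only)
const deps = [
	"@cap+web-domain@file+packages+web-domain",
	"registry.npmjs.org+@cap+database@1.0.0",
	"registry.npmjs.org+zod@3.23.8",
];

// startsWith("@cap") would miss the registry-prefixed second entry;
// the suggested regex catches "@cap+" at the start or after a "+"
const capDeps = deps.filter((dep) => /(^|[+])@cap[+]/.test(dep));
console.log(capDeps.length); // 2

// Deriving a package name: replace ALL "+" separators, not just the first
const pkgName = "@cap+web-domain@file+packages+web-domain"
	.split("@file")[0]
	.replace(/\+/g, "/");
console.log(pkgName); // "@cap/web-domain"
```

A plain `.replace("+", "/")` would only rewrite the first separator, which is why the global flag is part of the suggestion.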
```ts
const response = await fetch(
	`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
);
```
Default the probe port the same way as the server
When HEALTH_CHECK_PORT isn’t set, the URL becomes http://127.0.0.1:undefined/health, so the fetch throws and the script exits with status 1 even though the server defaults to port 3000. Mirror the server’s fallback: compute a numeric port (with default 3000) and interpolate that value in the URL.
- const response = await fetch(
- `http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
- );
+ const port = process.env.HEALTH_CHECK_PORT
+ ? parseInt(process.env.HEALTH_CHECK_PORT, 10)
+ : 3000;
+ const response = await fetch(`http://127.0.0.1:${port}/health`);
🤖 Prompt for AI Agents
In apps/web-cluster/src/health-check.ts around lines 3 to 5, the code
interpolates process.env.HEALTH_CHECK_PORT directly causing the URL to be
"http://127.0.0.1:undefined/health" when the env var is missing; compute a
numeric port variable with a fallback to 3000 (e.g.
parseInt(process.env.HEALTH_CHECK_PORT || '') || 3000) and use that port
variable in the fetch URL so the health check defaults to the server's port when
the env var is not set.
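The fallback logic can be isolated into a tiny pure function — a minimal sketch, with the env var name and the default of 3000 taken from the comment above:

```javascript
// Mirrors the server's fallback so the probe and the server agree
const resolveHealthPort = (env) =>
	env.HEALTH_CHECK_PORT ? parseInt(env.HEALTH_CHECK_PORT, 10) : 3000;

console.log(resolveHealthPort({})); // 3000 — no more "127.0.0.1:undefined"
console.log(resolveHealthPort({ HEALTH_CHECK_PORT: "8080" })); // 8080
console.log(`http://127.0.0.1:${resolveHealthPort({})}/health`); // "http://127.0.0.1:3000/health"
```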
```ts
Layer.provide(NodeHttpServer.layer(createServer, { port: 42169 })),
Layer.provide(Videos.Default),
Layer.provide(S3Buckets.Default),
Layer.provide(Database.Default),
Layer.provide(FetchHttpClient.layer),
Layer.provide(DatabaseLive),
Layer.provide(TracingLayer),
```
Align the runner’s advertised port with the listener.
RunnerAddress.make falls back to the PORT config (default 42069), but the HTTP server is hard-coded to listen on 42169. In environments where PORT isn’t set, the cluster advertises 42069 while the process listens on 42169, so peers can never connect. Even when PORT is set, we’re still deriving the address and the listener from different sources. Please derive the NodeHttpServer port from the same configuration (e.g., the ContainerMetadata value) so the runner always advertises the port it actually binds to.
🤖 Prompt for AI Agents
In apps/web-cluster/src/runner/index.ts around lines 88 to 94, the HTTP server
is hard-coded to port 42169 while RunnerAddress.make falls back to PORT (default
42069), causing the advertised address to mismatch the actual listener; change
the NodeHttpServer.layer invocation to derive its port from the same
configuration source used by RunnerAddress.make (e.g., ContainerMetadata or the
central Config/Environment value) so both the listener and the advertised
RunnerAddress use the identical port value; pass that configured port into
NodeHttpServer.layer(createServer, { port: <configuredPort> }) (or wire
ContainerMetadata into the layer providing the server) so the process advertises
the exact port it binds to.
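Stripped of the Effect specifics, the single-source idea looks like this — a hedged sketch where the 42069 default comes from the comment above and the config shape is an assumption:

```javascript
// Resolve the port once, from one place…
const resolvePort = (env) => (env.PORT ? parseInt(env.PORT, 10) : 42069);

const port = resolvePort({ PORT: "42169" });
// …then feed the same value to both the listener and the advertised address,
// so the runner can never advertise a port it does not bind.
const serverOptions = { port };     // stands in for NodeHttpServer.layer(createServer, { port })
const advertisedAddress = { port }; // stands in for RunnerAddress.make(host, port)

console.log(serverOptions.port === advertisedAddress.port); // true
```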
```diff
+let _env: ReturnType<typeof create>;
+
+const create = () =>
+	createEnv({
+		client: {
+			NEXT_PUBLIC_IS_CAP: z.string().optional(),
+			NEXT_PUBLIC_POSTHOG_KEY: z.string().optional(),
+			NEXT_PUBLIC_POSTHOG_HOST: z.string().optional(),
+			NEXT_PUBLIC_META_PIXEL_ID: z.string().optional(),
+			NEXT_PUBLIC_GOOGLE_AW_ID: z.string().optional(),
+			NEXT_PUBLIC_WEB_URL: z.string(),
+			NEXT_PUBLIC_CAP_AWS_BUCKET: z.string(),
+			NEXT_PUBLIC_CAP_AWS_REGION: z.string(),
+			NEXT_PUBLIC_CAP_AWS_ENDPOINT: z.string().optional(),
+			NEXT_PUBLIC_CAP_AWS_BUCKET_URL: z.string().optional(),
+			NEXT_PUBLIC_DOCKER_BUILD: z.string().optional(),
+		},
+		runtimeEnv: {
+			NEXT_PUBLIC_IS_CAP: process.env.NEXT_PUBLIC_IS_CAP,
+			NEXT_PUBLIC_POSTHOG_KEY: process.env.NEXT_PUBLIC_POSTHOG_KEY,
+			NEXT_PUBLIC_POSTHOG_HOST: process.env.NEXT_PUBLIC_POSTHOG_HOST,
+			NEXT_PUBLIC_META_PIXEL_ID: process.env.NEXT_PUBLIC_META_PIXEL_ID,
+			NEXT_PUBLIC_GOOGLE_AW_ID: process.env.NEXT_PUBLIC_GOOGLE_AW_ID,
+			NEXT_PUBLIC_WEB_URL:
+				process.env.WEB_URL ?? process.env.NEXT_PUBLIC_WEB_URL,
+			NEXT_PUBLIC_CAP_AWS_BUCKET:
+				process.env.CAP_AWS_BUCKET ?? process.env.NEXT_PUBLIC_CAP_AWS_BUCKET,
+			NEXT_PUBLIC_CAP_AWS_REGION:
+				process.env.CAP_AWS_REGION ?? process.env.NEXT_PUBLIC_CAP_AWS_REGION,
+			NEXT_PUBLIC_CAP_AWS_ENDPOINT:
+				process.env.CAP_AWS_ENDPOINT ??
+				process.env.NEXT_PUBLIC_CAP_AWS_ENDPOINT,
+			NEXT_PUBLIC_CAP_AWS_BUCKET_URL:
+				process.env.CAP_AWS_URL ?? process.env.NEXT_PUBLIC_CAP_AWS_BUCKET_URL,
+			NEXT_PUBLIC_DOCKER_BUILD: process.env.NEXT_PUBLIC_DOCKER_BUILD,
+		},
+	});
+
 // Environment variables that are needed in the build process, and may be incorrect on the client.
 // Some are only provided by `NEXT_PUBLIC`, but others can be provdied at runtime
-export const buildEnv = createEnv({
-	client: {
-		NEXT_PUBLIC_IS_CAP: z.string().optional(),
-		NEXT_PUBLIC_POSTHOG_KEY: z.string().optional(),
-		NEXT_PUBLIC_POSTHOG_HOST: z.string().optional(),
-		NEXT_PUBLIC_META_PIXEL_ID: z.string().optional(),
-		NEXT_PUBLIC_GOOGLE_AW_ID: z.string().optional(),
-		NEXT_PUBLIC_WEB_URL: z.string(),
-		NEXT_PUBLIC_CAP_AWS_BUCKET: z.string(),
-		NEXT_PUBLIC_CAP_AWS_REGION: z.string(),
-		NEXT_PUBLIC_CAP_AWS_ENDPOINT: z.string().optional(),
-		NEXT_PUBLIC_CAP_AWS_BUCKET_URL: z.string().optional(),
-		NEXT_PUBLIC_DOCKER_BUILD: z.string().optional(),
-	},
-	runtimeEnv: {
-		NEXT_PUBLIC_IS_CAP: process.env.NEXT_PUBLIC_IS_CAP,
-		NEXT_PUBLIC_POSTHOG_KEY: process.env.NEXT_PUBLIC_POSTHOG_KEY,
-		NEXT_PUBLIC_POSTHOG_HOST: process.env.NEXT_PUBLIC_POSTHOG_HOST,
-		NEXT_PUBLIC_META_PIXEL_ID: process.env.NEXT_PUBLIC_META_PIXEL_ID,
-		NEXT_PUBLIC_GOOGLE_AW_ID: process.env.NEXT_PUBLIC_GOOGLE_AW_ID,
-		NEXT_PUBLIC_WEB_URL: process.env.WEB_URL ?? process.env.NEXT_PUBLIC_WEB_URL,
-		NEXT_PUBLIC_CAP_AWS_BUCKET:
-			process.env.CAP_AWS_BUCKET ?? process.env.NEXT_PUBLIC_CAP_AWS_BUCKET,
-		NEXT_PUBLIC_CAP_AWS_REGION:
-			process.env.CAP_AWS_REGION ?? process.env.NEXT_PUBLIC_CAP_AWS_REGION,
-		NEXT_PUBLIC_CAP_AWS_ENDPOINT:
-			process.env.CAP_AWS_ENDPOINT ?? process.env.NEXT_PUBLIC_CAP_AWS_ENDPOINT,
-		NEXT_PUBLIC_CAP_AWS_BUCKET_URL:
-			process.env.CAP_AWS_URL ?? process.env.NEXT_PUBLIC_CAP_AWS_BUCKET_URL,
-		NEXT_PUBLIC_DOCKER_BUILD: process.env.NEXT_PUBLIC_DOCKER_BUILD,
+export const buildEnv = new Proxy({} as typeof _env, {
+	get(_, key) {
+		if (!_env) _env = create();
+
+		return (_env as any)[key];
```
Prevent _env from being read before assignment.
With --strict, let _env: ReturnType<typeof create>; triggers “Variable '_env' is used before being assigned.” because the proxy reads _env before the first assignment. Allow _env to be undefined and keep the proxy’s exposed type by introducing an Env alias and declaring _env after create.
- let _env: ReturnType<typeof create>;
-
-const create = () =>
+const create = () =>
createEnv({
client: {
NEXT_PUBLIC_IS_CAP: z.string().optional(),
NEXT_PUBLIC_POSTHOG_KEY: z.string().optional(),
NEXT_PUBLIC_POSTHOG_HOST: z.string().optional(),
@@
},
});
+type Env = ReturnType<typeof create>;
+
+let _env: Env | undefined;
+
// Environment variables that are needed in the build process, and may be incorrect on the client.
// Some are only provided by `NEXT_PUBLIC`, but others can be provdied at runtime
-export const buildEnv = new Proxy({} as typeof _env, {
+export const buildEnv = new Proxy({} as Env, {
get(_, key) {
if (!_env) _env = create();
return (_env as any)[key];
},
});
🤖 Prompt for AI Agents
In packages/env/build.ts around lines 6 to 50, the top-level let _env
declaration causes a TS "used before being assigned" error under --strict
because the proxy getter can read _env before it's assigned; introduce a type
alias (e.g. type Env = ReturnType<typeof create>), declare _env as let _env: Env
| undefined (or move its declaration to after create and type it as Env |
undefined), keep the proxy getter check if (!_env) _env = create(); and update
buildEnv's typing to use Env so the proxy still exposes the correct type while
allowing _env to be undefined until initialized.
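Stripped of the Next.js specifics, the lazy-Proxy pattern under review looks like this — a minimal sketch where `create` stands in for the real `createEnv` call and the env shape is invented:

```javascript
let createCalls = 0;

// Stand-in for createEnv(...) — in the real code this validates process.env
const create = () => {
	createCalls++;
	return { NEXT_PUBLIC_WEB_URL: "https://cap.example" };
};

let _env; // undefined until the first property read (no eager validation)

const buildEnv = new Proxy({}, {
	get(_, key) {
		if (!_env) _env = create(); // validation runs lazily, exactly once
		return _env[key];
	},
});

console.log(buildEnv.NEXT_PUBLIC_WEB_URL); // "https://cap.example"
console.log(createCalls); // 1
console.log(buildEnv.NEXT_PUBLIC_WEB_URL); // repeated reads reuse the cached env
console.log(createCalls); // still 1
```

Typing `_env` as `Env | undefined`, as the review suggests, simply makes the "undefined until first read" state explicit to the compiler.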
```ts
export const handleDomainError = <A, E, R>(e: Effect.Effect<A, E, R>) =>
	e.pipe(
		Effect.catchIf(
			(e) => Schema.is(DatabaseError)(e),
			() => new Http.InternalServerError({ cause: "database" }),
		),
		Effect.catchIf(
			(e) => Schema.is(S3Error)(e),
			() => new Http.InternalServerError({ cause: "database" }),
		),
```
Return failures from catchIf handlers
Effect.catchIf expects the handler to yield a failing effect. Returning a raw Http.InternalServerError turns the domain failure into a successful value, so the pipeline no longer fails and the HTTP error never surfaces. Wrap each handler with Effect.fail(...) so the translated error propagates correctly.
Effect.catchIf(
(e) => Schema.is(DatabaseError)(e),
- () => new Http.InternalServerError({ cause: "database" }),
+ () => Effect.fail(new Http.InternalServerError({ cause: "database" })),
),
Effect.catchIf(
(e) => Schema.is(S3Error)(e),
- () => new Http.InternalServerError({ cause: "database" }),
+ () => Effect.fail(new Http.InternalServerError({ cause: "database" })),
),
🤖 Prompt for AI Agents
In packages/web-backend/src/Http/Errors.ts around lines 4 to 13, the handlers
passed to Effect.catchIf return raw Http.InternalServerError instances (turning
the pipeline into a success) — wrap each handler return in Effect.fail(...) so
the translated HTTP error is produced as a failing effect and propagates; update
both the DatabaseError and S3Error handlers to return Effect.fail(new
Http.InternalServerError({ cause: "database" })) and Effect.fail(new
Http.InternalServerError({ cause: "database" })) respectively (or adjust the S3
cause string if intended).
```ts
Policy.withPolicy(orgPolicy.isMember(payload.cap.orgId)),
handleDomainError,
```
Use the authenticated orgId when enforcing policy.
The request payload is untrusted, so calling orgPolicy.isMember(payload.cap.orgId) lets a client spoof cap.orgId. Use the user.activeOrganizationId you just resolved from CurrentUser instead.
- Policy.withPolicy(orgPolicy.isMember(payload.cap.orgId)),
+ Policy.withPolicy(
+ orgPolicy.isMember(user.activeOrganizationId),
+ ),
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/Http.ts around lines 35 to 36, the policy check
currently uses the untrusted payload value payload.cap.orgId which a client can
spoof; replace that with the authenticated organization id resolved from
CurrentUser (user.activeOrganizationId) so the call becomes
Policy.withPolicy(orgPolicy.isMember(user.activeOrganizationId)). Ensure the
user variable is the same CurrentUser instance previously resolved in the
request handler and remove reliance on payload.cap.orgId for authorization.
```diff
 	.get(payload.loom.video.downloadUrl)
-	.pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
+	.pipe(
+		Effect.catchAll((cause) => new Loom.ExternalLoomError({ cause })),
+	);
 const contentLength = Headers.get(resp.headers, "content-length").pipe(
```
Propagate download failures instead of returning them as values.
Inside the download step, Effect.catchAll returns new Loom.ExternalLoomError({ cause }) directly. That turns the failure into a successful result, so resp becomes an error object and the subsequent Headers.get(resp.headers, …) explodes at runtime. We need to fail the effect with the mapped error instead.
- const resp = yield* http
- .get(payload.loom.video.downloadUrl)
- .pipe(
- Effect.catchAll((cause) => new Loom.ExternalLoomError({ cause })),
- );
+ const resp = yield* http
+ .get(payload.loom.video.downloadUrl)
+ .pipe(
+ Effect.catchAll((cause) =>
+ Effect.fail(new Loom.ExternalLoomError({ cause })),
+ ),
+ );
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 112-116, the
download Effect uses Effect.catchAll which returns a new Loom.ExternalLoomError
value, turning failures into successful results; change this so the effect fails
with the mapped error instead. Replace the current Effect.catchAll((cause) =>
new Loom.ExternalLoomError({ cause })) with either Effect.catchAll((cause) =>
Effect.fail(new Loom.ExternalLoomError({ cause }))) or simply use
Effect.mapError((cause) => new Loom.ExternalLoomError({ cause }))), so resp
remains a real response and subsequent Headers.get(resp.headers, ...) operates
on a valid response.
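Setting the Effect API aside, the underlying pitfall also exists with plain Promises — a hedged analogue (no real Loom types involved) showing why a catch handler must re-raise rather than return:

```javascript
const download = () => Promise.reject(new Error("network"));

async function demo() {
	// Buggy: the catch handler *returns* the mapped error, so the chain
	// resolves successfully with an error object as its value — downstream
	// code would then try to read headers off an error.
	const buggy = await download().catch((cause) => ({ _tag: "ExternalLoomError", cause }));

	// Fixed: re-throw (the Promise analogue of Effect.fail) so the
	// failure keeps propagating instead of masquerading as a result.
	let caught;
	try {
		await download().catch((cause) => {
			throw { _tag: "ExternalLoomError", cause };
		});
	} catch (e) {
		caught = e;
	}
	return { buggyTag: buggy._tag, caughtTag: caught._tag };
}

demo().then((r) => console.log(r));
```

In Effect terms, `Effect.fail(...)` (or `Effect.mapError`) plays the role of the re-throw.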
```diff
 export const ImportVideo = Workflow.make({
 	name: "LoomImportVideo",
 	payload: {
 		cap: Schema.Struct({
 			userId: Schema.String,
 			orgId: Schema.String,
 		}),
-		loom: Schema.Struct({
-			userId: Schema.String,
-			orgId: Schema.String,
-			video: Schema.Struct({
-				id: Schema.String,
-				name: Schema.String,
-				downloadUrl: Schema.String,
-				width: Schema.OptionFromNullOr(Schema.Number),
-				height: Schema.OptionFromNullOr(Schema.Number),
-				fps: Schema.OptionFromNullOr(Schema.Number),
-				durationSecs: Schema.OptionFromNullOr(Schema.Number),
-			}),
-		}),
+		loom: ImportVideoLoomData,
 		attempt: Schema.optional(Schema.Number),
 	},
-	error: LoomImportVideoError,
 	success: Schema.Struct({
 		videoId: Video.VideoId,
 	}),
+	error: Schema.Union(
+		DatabaseError,
+		Video.NotFoundError,
+		S3Error,
+		ExternalLoomError,
+		VideoURLInvalidError,
+	),
+	idempotencyKey: (p) =>
+		`${p.cap.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`,
 });
```
Data duplication creates consistency risk.
The workflow payload duplicates orgId in both cap and loom, and the idempotency key inconsistently mixes cap.userId with loom.orgId (line 64). If these fields can diverge, it's unclear which value is authoritative and could lead to incorrect idempotency behavior or data inconsistencies.
Consider one of the following approaches:
- Remove `orgId` from `cap` and use only `loom.orgId` throughout (update the idempotency key to `p.loom.userId` and `p.loom.orgId`).
- If `cap` represents authenticated context, remove `userId` and `orgId` from `loom` and reference them from `cap` only.
- Add validation to ensure `cap.orgId === loom.orgId` if both must exist.
Apply this diff to use consistent sources in the idempotency key:
idempotencyKey: (p) =>
- `${p.cap.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`,
+ `${p.loom.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`,
🤖 Prompt for AI Agents
In packages/web-domain/src/Loom.ts around lines 43–65, the payload duplicates
orgId between cap and loom and the idempotency key mixes sources; ensure a
single authoritative source and make the idempotency key consistent: add a
runtime validation that cap.orgId === loom.orgId (throw a validation/error if
they differ) and then compute the idempotency key from the authoritative cap
fields, e.g. `${p.cap.userId}-${p.cap.orgId}-${p.loom.video.id}-${p.attempt ??
0}` (or alternatively remove orgId from cap and consistently use loom.orgId
everywhere if you prefer that model).
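As a concrete illustration of the validation option, here is a plain TypeScript sketch (hypothetical type and function names; the real schema lives in `packages/web-domain/src/Loom.ts`) that rejects diverging orgs and derives the key from the authoritative `cap` fields:

```typescript
interface ImportVideoPayload {
  cap: { userId: string; orgId: string };
  loom: { userId: string; orgId: string; video: { id: string } };
  attempt?: number;
}

// Fail fast when the authenticated org and the Loom org diverge,
// then build the key from the authoritative cap fields.
function importVideoIdempotencyKey(p: ImportVideoPayload): string {
  if (p.cap.orgId !== p.loom.orgId) {
    throw new Error(`orgId mismatch: cap=${p.cap.orgId} loom=${p.loom.orgId}`);
  }
  return `${p.cap.userId}-${p.cap.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`;
}
```

In the Effect codebase the same check would more naturally live in a `Schema.filter` on the payload struct, so invalid payloads are rejected at decode time rather than when the key is computed.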
Actionable comments posted: 22
🧹 Nitpick comments (8)
packages/web-backend/src/Spaces/SpacesRepo.ts (1)
13-18: Prefer `Array.get(0)` for consistency with other methods.

The `getById` method uses `Option.fromNullable(r[0])` while `membershipForVideo` (line 36) and `membership` (line 53) use `Array.get(0)`. Both approaches return `Option<T>`, but `Array.get(0)` is more idiomatic for array operations and maintains consistency within this file.

Apply this diff to align with the existing pattern:

```diff
 getById: (spaceId: string) =>
   db
     .execute((db) =>
       db.select().from(Db.spaces).where(Dz.eq(Db.spaces.id, spaceId)),
     )
-    .pipe(Effect.map((r) => Option.fromNullable(r[0]))),
+    .pipe(Effect.map(Array.get(0))),
```

packages/web-domain/tsdown.config.ts (1)
1-8: Consider enabling declaration files for domain package.

Similar to the `env` package, `web-domain` is a shared domain package consumed by other parts of the codebase. With `dts: false`, consuming packages won't have TypeScript type information available from the built artifacts.

Consider enabling declaration files:

```diff
 export default defineConfig({
   entry: ["./src/index.ts"],
   platform: "node",
   treeshake: true,
-  dts: false,
+  dts: true,
 });
```

Note: The AI summary mentions filtering exports containing "email", but this logic is not present in the provided configuration.
packages/web-backend/tsdown.config.ts (1)
1-8: Consider declaration files for consistency.

The `web-backend` package also disables declaration files. If this package exports types or interfaces consumed by other packages, consider enabling `dts: true` for consistency with the other shared packages.

```diff
 export default defineConfig({
   entry: ["./src/index.ts"],
   platform: "node",
   treeshake: true,
-  dts: false,
+  dts: true,
 });
```

packages/web-domain/src/S3Bucket.ts (1)
20-22: Consider more specific error cause type.
`S3Error` uses `Schema.Unknown` for the `cause` field, which accepts any value and provides no type safety. For better error handling and debugging, consider using a more specific type or union of expected error types.

```diff
 export class S3Error extends Schema.TaggedError<S3Error>()("S3Error", {
-  cause: Schema.Unknown,
+  cause: Schema.Union(
+    Schema.Struct({ code: Schema.String, message: Schema.String }),
+    Schema.String,
+  ),
 }) {}
```

This provides structure while remaining flexible for different error scenarios from the S3 SDK.
packages/web-domain/src/Database.ts (1)
3-6: Consider a more specific cause type for better error context.

The `cause` field is typed as `Schema.Unknown`, which is very permissive. For improved type safety and debugging, consider using a discriminated union of specific cause types (e.g., `"ConnectionError" | "QueryError" | "TransactionError"`).

```diff
 export class DatabaseError extends Schema.TaggedError<DatabaseError>()(
   "DatabaseError",
-  { cause: Schema.Unknown },
+  { cause: Schema.Literal("ConnectionError", "QueryError", "TransactionError", "Unknown") },
 ) {}
```

apps/web-cluster/scripts/post-deploy.ts (1)
22-22: Add indentation to JSON output for readability.

The JSON output lacks formatting, making it harder to debug if manual inspection is needed.

Apply this diff:

```diff
-  yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson));
+  yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson, null, 2));
```

apps/web-cluster/Dockerfile (1)
26-28: Document the purpose of exposed ports.

Three ports are exposed (8080, 42069, 42169) but their purposes aren't documented. Consider adding comments to clarify:
- Which services listen on each port
- Whether all ports need to be publicly exposed

Add comments:

```diff
+# Main application port
 EXPOSE 8080
+# Cluster communication port
 EXPOSE 42069
+# Health check port
 EXPOSE 42169
```

packages/web-backend/src/index.ts (1)
4-4: Filename casing deviates from repo guideline

Guideline: kebab-case for TS module filenames. Consider renaming Http/Live.ts → http/live.ts (and adjusting imports) in a follow-up.
As per coding guidelines
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (67)
- apps/web-cluster/Dockerfile (1 hunks)
- apps/web-cluster/package.json (1 hunks)
- apps/web-cluster/scripts/post-deploy.ts (1 hunks)
- apps/web-cluster/src/cluster/container-metadata.ts (1 hunks)
- apps/web-cluster/src/health-check.ts (1 hunks)
- apps/web-cluster/src/runner/health-server.ts (1 hunks)
- apps/web-cluster/src/runner/index.ts (1 hunks)
- apps/web-cluster/src/shard-manager.ts (1 hunks)
- apps/web-cluster/src/shared/database.ts (1 hunks)
- apps/web-cluster/tsconfig.json (1 hunks)
- apps/web/app/api/[[...route]]/route.ts (1 hunks)
- apps/web/app/layout.tsx (0 hunks)
- apps/web/instrumentation.node.ts (0 hunks)
- apps/web/instrumentation.ts (1 hunks)
- apps/web/lib/EffectRuntime.ts (1 hunks)
- apps/web/lib/server.ts (3 hunks)
- apps/web/lib/tracing-server.ts (0 hunks)
- apps/web/lib/tracing.ts (1 hunks)
- apps/web/next.config.mjs (2 hunks)
- apps/web/package.json (2 hunks)
- apps/workflow-manager/package.json (0 hunks)
- apps/workflow-manager/src/index.ts (0 hunks)
- apps/workflow-runner/package.json (0 hunks)
- apps/workflow-runner/src/index.ts (0 hunks)
- infra/package.json (1 hunks)
- infra/sst-env.d.ts (1 hunks)
- infra/sst.config.ts (6 hunks)
- package.json (1 hunks)
- packages/config/base.tsconfig.json (1 hunks)
- packages/database/package.json (2 hunks)
- packages/database/tsdown.config.ts (1 hunks)
- packages/database/types/index.ts (1 hunks)
- packages/env/build.ts (1 hunks)
- packages/env/package.json (1 hunks)
- packages/env/tsdown.config.ts (1 hunks)
- packages/utils/package.json (1 hunks)
- packages/utils/src/constants/s3.ts (0 hunks)
- packages/utils/src/index.ts (0 hunks)
- packages/utils/tsdown.config.ts (1 hunks)
- packages/web-backend/package.json (1 hunks)
- packages/web-backend/src/Auth.ts (1 hunks)
- packages/web-backend/src/Database.ts (1 hunks)
- packages/web-backend/src/Folders/index.ts (1 hunks)
- packages/web-backend/src/Http/Errors.ts (1 hunks)
- packages/web-backend/src/Http/Live.ts (1 hunks)
- packages/web-backend/src/Loom/Http.ts (1 hunks)
- packages/web-backend/src/Loom/ImportVideo.ts (5 hunks)
- packages/web-backend/src/Loom/index.ts (1 hunks)
- packages/web-backend/src/S3Buckets/S3BucketAccess.ts (2 hunks)
- packages/web-backend/src/S3Buckets/index.ts (4 hunks)
- packages/web-backend/src/Spaces/SpacesRepo.ts (2 hunks)
- packages/web-backend/src/Workflows.ts (1 hunks)
- packages/web-backend/src/index.ts (1 hunks)
- packages/web-backend/tsdown.config.ts (1 hunks)
- packages/web-domain/package.json (1 hunks)
- packages/web-domain/scripts/generate-openapi.ts (1 hunks)
- packages/web-domain/src/Database.ts (1 hunks)
- packages/web-domain/src/Http/Api.ts (1 hunks)
- packages/web-domain/src/Http/Errors.ts (1 hunks)
- packages/web-domain/src/Http/index.ts (1 hunks)
- packages/web-domain/src/Loom.ts (1 hunks)
- packages/web-domain/src/Policy.ts (2 hunks)
- packages/web-domain/src/S3Bucket.ts (1 hunks)
- packages/web-domain/src/Video.ts (2 hunks)
- packages/web-domain/src/index.ts (1 hunks)
- packages/web-domain/tsconfig.json (1 hunks)
- packages/web-domain/tsdown.config.ts (1 hunks)
💤 Files with no reviewable changes (9)
- apps/workflow-manager/package.json
- apps/web/app/layout.tsx
- apps/workflow-runner/src/index.ts
- apps/web/instrumentation.node.ts
- packages/utils/src/constants/s3.ts
- apps/workflow-manager/src/index.ts
- apps/workflow-runner/package.json
- apps/web/lib/tracing-server.ts
- packages/utils/src/index.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (CLAUDE.md)
Do not add inline, block, or docstring comments in any language; code must be self-explanatory
Files:
apps/web-cluster/src/health-check.ts, packages/utils/tsdown.config.ts, packages/web-backend/src/Loom/index.ts, packages/web-backend/tsdown.config.ts, packages/env/tsdown.config.ts, packages/web-domain/src/Policy.ts, packages/web-domain/src/Http/index.ts, packages/web-backend/src/Loom/Http.ts, packages/web-backend/src/Http/Errors.ts, packages/web-backend/src/index.ts, apps/web/lib/tracing.ts, packages/web-backend/src/Http/Live.ts, packages/web-backend/src/Spaces/SpacesRepo.ts, packages/web-domain/src/S3Bucket.ts, apps/web-cluster/src/cluster/container-metadata.ts, packages/web-backend/src/S3Buckets/index.ts, apps/web-cluster/src/shard-manager.ts, packages/web-domain/src/Database.ts, packages/web-backend/src/Workflows.ts, packages/web-domain/src/index.ts, packages/database/types/index.ts, apps/web-cluster/scripts/post-deploy.ts, infra/sst.config.ts, apps/web/instrumentation.ts, packages/web-domain/tsdown.config.ts, apps/web-cluster/src/runner/health-server.ts, packages/web-domain/src/Http/Api.ts, apps/web/app/api/[[...route]]/route.ts, packages/env/build.ts, packages/web-domain/src/Http/Errors.ts, packages/database/tsdown.config.ts, apps/web/lib/EffectRuntime.ts, packages/web-backend/src/Database.ts, packages/web-domain/src/Video.ts, packages/web-backend/src/Folders/index.ts, packages/web-backend/src/Auth.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-domain/scripts/generate-openapi.ts, packages/web-backend/src/S3Buckets/S3BucketAccess.ts, apps/web-cluster/src/shared/database.ts, infra/sst-env.d.ts, apps/web/lib/server.ts, apps/web-cluster/src/runner/index.ts, packages/web-domain/src/Loom.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; leverage shared types from packages
**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running `pnpm format`.
Files:
apps/web-cluster/src/health-check.ts, packages/utils/tsdown.config.ts, packages/web-backend/src/Loom/index.ts, packages/web-backend/tsdown.config.ts, packages/env/tsdown.config.ts, packages/web-domain/src/Policy.ts, packages/web-domain/src/Http/index.ts, packages/web-backend/src/Loom/Http.ts, packages/web-backend/src/Http/Errors.ts, packages/web-backend/src/index.ts, apps/web/lib/tracing.ts, packages/web-backend/src/Http/Live.ts, packages/web-backend/src/Spaces/SpacesRepo.ts, packages/web-domain/src/S3Bucket.ts, apps/web-cluster/src/cluster/container-metadata.ts, packages/web-backend/src/S3Buckets/index.ts, apps/web-cluster/src/shard-manager.ts, packages/web-domain/src/Database.ts, packages/web-backend/src/Workflows.ts, packages/web-domain/src/index.ts, packages/database/types/index.ts, apps/web-cluster/scripts/post-deploy.ts, infra/sst.config.ts, apps/web/instrumentation.ts, packages/web-domain/tsdown.config.ts, apps/web-cluster/src/runner/health-server.ts, packages/web-domain/src/Http/Api.ts, apps/web/app/api/[[...route]]/route.ts, packages/env/build.ts, packages/web-domain/src/Http/Errors.ts, packages/database/tsdown.config.ts, apps/web/lib/EffectRuntime.ts, packages/web-backend/src/Database.ts, packages/web-domain/src/Video.ts, packages/web-backend/src/Folders/index.ts, packages/web-backend/src/Auth.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-domain/scripts/generate-openapi.ts, packages/web-backend/src/S3Buckets/S3BucketAccess.ts, apps/web-cluster/src/shared/database.ts, infra/sst-env.d.ts, apps/web/lib/server.ts, apps/web-cluster/src/runner/index.ts, packages/web-domain/src/Loom.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., `user-menu.tsx`).
Use PascalCase for React/Solid components.
Files:
apps/web-cluster/src/health-check.ts, packages/utils/tsdown.config.ts, packages/web-backend/src/Loom/index.ts, packages/web-backend/tsdown.config.ts, packages/env/tsdown.config.ts, packages/web-domain/src/Policy.ts, packages/web-domain/src/Http/index.ts, packages/web-backend/src/Loom/Http.ts, packages/web-backend/src/Http/Errors.ts, packages/web-backend/src/index.ts, apps/web/lib/tracing.ts, packages/web-backend/src/Http/Live.ts, packages/web-backend/src/Spaces/SpacesRepo.ts, packages/web-domain/src/S3Bucket.ts, apps/web-cluster/src/cluster/container-metadata.ts, packages/web-backend/src/S3Buckets/index.ts, apps/web-cluster/src/shard-manager.ts, packages/web-domain/src/Database.ts, packages/web-backend/src/Workflows.ts, packages/web-domain/src/index.ts, packages/database/types/index.ts, apps/web-cluster/scripts/post-deploy.ts, infra/sst.config.ts, apps/web/instrumentation.ts, packages/web-domain/tsdown.config.ts, apps/web-cluster/src/runner/health-server.ts, packages/web-domain/src/Http/Api.ts, apps/web/app/api/[[...route]]/route.ts, packages/env/build.ts, packages/web-domain/src/Http/Errors.ts, packages/database/tsdown.config.ts, apps/web/lib/EffectRuntime.ts, packages/web-backend/src/Database.ts, packages/web-domain/src/Video.ts, packages/web-backend/src/Folders/index.ts, packages/web-backend/src/Auth.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-domain/scripts/generate-openapi.ts, packages/web-backend/src/S3Buckets/S3BucketAccess.ts, apps/web-cluster/src/shared/database.ts, infra/sst-env.d.ts, apps/web/lib/server.ts, apps/web-cluster/src/runner/index.ts, packages/web-domain/src/Loom.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and data fetching in the web app
Web mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData rather than broad invalidations
Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components
Files:
apps/web/lib/tracing.ts, apps/web/instrumentation.ts, apps/web/app/api/[[...route]]/route.ts, apps/web/lib/EffectRuntime.ts, apps/web/lib/server.ts
apps/web/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
On the client, always use `useEffectQuery` or `useEffectMutation` from `@/lib/EffectRuntime`; never call `EffectRuntime.run*` directly in components.
Files:
apps/web/lib/tracing.ts, apps/web/instrumentation.ts, apps/web/app/api/[[...route]]/route.ts, apps/web/lib/EffectRuntime.ts, apps/web/lib/server.ts
apps/web/app/api/**/route.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/app/api/**/route.ts: Place API routes only under apps/web/app/api and implement each route in a route.ts file
Construct API routes with @effect/platform HttpApi/HttpApiBuilder and export only the handler from apiToHandler(ApiLive)
Map domain errors to transport errors with HttpApiError.* and keep error translation exhaustive
Use HttpAuthMiddleware for required auth and provideOptionalAuth for guest routes; avoid duplicate session lookups
Provide dependencies via Layer.provide in API routes instead of manual provideService calls
Files:
apps/web/app/api/[[...route]]/route.ts
apps/web/app/**/*.{tsx,ts}
📄 CodeRabbit inference engine (CLAUDE.md)
Prefer Server Components for initial data in the Next.js App Router and pass initialData to client components
Files:
apps/web/app/api/[[...route]]/route.ts
🧠 Learnings (6)
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Map domain errors to transport errors with HttpApiError.* and keep error translation exhaustive
Applied to files:
packages/web-backend/src/Http/Errors.ts
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Construct API routes with effect/platform HttpApi/HttpApiBuilder and export only the handler from apiToHandler(ApiLive)
Applied to files:
packages/web-backend/src/Http/Live.tsapps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/app/api/**/route.ts : Place API routes only under apps/web/app/api and implement each route in a route.ts file
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:19:56.010Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:56.010Z
Learning: Applies to apps/web/app/api/* : Convert effectful APIs to Next.js handlers using `apiToHandler(ApiLive)` and export only the returned `handler`; avoid calling `runPromise` directly in route files.
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:19:56.010Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:56.010Z
Learning: Applies to apps/web/app/api/* : API routes in Next.js (`apps/web/app/api/*`) must use `effect/platform`'s `HttpApi` builder and follow the existing class/group/endpoint pattern; avoid ad-hoc handlers.
Applied to files:
apps/web/app/api/[[...route]]/route.ts
📚 Learning: 2025-09-22T14:19:56.010Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:56.010Z
Learning: Applies to apps/web/app/api/* : Acquire backend services inside `Effect.gen` and wire them using `Layer.provide`/`HttpApiBuilder.group`, translating errors to `HttpApiError` as appropriate.
Applied to files:
apps/web/app/api/[[...route]]/route.ts
🧬 Code graph analysis (16)
packages/web-backend/src/Loom/Http.ts (4)
packages/web-backend/src/Workflows.ts (1)
Workflows(9-9)packages/web-domain/src/Authentication.ts (1)
CurrentUser(7-10)packages/web-domain/src/Policy.ts (1)
Policy(8-12)packages/web-backend/src/Http/Errors.ts (1)
handleDomainError(4-14)
packages/web-backend/src/Http/Errors.ts (3)
packages/web-domain/src/Database.ts (1)
DatabaseError(3-6)packages/web-domain/src/S3Bucket.ts (1)
S3Error(20-22)packages/web-domain/src/index.ts (1)
S3Error(11-11)
packages/web-backend/src/Http/Live.ts (2)
packages/web-backend/src/index.ts (1)
HttpLive(4-4)packages/web-backend/src/Loom/Http.ts (1)
LoomHttpLive(9-40)
packages/web-backend/src/Spaces/SpacesRepo.ts (1)
packages/database/index.ts (1)
db(29-34)
packages/web-domain/src/S3Bucket.ts (1)
packages/web-domain/src/index.ts (1)
S3Error(11-11)
apps/web-cluster/src/shard-manager.ts (1)
apps/web-cluster/src/shared/database.ts (2)
ShardDatabaseLive(12-29)DatabaseLive(4-10)
packages/web-domain/src/index.ts (1)
packages/web-domain/src/S3Bucket.ts (1)
S3Bucket(6-14)
apps/web/instrumentation.ts (1)
apps/web/instrumentation.node.ts (1)
register(16-41)
packages/web-domain/src/Http/Api.ts (1)
packages/web-domain/src/Loom.ts (1)
LoomHttpApi(67-90)
apps/web/app/api/[[...route]]/route.ts (3)
apps/web/lib/server.ts (1)
apiToHandler(117-136)packages/web-backend/src/Http/Live.ts (1)
HttpLive(7-9)packages/web-backend/src/index.ts (1)
HttpLive(4-4)
apps/web/lib/EffectRuntime.ts (1)
apps/web/lib/Rpcs.ts (1)
Rpc(11-14)
packages/web-backend/src/Loom/ImportVideo.ts (5)
packages/web-backend/src/S3Buckets/index.ts (1)
S3Buckets(13-204)packages/web-domain/src/Loom.ts (2)
VideoURLInvalidError(23-27)ExternalLoomError(17-21)packages/web-domain/src/Database.ts (1)
DatabaseError(3-6)packages/web-domain/src/Video.ts (1)
Video(14-59)packages/web-domain/src/S3Bucket.ts (2)
S3Bucket(6-14)S3Error(20-22)
packages/web-domain/scripts/generate-openapi.ts (1)
packages/web-domain/src/Http/Api.ts (1)
ApiContract(4-12)
apps/web/lib/server.ts (2)
packages/web-backend/src/Workflows.ts (2)
Workflows(9-9)RpcClient(13-19)apps/web/lib/tracing.ts (1)
layerTracer(5-32)
apps/web-cluster/src/runner/index.ts (3)
apps/web-cluster/src/cluster/container-metadata.ts (1)
ContainerMetadata(38-66)apps/web-cluster/src/shared/database.ts (2)
ShardDatabaseLive(12-29)DatabaseLive(4-10)apps/web-cluster/src/runner/health-server.ts (1)
HealthServerLive(22-22)
packages/web-domain/src/Loom.ts (6)
packages/web-domain/src/Video.ts (1)
Video(14-59)packages/web-domain/src/Database.ts (1)
DatabaseError(3-6)packages/web-domain/src/S3Bucket.ts (1)
S3Error(20-22)packages/web-domain/src/Http/Errors.ts (1)
InternalServerError(4-8)packages/web-domain/src/Policy.ts (1)
PolicyDeniedError(20-24)packages/web-domain/src/Authentication.ts (1)
HttpAuthMiddleware(12-22)
🪛 Gitleaks (8.28.0)
infra/sst.config.ts
[high] 11-11: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (rust)
🔇 Additional comments (34)
packages/config/base.tsconfig.json (1)
17-18: Nice call on incremental caching.
`incremental: true` pairs well with our `noEmit` type-check configs and should trim down rebuild times across the monorepo. 👍

apps/web/next.config.mjs (2)
20-21: LGTM! Transpilation additions align with infrastructure changes.

The additions of `@cap/env` and `@cap/database` to `transpilePackages` are appropriate and consistent with the broader repository changes to environment and database tooling mentioned in the PR.
34-37: Clarify disabled optimizePackageImports entries
`@cap/web-api-contract` and `@cap/web-domain` are still in `transpilePackages` but have been commented out of `optimizePackageImports` without any in-code explanation, and no related issues were found in their repos. Please confirm if this change is intentional, provide the rationale, and document it. Also, clarify whether the commented `@cap/web-backend` line is a future placeholder or should be removed.

packages/database/types/index.ts (1)
3-3: Type-only import is safe
TypeScript ^5.8.3 (≥5.0) supports module augmentation via type-only imports, and `next-auth.d.ts` contains only ambient declarations (no runtime code), so augmentations are still applied and no runtime side-effects are removed.

packages/web-backend/src/S3Buckets/index.ts (3)
89-89: LGTM!

Adding `bucketUrl` to the CloudFront configuration enables flexible bucket URL configuration, improving deployment flexibility.

104-104: LGTM!

Using the configured `bucketUrl` instead of hardcoded values improves flexibility and correctly integrates the new configuration into CloudFront signed URL generation.

27-44: Verify credentials fallback and error masking

- If both CAP_AWS_ACCESS_KEY/CAP_AWS_SECRET_KEY and VERCEL_AWS_ROLE_ARN are unset, `credentials` will be `undefined` and the AWS SDK’s default chain will be used—confirm this matches your auth strategy.
- `getBucketAccessForUser` now converts all failures to `Option.none()`, potentially swallowing database errors—ensure you’re comfortable with silent error handling here.

packages/utils/package.json (1)
7-11: LGTM!

The build script and publishConfig setup correctly separates development (using source files) from published artifacts (using built files). This aligns with the monorepo’s adoption of tsdown for build tooling.
packages/env/package.json (1)
7-12: LGTM!

The build script and publishConfig additions follow the same pattern as other packages adopting tsdown. Since this is a private package, the publishConfig is for internal monorepo publishing only.
apps/web/package.json (1)
48-48: LGTM!

The OpenTelemetry dependency additions align with the tracing integration in `apps/web/instrumentation.ts`. The versions are compatible with the existing OpenTelemetry SDK packages (2.0.1 for SDK, 1.9.0 for stable API).

Also applies to: 74-74
packages/web-domain/src/Video.ts (1)
1-1: LGTM!

The addition of `HttpApiSchema.annotations({ status: 404 })` to `NotFoundError` follows the standard pattern for attaching HTTP status metadata to Effect error types. This enables automatic HTTP status code handling in the HTTP API layer.

Also applies to: 138-139
apps/web/lib/EffectRuntime.ts (1)
11-15: Confirm removal of client-side tracing is intentional

Commenting out the TracingLayer in apps/web/lib/EffectRuntime.ts (lines 11–15) disables OpenTelemetry client-side tracing while server-side tracing remains in apps/web/instrumentation.ts; please confirm whether we should preserve client tracing or document its removal.
package.json (2)
33-33: LGTM: tsdown adoption.

The addition of tsdown as a build tool aligns with the broader changes across the monorepo where multiple packages are adopting tsdown-based builds.
40-40: Approve Node.js engine bump to 24.
Node.js v24.9.0 is available as of October 2025; the `engine` update to `"24"` is valid.

packages/web-domain/src/Policy.ts (1)
3-3: LGTM: HTTP status annotation.

Adding the HTTP 403 status annotation to `PolicyDeniedError` properly integrates this error type with the HTTP API layer and OpenAPI metadata generation. This is consistent with the broader HTTP API scaffolding being added across the domain.

Also applies to: 23-23
packages/env/tsdown.config.ts (1)
1-8: Verify dts: false is intentional.

The configuration disables TypeScript declaration file generation (`dts: false`). For a shared package like `@cap/env` that other packages depend on, you may want to enable declaration files to provide type information to consumers.

Is this package intended to be consumed only at runtime, or should it provide type information to other packages? If the latter, consider:

```diff
 export default defineConfig({
   entry: ["./index.ts"],
   platform: "node",
   treeshake: true,
-  dts: false,
+  dts: true,
 });
```

packages/web-domain/package.json (1)
7-9: LGTM: Publishing and dependency configuration.

The `publishConfig` correctly points to the built artifacts while maintaining the source path for local development. The `@effect/platform-node` version (0.96.1) is aligned with the Effect ecosystem packages used throughout the monorepo.

Based on learnings.
Also applies to: 19-22
apps/web-cluster/src/shard-manager.ts (1)
1-17: LGTM: Shard manager wiring.

The shard manager entrypoint is cleanly wired using Effect’s layered architecture:
- SQL-backed storage configuration
- Proper layer composition (database layers → logger → shard manager)
- Runtime launch via `NodeRuntime.runMain`

The implementation follows Effect best practices and integrates well with the shared database layers.
packages/web-domain/src/S3Bucket.ts (1)
16-16: Confirm and clean up S3Bucket workflow removal
The `Workflows` export in `packages/web-domain/src/S3Bucket.ts` is now an empty array, yet `apps/web-cluster/src/runner/index.ts` still imports and registers it. Ensure all S3Bucket workflows have been migrated or deprecated, then either re-add the necessary workflow definitions or remove the `Workflows` import and its registration in the runner.
1-2: LGTM!

Clean barrel export pattern consolidating Http namespace exports.
packages/web-backend/src/Workflows.ts (2)
1-1: LGTM!

Correctly imports the Loom namespace from the domain package for centralized workflow definitions.
7-9: LGTM!

The refactoring correctly updates the workflow reference from the local `LoomImportVideo` to the domain-package `Loom.ImportVideo`, maintaining consistency across the codebase.

packages/web-backend/src/Loom/index.ts (1)
1-1: LGTM!

Correctly narrows the public API surface by removing `LoomImportVideo` and retaining only `LoomImportVideoLive`, consistent with the domain-package workflow refactoring.

packages/web-backend/src/Folders/index.ts (1)
3-8: LGTM!

Correctly uses a type-only import for `DatabaseError` from the centralized domain package, maintaining clean runtime boundaries while sharing types.

apps/web-cluster/tsconfig.json (1)
9-10: Bundling pipeline emits JS via tsdown & rolldown
The `build` script (`pnpm run --filter=@cap/web-cluster^... build`) compiles TS to JS via tsdown across all dependencies, and the `deploy` script bundles it with rolldown into `out` for the Deno runner.
7-12: LGTM!

The `publishConfig` and build script additions align with the new tsdown-based build workflow being adopted across the monorepo. The configuration correctly points to the compiled output at `./dist/index.js`.

packages/web-domain/scripts/generate-openapi.ts (1)
1-16: LGTM!

The OpenAPI generation script is cleanly implemented using Effect patterns. It correctly uses `OpenApi.fromApi` to generate the spec and writes formatted JSON output to the specified file.

packages/web-backend/src/Database.ts (1)
2-2: LGTM!

Centralizing `DatabaseError` in the `@cap/web-domain` package improves code organization and allows the error type to be shared across the backend and domain layers.

packages/web-domain/src/index.ts (1)
2-11: LGTM!

The new exports appropriately expose the Database, Http, Loom, and S3Error modules as part of the public API surface. The exports are well-organized and consistent with the existing export patterns.
packages/web-backend/src/Http/Live.ts (1)
1-9: LGTM!

The HTTP API composition follows Effect platform patterns correctly. The use of `HttpApiBuilder.api` with `Layer.provide` properly wires the `LoomHttpLive` layer into the API contract.

Based on learnings: This aligns with the recommended pattern for constructing API routes with Effect platform HttpApi/HttpApiBuilder.
apps/web-cluster/package.json (1)
19-19: `@effect/workflow` version is valid – `^0.10.0` is published on npm (latest 0.11.3); installation will succeed.

apps/web-cluster/scripts/post-deploy.ts (1)
5-23: Validate post-deploy package.json mutations
- The monorepo defines `publishConfig` in multiple packages (web-backend, web-domain, utils, tsconfig, database, env) but `pnpm deploy` doesn’t apply these overrides by default.
- Confirm whether build-time handling via pnpm’s config or your build tool could eliminate the need for mutating
`node_modules`.
- Ensure the modified `package.json` files under `node_modules/.pnpm` persist in the final Docker image and produce valid entry points for Deno.
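To make that last point concrete, the mutation the post-deploy step performs is essentially what `npm publish` does with `publishConfig`: hoist its overrides into the top-level fields. A minimal Node sketch (hypothetical helper name; the real logic lives in `apps/web-cluster/scripts/post-deploy.ts` and uses Effect's FileSystem):

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Hoist publishConfig overrides (e.g. main/exports pointing at dist)
// into the top-level package.json fields, mimicking `npm publish`.
export function applyPublishConfig(pkgJsonPath: string): void {
  const pkg = JSON.parse(readFileSync(pkgJsonPath, "utf8")) as Record<
    string,
    unknown
  >;
  const overrides = (pkg.publishConfig ?? {}) as Record<string, unknown>;
  for (const [key, value] of Object.entries(overrides)) {
    pkg[key] = value;
  }
  delete pkg.publishConfig;
  writeFileSync(pkgJsonPath, JSON.stringify(pkg, null, 2));
}
```

If the mutated files reliably survive into the final image, this approach is equivalent to per-package `prepack` handling without adding a publish step.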
package.jsonfiles undernode_modules/.pnpmpersist in the final Docker image and produce valid entry points for Deno.apps/web/lib/tracing.ts (2)
18-30: Layer composition for tracer looks sound

Merges provider, tracer, and resource, then sets the tracer. Looks correct.
7-9: Ensure a real OpenTelemetry TracerProvider is registered

This call uses the global provider; if you haven’t initialized NodeSDK (or another SDK) at startup, all tracing is no-op. Confirm your application entrypoint registers an Otel SDK/TracerProvider before any instrumentation.
```dockerfile
RUN pnpm run --filter=@cap/web-cluster build
RUN pnpm deploy --filter=@cap/web-cluster out
RUN cd out && node scripts/post-deploy.ts
```
🧩 Analysis chain
Post-deploy script path inconsistency.
Line 15 runs node scripts/post-deploy.ts, but this is a TypeScript file. The script should either be:
- Built to JavaScript first (e.g., scripts/post-deploy.js)
- Run with a TypeScript runtime (e.g., tsx or deno)
Based on the file at apps/web-cluster/scripts/post-deploy.ts, it appears the build process should transpile this to JavaScript.
Verify the build output structure:
🏁 Script executed:
#!/bin/bash
# Description: Check if post-deploy script is being built
# Look for tsconfig or build config that handles scripts
fd -e json "tsconfig" apps/web-cluster -x cat {}
# Check if the out directory structure includes built scripts
rg -n "scripts" apps/web-cluster/package.json -A 3

Length of output: 697
🏁 Script executed:
#!/bin/bash
# Show Dockerfile header and context around the `out` directory usage
sed -n '1,50p' apps/web-cluster/Dockerfile
rg -nC2 'out' apps/web-cluster/Dockerfile
# Inspect base TypeScript config to confirm emit settings
fd base.tsconfig.json -x cat {}

Length of output: 1387
🏁 Script executed:
sed -n '1,200p' apps/web-cluster/package.json

Length of output: 1074
🏁 Script executed:
#!/bin/bash
# List scripts directory
ls -R apps/web-cluster/scripts
# Inspect post-deploy.ts
sed -n '1,200p' apps/web-cluster/scripts/post-deploy.ts

Length of output: 873
🏁 Script executed:
#!/bin/bash
# Show root package.json to locate deploy script
sed -n '1,200p' package.json

Length of output: 1956
Fix post-deploy script and tighten Deno permissions
- Dockerfile line 15 runs node scripts/post-deploy.ts, but tsconfig is set to emit only declarations (no .js). Either emit a JS build for scripts/post-deploy.js or invoke it with a TS-aware runner (e.g. npx tsx scripts/post-deploy.ts).
- Dockerfile line 24's ENTRYPOINT ["deno","run","--allow-all"] is overly permissive; restrict to only the flags your app needs (e.g. --allow-read, --allow-net).
🤖 Prompt for AI Agents
In apps/web-cluster/Dockerfile around lines 15 and 24, line 15 runs "node
scripts/post-deploy.ts" but your tsconfig emits only declarations so there is no
.js to run; either change the build to emit a JS file and run that (e.g., build
scripts to out/scripts/post-deploy.js and run node out/scripts/post-deploy.js)
or invoke the TS directly with a ts-aware runner (e.g., replace the command with
npx tsx scripts/post-deploy.ts). Also tighten the Deno ENTRYPOINT at line 24 by
replacing the overly-permissive --allow-all with only the necessary flags (for
example --allow-read and --allow-net or any other minimal permissions your app
requires).
@coderabbitai node v24 is able to run ts files
🧩 Analysis chain
🌐 Web query:
Does Node.js version 24 support running TypeScript files directly?
💡 Result:
Yes. Node.js v24 includes built-in TypeScript support (type‑stripping) so you can run .ts files directly for "erasable" TypeScript syntax (type annotations). Features that require JS code generation (enums, namespaces, etc.) still need transformation (flags or a runner/third‑party tool). See Node docs for details and limitations. [1][2][3]
Sources
[1] Node.js — Running TypeScript Natively.
[2] Node.js API: Modules — TypeScript (v24 docs).
[3] Node.js 23.6 Now Runs TypeScript Natively (news coverage).
@Brendonovich, you're absolutely right! Node.js v24 includes built-in TypeScript support through type-stripping, so it can run .ts files directly. [1][2] Thank you for the correction—I've noted this for future reviews.
Sources:
[1] Node.js — Running TypeScript Natively
[2] Node.js API: Modules — TypeScript (v24 docs)
[3] Node.js 23.6 Now Runs TypeScript Natively
✏️ Learnings added
Learnt from: Brendonovich
PR: CapSoftware/Cap#1165
File: apps/web-cluster/Dockerfile:15-15
Timestamp: 2025-10-10T02:06:05.419Z
Learning: Node.js version 24 and later includes built-in TypeScript support (type-stripping) and can run .ts files directly without requiring a separate TypeScript runtime or transpilation step for basic type annotations. The command `node script.ts` works natively in Node.js v24+.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
USER deno

ENTRYPOINT ["deno", "run", "--allow-all"]
Restrict Deno permissions in production.
The ENTRYPOINT uses --allow-all, which grants all permissions to the Deno runtime. This is a security risk in production. Specify only the minimum required permissions.
Apply this approach to restrict permissions:
-ENTRYPOINT ["deno", "run", "--allow-all"]
+ENTRYPOINT ["deno", "run", "--allow-env", "--allow-net", "--allow-read=/app", "--allow-write=/tmp"]

Adjust permissions based on the actual runtime requirements of your application. Review what the application needs access to and grant only those permissions.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
ENTRYPOINT ["deno", "run", "--allow-all"]

ENTRYPOINT ["deno", "run", "--allow-env", "--allow-net", "--allow-read=/app", "--allow-write=/tmp"]
🤖 Prompt for AI Agents
In apps/web-cluster/Dockerfile around line 24, the ENTRYPOINT uses "deno run
--allow-all" which grants excessive permissions; replace --allow-all with a
minimal set of explicit Deno flags required at runtime (for example only
--allow-net with specific hosts/ports, --allow-read limited to necessary
directories, --allow-env for needed env vars, and any needed --allow-write or
--allow-run scoped to specific paths) and update the ENTRYPOINT to use those
explicit flags; also document the chosen permissions in the Dockerfile and
CI/deployment notes and avoid wildcard network or filesystem grants.
"name": "@cap/web-cluster",
"type": "module",
"scripts": {
	"dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-all --watch ./src/runner/index.ts\" \"deno run --allow-all --watch ./src/shard-manager.ts\"",
Avoid --allow-all in development scripts.
The --allow-all flag grants all permissions to Deno, which is overly permissive even in development. Specify only the required permissions (e.g., --allow-env, --allow-net, --allow-read, --allow-write).
Consider applying this approach:
- "dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-all --watch ./src/runner/index.ts\" \"deno run --allow-all --watch ./src/shard-manager.ts\"",
+ "dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/runner/index.ts\" \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/shard-manager.ts\"",

Adjust permissions based on actual requirements.
📝 Committable suggestion
"dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-all --watch ./src/runner/index.ts\" \"deno run --allow-all --watch ./src/shard-manager.ts\"",

"dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/runner/index.ts\" \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/shard-manager.ts\"",
🤖 Prompt for AI Agents
In apps/web-cluster/package.json around line 5, the dev script uses Deno's
--allow-all which is too permissive; replace --allow-all with the minimal
required permissions (for example --allow-env --allow-net --allow-read
--allow-write) for each deno run command, audit the runner and shard-manager to
identify only the needed permissions and add them explicitly to the script, and
verify the commands still work locally after tightening permissions.
const pkgName = key.split("@file")[0].replace("+", "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;
Fragile path parsing logic.
The string manipulation assumes a specific format (@file delimiter, + separator) without validation. If the pnpm store format changes or package names contain unexpected characters, this will silently fail or produce incorrect paths.
Apply this diff to add validation:
for (const key of capDeps) {
- const pkgName = key.split("@file")[0].replace("+", "/");
+ if (!key.includes("@file")) {
+ console.warn(`Skipping unexpected format: ${key}`);
+ continue;
+ }
+ const pkgName = key.split("@file")[0].replace(/\+/g, "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;

📝 Committable suggestion
const pkgName = key.split("@file")[0].replace("+", "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;

for (const key of capDeps) {
	if (!key.includes("@file")) {
		console.warn(`Skipping unexpected format: ${key}`);
		continue;
	}
	const pkgName = key.split("@file")[0].replace(/\+/g, "/");
	const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;
const pkgName = key.split("@file")[0].replace("+", "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;

let pkgJson = JSON.parse(yield* fs.readFileString(pkgJsonPath));
🛠️ Refactor suggestion | 🟠 Major
Use Effect-based JSON parsing for proper error handling.
JSON.parse can throw synchronous exceptions, which bypasses Effect's error handling. Use Effect.try or parse within the existing Effect.tryPromise wrapper.
Apply this diff:
- let pkgJson = JSON.parse(yield* fs.readFileString(pkgJsonPath));
+ const contents = yield* fs.readFileString(pkgJsonPath);
+ let pkgJson = yield* Effect.try({
+   try: () => JSON.parse(contents),
+   catch: (error) => new Error(`Failed to parse ${pkgJsonPath}: ${error}`)
+ });

(Note: yield* is not valid inside the try callback, so the file read has to happen before Effect.try.)

Or use Schema.parseJson from @effect/schema for typed validation.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/web-cluster/scripts/post-deploy.ts around line 16, JSON.parse is called
directly on the file contents which can throw synchronously and escape Effect's
error handling; wrap the parse in an Effect-aware construct (e.g., use
Effect.try or perform the parsing inside the existing Effect.tryPromise wrapper)
so parse errors are captured as Effect failures, or alternatively replace with
Schema.parseJson from @effect/schema to validate/parse the JSON within Effect;
update the surrounding Effect pipeline to return or map the resulting error type
instead of letting exceptions escape.
error: Schema.Union(S3Error, DatabaseError, Loom.ExternalLoomError),
execute: Effect.gen(function* () {
Minor: Narrow declared errors to what the step can actually produce
DatabaseError isn’t produced in this step. Keeping the union minimal improves clarity.
- error: Schema.Union(S3Error, DatabaseError, Loom.ExternalLoomError),
+ error: Schema.Union(S3Error, Loom.ExternalLoomError),

📝 Committable suggestion
error: Schema.Union(S3Error, DatabaseError, Loom.ExternalLoomError),
execute: Effect.gen(function* () {

error: Schema.Union(S3Error, Loom.ExternalLoomError),
execute: Effect.gen(function* () {
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 106 to 107, the
step's declared error union includes DatabaseError which this step cannot
produce; remove DatabaseError from the Schema.Union so the error type only
includes S3Error and Loom.ExternalLoomError, and update any imports/usages
accordingly (remove unused DatabaseError import and adjust callers/type
annotations if necessary).
yield* Effect.log(payload.loom.video.downloadUrl);
const resp = yield* http
Avoid logging full external download URL
Download URLs can embed tokens/PII. Log host or redact sensitive parts.
- yield* Effect.log(payload.loom.video.downloadUrl);
+ const u = new URL(payload.loom.video.downloadUrl);
+ yield* Effect.log(`Downloading from host ${u.host}`);

📝 Committable suggestion
yield* Effect.log(payload.loom.video.downloadUrl);
const resp = yield* http

const u = new URL(payload.loom.video.downloadUrl);
yield* Effect.log(`Downloading from host ${u.host}`);
const resp = yield* http
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 110-111, the code
logs the full external download URL which may contain tokens/PII; instead, parse
the URL and log only non-sensitive metadata such as the hostname and path
summary (or the pathname with query string removed), or redact the query and
token portions before logging; update the log call to build a safe string (e.g.,
host and redacted path or "[REDACTED_QUERY]") and replace the current full-URL
log with that sanitized value.
Effect.catchAll((cause) => new Loom.ExternalLoomError({ cause })),
);
Bug: catchAll returns a value instead of failing the effect
This turns errors into successes. Wrap in Effect.fail to preserve the declared error type.
- .pipe(
- Effect.catchAll((cause) => new Loom.ExternalLoomError({ cause })),
- );
+ .pipe(
+ Effect.catchAll((cause) =>
+ Effect.fail(new Loom.ExternalLoomError({ cause })),
+ ),
+ );

📝 Committable suggestion
Effect.catchAll((cause) => new Loom.ExternalLoomError({ cause })),
);

.pipe(
	Effect.catchAll((cause) =>
		Effect.fail(new Loom.ExternalLoomError({ cause })),
	),
);
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 114-115, the
Effect.catchAll handler currently returns a value (new Loom.ExternalLoomError({
cause })) which converts failures into successes; instead, wrap that error in
Effect.fail so the effect remains failed with the declared error type (i.e.,
replace the return with Effect.fail(new Loom.ExternalLoomError({ cause })) so
the cause is preserved and the effect fails properly).
return yield* repo
	.getForUser(userId)
	.pipe(Effect.option, Effect.map(Option.flatten));

return yield* getBucketAccess(customBucket);
.pipe(
	Effect.option,
	Effect.map(Option.flatten),
	Effect.flatMap(getBucketAccess),
);
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify that silently converting database errors to None is intended.
Effect.option converts all Effect failures (including database connectivity errors, query failures, etc.) to Option.none(), which is then treated the same as "user has no custom bucket." This masks operational issues and makes debugging harder.
If the intent is to gracefully fall back to the default bucket when a user-specific bucket is unavailable, consider handling only "not found" cases explicitly while propagating genuine errors.
Run this script to verify error handling patterns in similar repository methods:
🏁 Script executed:
#!/bin/bash
# Check how other repo methods handle errors
ast-grep --pattern $'class $_ {
$$$
getForUser($$$) {
$$$
}
$$$
}'
# Search for other uses of Effect.option to understand error handling patterns
rg -n 'Effect\.option' -g '*.ts' -C 3

Length of output: 504
Do not swallow all failures via Effect.option: In packages/web-backend/src/S3Buckets/index.ts (185–191), Effect.option converts every error—including DB/connectivity issues—to Option.none(). Replace it with logic that maps only “not found” to None and propagates genuine errors.
🤖 Prompt for AI Agents
In packages/web-backend/src/S3Buckets/index.ts around lines 185–191, the code
uses Effect.option which swallows all errors (including DB/connectivity
failures) and turns them into Option.none; change this so only a genuine “not
found” result becomes None while all other errors propagate. Replace
Effect.option with an effect transformation that maps the repository call's
specific not-found error to Option.none (e.g., catch or catchSome that matches
the repo's NotFound/NoSuchEntity error and returns Option.none) and
rethrows/propagates any other errors; then continue with
Effect.map(Option.flatten) and Effect.flatMap(getBucketAccess) as before.
"scripts": {
	"build": "tsdown",
	"generate-openapi": "node scripts/generate-openapi.ts"
},
🧩 Analysis chain
Verify generate-openapi script execution.
The generate-openapi script runs a TypeScript file directly with node:
"generate-openapi": "node scripts/generate-openapi.ts"

Modern Node.js (v22.6+) supports --experimental-strip-types for TypeScript, but this requires the flag or a loader. Without it, running .ts files directly will fail.
Please verify the script works or update it to use a TypeScript runner:
Consider using tsx if not already configured:
-"generate-openapi": "node scripts/generate-openapi.ts"
+"generate-openapi": "tsx scripts/generate-openapi.ts"

🏁 Script executed:
#!/bin/bash
# Check if generate-openapi.ts script exists and how it should be executed
fd -t f "generate-openapi.ts" packages/web-domain

Length of output: 47
🏁 Script executed:
#!/bin/bash
head -n 20 packages/web-domain/scripts/generate-openapi.ts
grep -R '"tsx"' -n packages/web-domain/package.json
grep -R 'ts-node' -n packages/web-domain/package.json

Length of output: 489
Use a TypeScript‐aware runner for generate-openapi
The script will fail under plain node since there’s no loader or shebang. In packages/web-domain/package.json:
"scripts": {
- "generate-openapi": "node scripts/generate-openapi.ts"
+ "generate-openapi": "tsx scripts/generate-openapi.ts"
}

And add tsx to devDependencies.
📝 Committable suggestion
"scripts": {
	"build": "tsdown",
	"generate-openapi": "node scripts/generate-openapi.ts"
},

"scripts": {
	"build": "tsdown",
	"generate-openapi": "tsx scripts/generate-openapi.ts"
},
🤖 Prompt for AI Agents
In packages/web-domain/package.json around lines 10 to 13, the
"generate-openapi" script uses plain node to run a TypeScript file which will
fail at runtime; change the script to use a TypeScript-aware runner (e.g.,
"generate-openapi": "tsx scripts/generate-openapi.ts") and add "tsx" to
devDependencies (install as a devDependency and update package.json) so the TS
file runs correctly.
Actionable comments posted: 5
♻️ Duplicate comments (17)
packages/env/build.ts (1)
6-7: Unresolved critical issue: TypeScript strict mode error.

This is the same issue flagged in the previous review. The variable _env is typed as ReturnType<typeof create> (non-nullable) but is read by the proxy before assignment, causing a TypeScript strict mode error.

The previous review provided the correct fix—introduce a type alias and allow _env to be undefined:

-let _env: ReturnType<typeof create>;
-
 const create = () =>
 	createEnv({
 		client: {
 			NEXT_PUBLIC_IS_CAP: z.string().optional(),
 			NEXT_PUBLIC_POSTHOG_KEY: z.string().optional(),
 			NEXT_PUBLIC_POSTHOG_HOST: z.string().optional(),
@@ -35,11 +33,15 @@
 			NEXT_PUBLIC_CAP_AWS_ENDPOINT: process.env.CAP_AWS_ENDPOINT ?? process.env.NEXT_PUBLIC_CAP_AWS_ENDPOINT,
 			NEXT_PUBLIC_CAP_AWS_BUCKET_URL: process.env.CAP_AWS_URL ?? process.env.NEXT_PUBLIC_CAP_AWS_BUCKET_URL,
 			NEXT_PUBLIC_DOCKER_BUILD: process.env.NEXT_PUBLIC_DOCKER_BUILD,
 		},
 	});
+
+type Env = ReturnType<typeof create>;
+
+let _env: Env | undefined;
+
 // Environment variables that are needed in the build process, and may be incorrect on the client.
 // Some are only provided by `NEXT_PUBLIC`, but others can be provdied at runtime
-export const buildEnv = new Proxy({} as typeof _env, {
+export const buildEnv = new Proxy({} as Env, {
 	get(_, key) {
 		if (!_env) _env = create();
 		return (_env as any)[key];
 	},
 });
186-192: Effect.option still swallows all failures (duplicate concern).This code segment uses
186-192: Effect.option still swallows all failures (duplicate concern).

This code segment uses Effect.option to convert all errors—including database connectivity failures, query errors, and other operational issues—to Option.none(). This was flagged in a previous review as a major issue because it masks genuine errors and makes debugging nearly impossible.

The recommendation remains: replace Effect.option with logic that maps only "not found" errors to None while propagating all other errors.
19-25: Cyclic dependency in onNone branch causes runtime failure.

The onNone branch attempts to read MysqlClient.MysqlClient via yield* while constructing the layer that provides it, creating a cycle. When SHARD_DATABASE_URL is unset, this fails at runtime with "Missing service: MysqlClient". Use Layer.context<MysqlClient.MysqlClient>() to pass through the existing client from DatabaseLive instead.

Apply this diff:

 onNone: () =>
-	Effect.gen(function* () {
-		return Layer.succeed(
-			MysqlClient.MysqlClient,
-			yield* MysqlClient.MysqlClient,
-		);
-	}),
+	Effect.succeed(Layer.context<MysqlClient.MysqlClient>()),
63-75: Publish config still omits the new email exports

The workspace exports expose ./emails/config, ./emails/*, and ./types, but the publishConfig.exports block never maps them. When we publish, those compiled entry points will be missing, so downstream imports of the new email modules (and types) break. Please mirror every new export to its dist counterpart before releasing.

 "exports": {
 	".": "./dist/index.js",
 	"./auth/auth-options": "./dist/auth/auth-options.js",
 	"./auth/session": "./dist/auth/session.js",
 	"./schema": "./dist/schema.js",
 	"./crypto": "./dist/crypto.js",
-	"./helpers": "./dist/helpers.js"
+	"./helpers": "./dist/helpers.js",
+	"./emails/config": "./dist/emails/config.js",
+	"./emails/*": "./dist/emails/*.js",
+	"./types": "./dist/types/index.js"
 }

apps/web/lib/EffectRuntime.ts (1)
1-16: Remove the commented tracing stub and unused imports.

The project disallows commented-out code, and without TracingLayer, the WebSdk/getTracingConfig imports are now unused, tripping lint/typecheck. Please delete the comments and drop those imports (or rewire tracing if still required).

-import * as WebSdk from "@effect/opentelemetry/WebSdk";
 import { FetchHttpClient } from "@effect/platform";
 import { Layer, ManagedRuntime } from "effect";
@@
-import { getTracingConfig } from "./tracing";
-
-// const TracingLayer = WebSdk.layer(getTracingConfig);
-
 const RuntimeLayer = Layer.mergeAll(
 	Rpc.Default,
-	// TracingLayer,
 	FetchHttpClient.layer,
 );

packages/database/tsdown.config.ts (1)
5-5: Flatten exports before filtering.

Some entries are conditional objects, so calling .includes() throws and breaks the build. Please collect only string paths before filtering out "email" entries.

-	entry: Object.values(pkgJson.exports).filter((s) => !s.includes("email")),
+	entry: Object.values(pkgJson.exports)
+		.flatMap((value) =>
+			typeof value === "string"
+				? [value]
+				: value && typeof value === "object"
+					? Object.values(value).filter(
+							(candidate): candidate is string => typeof candidate === "string",
+						)
+					: [],
+		)
+		.filter((path) => !path.includes("email")),

packages/web-backend/src/Loom/Http.ts (2)
21-25: Verify: Workflow invoked with different orgId than policy check.

Line 23 passes user.activeOrganizationId to the workflow, but line 35 validates membership against payload.cap.orgId. If the user targets a different organization they're a member of, the workflow runs under the wrong org.

After fixing the policy check to use user.activeOrganizationId (per the previous comment), confirm whether the workflow should run under the user's active org or support targeting other orgs the user belongs to.
35-36: Critical: Authorization bypass via untrusted input.

Line 35 checks membership against payload.cap.orgId, which is untrusted client input. An attacker can forge this value to bypass authorization. Use the authenticated user.activeOrganizationId instead.

The static analysis error confirms payload.cap.orgId is string but requires a branded OrganisationId type, indicating this field shouldn't be used for authorization.

Apply this diff:

-	Policy.withPolicy(orgPolicy.isMember(payload.cap.orgId)),
+	Policy.withPolicy(orgPolicy.isMember(user.activeOrganizationId)),
4-17: Critical security issues remain unaddressed.

The previous review comments flagged two critical security vulnerabilities that are still present:

- Lines 4, 12: NEXT_PUBLIC_AXIOM_TOKEN exposes the authentication token to the browser (client-side). Use a server-only variable name like AXIOM_TOKEN.
- Lines 5-6: Logging the token in plaintext exposes credentials in application logs.
apps/web/package.json (1)
48-48: OpenTelemetry version compatibility issue remains.

The previous review identified that @opentelemetry/api@^1.9.0 conflicts with @vercel/otel@2.x peer dependencies (<1.9.0), causing ERESOLVE errors. Downgrade to a compatible 1.x version (e.g., ^1.8.0) or wait for @vercel/otel to update its peer ranges.

Also applies to: 74-74
packages/web-backend/src/Loom/ImportVideo.ts (4)
51-51: Use typed failure instead of Effect.die for RequestError.
Effect.die converts the typed RequestError into an unrecoverable defect, breaking the declared error surface and preventing proper retry/error handling.

Apply this diff:

-	Effect.catchTag("RequestError", Effect.die),
+	Effect.catchTag("RequestError", (cause) =>
+		Effect.fail(new Loom.ExternalLoomError({ cause })),
+	),
110-110: Avoid logging full download URL that may contain tokens.

External download URLs often embed authentication tokens or PII in query parameters. Log only non-sensitive metadata like the hostname.

Apply this diff:

-	yield* Effect.log(payload.loom.video.downloadUrl);
+	const url = new URL(payload.loom.video.downloadUrl);
+	yield* Effect.log(`Downloading from host ${url.host}`);
114-115: Critical bug: catchAll returns value instead of failing.
Effect.catchAll currently returns new Loom.ExternalLoomError({ cause }) directly, which converts the failure into a success with an error object as the value. Subsequent Headers.get(resp.headers, …) will throw at runtime because resp is an error object, not a response.

Apply this diff:

 .pipe(
-	Effect.catchAll((cause) => new Loom.ExternalLoomError({ cause })),
+	Effect.catchAll((cause) =>
+		Effect.fail(new Loom.ExternalLoomError({ cause })),
+	),
 );
106-106: Remove DatabaseError from DownloadVideo error union

In packages/web-backend/src/Loom/ImportVideo.ts at line 106, the DownloadVideo activity only performs S3 and HTTP operations and does not touch the database. Update the error schema:

-	error: Schema.Union(S3Error, DatabaseError, Loom.ExternalLoomError),
+	error: Schema.Union(S3Error, Loom.ExternalLoomError),

infra/sst.config.ts (3)
11-12: Hardcoded AXIOM_API_TOKEN remains in source

This issue was flagged in previous reviews. The production token is still committed as a string literal and must be moved to a managed SST secret.
288-298: commonEnvironment uses hardcoded AXIOM token

Line 295 still references the hardcoded AXIOM_API_TOKEN constant. This was flagged in previous reviews and must use secrets.AXIOM_API_TOKEN.value instead.
175-183: Add AXIOM_API_TOKEN to Secrets function

Per previous review recommendations, AXIOM_API_TOKEN should be managed here as an SST secret rather than hardcoded. Add it to this function alongside the other secrets.

Apply this diff:

 function Secrets() {
 	return {
 		DATABASE_URL_HTTP: new sst.Secret("DATABASE_URL_HTTP"),
 		DATABASE_URL_MYSQL: new sst.Secret("DATABASE_URL_MYSQL"),
 		CAP_AWS_ACCESS_KEY: new sst.Secret("CAP_AWS_ACCESS_KEY"),
 		CAP_AWS_SECRET_KEY: new sst.Secret("CAP_AWS_SECRET_KEY"),
 		GITHUB_PAT: new sst.Secret("GITHUB_PAT"),
+		AXIOM_API_TOKEN: new sst.Secret("AXIOM_API_TOKEN"),
 	};
 }
🧹 Nitpick comments (5)
packages/env/build.ts (1)
50-50: Consider avoiding any cast for better type safety.

The (_env as any)[key] cast bypasses TypeScript's type checking. After applying the critical fix above, you can improve type safety by using a type-safe key access pattern.

As per coding guidelines (avoid any), consider this alternative after fixing the critical issue:

 export const buildEnv = new Proxy({} as Env, {
 	get(_, key) {
 		if (!_env) _env = create();
-		return (_env as any)[key];
+		return _env[key as keyof Env];
 	},
 });

packages/web-backend/src/S3Buckets/index.ts (1)
27-44: Consider extracting credential resolution to improve readability.

The nested Config.option, Config.all, Config.orElse, and Config.map chain is deeply indented (7+ levels) and may be difficult to follow during maintenance. The logic is correct—it attempts static credentials first, then falls back to OIDC—but extracting this into a helper function would clarify intent.

For example, extract to a helper:

const resolveAwsCredentials = Config.all([
	Config.string("CAP_AWS_ACCESS_KEY"),
	Config.string("CAP_AWS_SECRET_KEY"),
]).pipe(
	Config.map(([accessKeyId, secretAccessKey]) => ({
		accessKeyId,
		secretAccessKey,
	})),
	Config.orElse(() =>
		Config.string("VERCEL_AWS_ROLE_ARN").pipe(
			Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
		),
	),
);

Then use:

-	credentials: Option.getOrUndefined(
-		yield* Config.option(
-			Config.all([
-				Config.string("CAP_AWS_ACCESS_KEY"),
-				Config.string("CAP_AWS_SECRET_KEY"),
-			]).pipe(
-				Config.map(([accessKeyId, secretAccessKey]) => ({
-					accessKeyId,
-					secretAccessKey,
-				})),
-				Config.orElse(() =>
-					Config.string("VERCEL_AWS_ROLE_ARN").pipe(
-						Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
-					),
-				),
-			),
-		),
-	),
+	credentials: Option.getOrUndefined(
+		yield* Config.option(resolveAwsCredentials),
+	),

packages/web-domain/src/S3Bucket.ts (1)
**21-23: Consider a more specific error cause type.**

The `S3Error` class follows the correct `Schema.TaggedError` pattern. However, `cause: Schema.Unknown` accepts any value, which may obscure type information. If S3 error patterns are well-known, consider defining a more specific cause schema:

```diff
 export class S3Error extends Schema.TaggedError<S3Error>()("S3Error", {
-  cause: Schema.Unknown,
+  cause: Schema.Union(
+    Schema.Struct({ code: Schema.String, message: Schema.String }),
+    Schema.Unknown
+  ),
 }) {}
```

infra/sst.config.ts (2)
**300-313: Hardcoded GitHub username should be parameterized**

Line 308 contains the hardcoded username `"brendonovich"`. Consider creating a constant or secret for this value to improve maintainability and make the configuration reusable. Apply this diff:

```diff
+const GITHUB_USERNAME = "brendonovich";
+
 const ghcrCredentialsSecret = new aws.secretsmanager.Secret(
   "GHCRCredentialsSecret",
 );
 new aws.secretsmanager.SecretVersion("GHCRCredentialsSecretVersion", {
   secretId: ghcrCredentialsSecret.id,
   secretString: secrets.GITHUB_PAT.value.apply((password) =>
     JSON.stringify({
-      username: "brendonovich",
+      username: GITHUB_USERNAME,
       password,
     }),
   ),
 });
```
**346-361: Consider adding health checks to ShardManager**

The `Runner` service (lines 372-374) has health checks configured, but `ShardManager` does not. If the shard-manager application exposes a health endpoint, adding a health check would improve reliability and enable ECS to automatically restart unhealthy tasks. Verify whether the shard-manager application exposes a health endpoint. If it does, apply a similar health configuration:

```diff
 new sst.aws.Service("ShardManager", {
   cluster,
   architecture: "arm64",
   containers: [
     {
       name: "shard-manager",
       image: "ghcr.io/brendonovich/cap-web-cluster:latest",
       command: ["src/shard-manager.ts"],
+      health: {
+        command: ["CMD", "deno", "run", "--allow-all", "src/health-check.ts"],
+      },
       environment: {
         ...commonEnvironment,
         SHARD_MANAGER_HOST: "0.0.0.0",
       },
     },
   ],
   transform: ghcrCredentialsTransform,
 });
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (66)
- apps/web-cluster/Dockerfile (1 hunks)
- apps/web-cluster/package.json (1 hunks)
- apps/web-cluster/scripts/post-deploy.ts (1 hunks)
- apps/web-cluster/src/cluster/container-metadata.ts (1 hunks)
- apps/web-cluster/src/health-check.ts (1 hunks)
- apps/web-cluster/src/runner/health-server.ts (1 hunks)
- apps/web-cluster/src/runner/index.ts (1 hunks)
- apps/web-cluster/src/shard-manager.ts (1 hunks)
- apps/web-cluster/src/shared/database.ts (1 hunks)
- apps/web-cluster/tsconfig.json (1 hunks)
- apps/web/app/api/[[...route]]/route.ts (1 hunks)
- apps/web/app/layout.tsx (0 hunks)
- apps/web/instrumentation.node.ts (0 hunks)
- apps/web/instrumentation.ts (1 hunks)
- apps/web/lib/EffectRuntime.ts (1 hunks)
- apps/web/lib/server.ts (4 hunks)
- apps/web/lib/tracing-server.ts (0 hunks)
- apps/web/lib/tracing.ts (1 hunks)
- apps/web/next.config.mjs (2 hunks)
- apps/web/package.json (2 hunks)
- apps/workflow-manager/package.json (0 hunks)
- apps/workflow-manager/src/index.ts (0 hunks)
- apps/workflow-runner/package.json (0 hunks)
- apps/workflow-runner/src/index.ts (0 hunks)
- infra/package.json (1 hunks)
- infra/sst-env.d.ts (1 hunks)
- infra/sst.config.ts (6 hunks)
- package.json (1 hunks)
- packages/config/base.tsconfig.json (1 hunks)
- packages/database/package.json (2 hunks)
- packages/database/tsdown.config.ts (1 hunks)
- packages/database/types/index.ts (1 hunks)
- packages/env/build.ts (1 hunks)
- packages/env/package.json (1 hunks)
- packages/env/tsdown.config.ts (1 hunks)
- packages/utils/package.json (1 hunks)
- packages/utils/src/constants/s3.ts (0 hunks)
- packages/utils/src/index.ts (0 hunks)
- packages/utils/tsdown.config.ts (1 hunks)
- packages/web-backend/package.json (1 hunks)
- packages/web-backend/src/Auth.ts (1 hunks)
- packages/web-backend/src/Database.ts (1 hunks)
- packages/web-backend/src/Folders/index.ts (2 hunks)
- packages/web-backend/src/Http/Errors.ts (1 hunks)
- packages/web-backend/src/Http/Live.ts (1 hunks)
- packages/web-backend/src/Loom/Http.ts (1 hunks)
- packages/web-backend/src/Loom/ImportVideo.ts (5 hunks)
- packages/web-backend/src/Loom/index.ts (1 hunks)
- packages/web-backend/src/S3Buckets/S3BucketAccess.ts (2 hunks)
- packages/web-backend/src/S3Buckets/index.ts (4 hunks)
- packages/web-backend/src/Workflows.ts (1 hunks)
- packages/web-backend/src/index.ts (1 hunks)
- packages/web-backend/tsdown.config.ts (1 hunks)
- packages/web-domain/package.json (1 hunks)
- packages/web-domain/scripts/generate-openapi.ts (1 hunks)
- packages/web-domain/src/Database.ts (1 hunks)
- packages/web-domain/src/Http/Api.ts (1 hunks)
- packages/web-domain/src/Http/Errors.ts (1 hunks)
- packages/web-domain/src/Http/index.ts (1 hunks)
- packages/web-domain/src/Loom.ts (1 hunks)
- packages/web-domain/src/Policy.ts (2 hunks)
- packages/web-domain/src/S3Bucket.ts (1 hunks)
- packages/web-domain/src/Video.ts (2 hunks)
- packages/web-domain/src/index.ts (1 hunks)
- packages/web-domain/tsconfig.json (1 hunks)
- packages/web-domain/tsdown.config.ts (1 hunks)
💤 Files with no reviewable changes (9)
- apps/workflow-runner/src/index.ts
- apps/web/lib/tracing-server.ts
- packages/utils/src/constants/s3.ts
- packages/utils/src/index.ts
- apps/workflow-manager/package.json
- apps/workflow-runner/package.json
- apps/web/instrumentation.node.ts
- apps/workflow-manager/src/index.ts
- apps/web/app/layout.tsx
🚧 Files skipped from review as they are similar to previous changes (30)
- packages/web-domain/tsdown.config.ts
- packages/web-domain/src/Http/index.ts
- apps/web-cluster/src/shard-manager.ts
- apps/web-cluster/tsconfig.json
- apps/web-cluster/package.json
- packages/web-backend/tsdown.config.ts
- packages/web-backend/src/Loom/index.ts
- apps/web-cluster/scripts/post-deploy.ts
- packages/web-domain/package.json
- packages/env/tsdown.config.ts
- packages/config/base.tsconfig.json
- packages/web-backend/src/Folders/index.ts
- packages/web-domain/src/Http/Errors.ts
- apps/web-cluster/Dockerfile
- apps/web-cluster/src/health-check.ts
- apps/web/app/api/[[...route]]/route.ts
- packages/utils/tsdown.config.ts
- packages/database/types/index.ts
- packages/web-backend/src/index.ts
- apps/web-cluster/src/runner/index.ts
- packages/web-backend/package.json
- packages/utils/package.json
- packages/web-backend/src/Http/Live.ts
- packages/web-backend/src/S3Buckets/S3BucketAccess.ts
- apps/web-cluster/src/cluster/container-metadata.ts
- apps/web-cluster/src/runner/health-server.ts
- packages/web-backend/src/Http/Errors.ts
- packages/web-backend/src/Workflows.ts
- infra/package.json
- packages/web-domain/src/index.ts
🧰 Additional context used
📓 Path-based instructions (5)
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and data fetching in the web app
Web mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData rather than broad invalidations
Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components
Files:
- apps/web/lib/EffectRuntime.ts
- apps/web/lib/tracing.ts
- apps/web/instrumentation.ts
- apps/web/lib/server.ts
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (CLAUDE.md)
Do not add inline, block, or docstring comments in any language; code must be self-explanatory
Files:
- apps/web/lib/EffectRuntime.ts
- packages/web-domain/src/Video.ts
- apps/web/lib/tracing.ts
- packages/web-domain/src/S3Bucket.ts
- infra/sst.config.ts
- apps/web/instrumentation.ts
- packages/web-domain/src/Policy.ts
- packages/web-backend/src/S3Buckets/index.ts
- apps/web-cluster/src/shared/database.ts
- packages/web-domain/src/Loom.ts
- packages/web-domain/scripts/generate-openapi.ts
- packages/web-backend/src/Database.ts
- infra/sst-env.d.ts
- packages/web-backend/src/Loom/Http.ts
- packages/web-domain/src/Http/Api.ts
- packages/web-domain/src/Database.ts
- packages/web-backend/src/Loom/ImportVideo.ts
- packages/env/build.ts
- apps/web/lib/server.ts
- packages/web-backend/src/Auth.ts
- packages/database/tsdown.config.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; leverage shared types from packages
**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running `pnpm format`.
Files:
- apps/web/lib/EffectRuntime.ts
- packages/web-domain/src/Video.ts
- apps/web/lib/tracing.ts
- packages/web-domain/src/S3Bucket.ts
- infra/sst.config.ts
- apps/web/instrumentation.ts
- packages/web-domain/src/Policy.ts
- packages/web-backend/src/S3Buckets/index.ts
- apps/web-cluster/src/shared/database.ts
- packages/web-domain/src/Loom.ts
- packages/web-domain/scripts/generate-openapi.ts
- packages/web-backend/src/Database.ts
- infra/sst-env.d.ts
- packages/web-backend/src/Loom/Http.ts
- packages/web-domain/src/Http/Api.ts
- packages/web-domain/src/Database.ts
- packages/web-backend/src/Loom/ImportVideo.ts
- packages/env/build.ts
- apps/web/lib/server.ts
- packages/web-backend/src/Auth.ts
- packages/database/tsdown.config.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., `user-menu.tsx`).
Use PascalCase for React/Solid components.
Files:
- apps/web/lib/EffectRuntime.ts
- packages/web-domain/src/Video.ts
- apps/web/lib/tracing.ts
- packages/web-domain/src/S3Bucket.ts
- infra/sst.config.ts
- apps/web/instrumentation.ts
- packages/web-domain/src/Policy.ts
- packages/web-backend/src/S3Buckets/index.ts
- apps/web-cluster/src/shared/database.ts
- packages/web-domain/src/Loom.ts
- packages/web-domain/scripts/generate-openapi.ts
- packages/web-backend/src/Database.ts
- infra/sst-env.d.ts
- packages/web-backend/src/Loom/Http.ts
- packages/web-domain/src/Http/Api.ts
- packages/web-domain/src/Database.ts
- packages/web-backend/src/Loom/ImportVideo.ts
- packages/env/build.ts
- apps/web/lib/server.ts
- packages/web-backend/src/Auth.ts
- packages/database/tsdown.config.ts
apps/web/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
On the client, always use `useEffectQuery` or `useEffectMutation` from `@/lib/EffectRuntime`; never call `EffectRuntime.run*` directly in components.
Files:
- apps/web/lib/EffectRuntime.ts
- apps/web/lib/tracing.ts
- apps/web/instrumentation.ts
- apps/web/lib/server.ts
🧠 Learnings (1)
📚 Learning: 2025-09-22T14:17:47.407Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.407Z
Learning: Applies to apps/web/**/*.{ts,tsx} : Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components
Applied to files:
apps/web/lib/EffectRuntime.ts
🧬 Code graph analysis (9)
apps/web/lib/EffectRuntime.ts (1)
- apps/web/lib/Rpcs.ts (1): `Rpc` (11-14)

packages/web-domain/src/S3Bucket.ts (2)
- packages/web-backend/src/Workflows.ts (1): `Workflows` (9-9)
- packages/web-domain/src/index.ts (1): `S3Error` (14-14)

apps/web/instrumentation.ts (1)
- apps/web/instrumentation.node.ts (1): `register` (16-41)

packages/web-domain/src/Loom.ts (8)
- packages/web-domain/src/User.ts (2): `UserId` (3-3), `UserId` (4-4)
- packages/web-domain/src/Organisation.ts (2): `OrganisationId` (3-5), `OrganisationId` (6-6)
- packages/web-domain/src/Video.ts (1): `Video` (16-59)
- packages/web-domain/src/Database.ts (1): `DatabaseError` (3-6)
- packages/web-domain/src/S3Bucket.ts (1): `S3Error` (21-23)
- packages/web-domain/src/Http/Errors.ts (1): `InternalServerError` (4-8)
- packages/web-domain/src/Policy.ts (1): `PolicyDeniedError` (20-24)
- packages/web-domain/src/Authentication.ts (1): `HttpAuthMiddleware` (17-27)

packages/web-domain/scripts/generate-openapi.ts (1)
- packages/web-domain/src/Http/Api.ts (1): `ApiContract` (4-12)

packages/web-backend/src/Loom/Http.ts (4)
- packages/web-backend/src/Workflows.ts (1): `Workflows` (9-9)
- packages/web-domain/src/Authentication.ts (1): `CurrentUser` (8-15)
- packages/web-domain/src/Policy.ts (1): `Policy` (8-12)
- packages/web-backend/src/Http/Errors.ts (1): `handleDomainError` (4-14)

packages/web-domain/src/Http/Api.ts (1)
- packages/web-domain/src/Loom.ts (1): `LoomHttpApi` (69-92)

packages/web-backend/src/Loom/ImportVideo.ts (7)
- packages/web-backend/src/Videos/index.ts (1): `Videos` (11-171)
- packages/web-backend/src/S3Buckets/index.ts (1): `S3Buckets` (13-205)
- packages/web-backend/src/Workflows.ts (1): `HttpClient` (24-27)
- packages/web-domain/src/Loom.ts (2): `VideoURLInvalidError` (25-29), `ExternalLoomError` (19-23)
- packages/web-domain/src/Database.ts (1): `DatabaseError` (3-6)
- packages/web-domain/src/Video.ts (1): `Video` (16-59)
- packages/web-domain/src/S3Bucket.ts (2): `S3Bucket` (7-15), `S3Error` (21-23)

apps/web/lib/server.ts (2)
- packages/web-backend/src/Workflows.ts (2): `Workflows` (9-9), `RpcClient` (13-19)
- apps/web/lib/tracing.ts (1): `layerTracer` (5-32)
🪛 GitHub Check: Typecheck
packages/web-backend/src/Loom/Http.ts
[failure] 35-35:
Argument of type 'string' is not assignable to parameter of type 'string & Brand<"OrganisationId">'.
packages/database/tsdown.config.ts
[failure] 2-2:
File '/home/runner/work/Cap/Cap/packages/database/package.json' is not listed within the file list of project '/home/runner/work/Cap/Cap/packages/database/tsconfig.json'. Projects must list all files or use an 'include' pattern.
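For reference, this class of TypeScript project error is usually resolved by widening the project's `include` patterns so the referenced file is part of the program. A hypothetical sketch of what `packages/database/tsconfig.json` could include (the `extends` target and globs are assumptions, not the repository's actual config):

```json
{
  "extends": "../config/base.tsconfig.json",
  "include": ["src/**/*", "package.json", "tsdown.config.ts"]
}
```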
🪛 Gitleaks (8.28.0)
infra/sst.config.ts
[high] 11-11: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (16)
packages/web-domain/tsconfig.json (1)
**4-4: LGTM**

Adding the `scripts/*` glob ensures the new scripts participate in the TypeScript build without impacting existing paths.

packages/web-backend/src/S3Buckets/index.ts (2)
**89-89: LGTM! Makes CloudFront bucket URL configurable.**

Adding `bucketUrl` to `cloudfrontEnvs` removes hardcoding and improves configuration flexibility.

**104-104: LGTM! Uses configurable bucket URL.**

Replacing the hardcoded S3 bucket URL with `cloudfrontEnvs.bucketUrl` is the correct approach and aligns with the configuration change on line 89.
17-17: Verify no code depends on S3Bucket workflow
Run the following script to ensure no remaining references:

```bash
#!/bin/bash
# Search for imports or usage of S3Bucket in the Workflows export
rg -nP -C3 'from.*S3Bucket.*Workflows|Workflows.*S3Bucket' -g '*.ts' -g '*.tsx'

# Search for any workflow registration or handling of S3Bucket
ast-grep --pattern 'Workflows.$_($$$S3Bucket$$$)'
```

apps/web-cluster/src/shared/database.ts (1)
**4-10: LGTM!**

The `DatabaseLive` layer correctly uses `Layer.unwrapEffect` with `Effect.gen`, properly handles the required `DATABASE_URL` config with redaction, and returns a configured `MysqlClient.layer`.

apps/web/next.config.mjs (1)
**20-21: LGTM: New package transpilation added.**

Adding `@cap/env` and `@cap/database` to `transpilePackages` aligns with the PR's introduction of environment and database capabilities.

packages/web-backend/src/Database.ts (1)
**1-15: LGTM! Error consolidation improves maintainability.**

Moving `DatabaseError` to the domain package centralizes error definitions and eliminates duplication across packages.

apps/web/lib/server.ts (2)
**53-69: Verify REMOTE_WORKFLOW_URL configuration for production.**

The `WorkflowRpcLive` layer defaults to `http://127.0.0.1:42169` when `REMOTE_WORKFLOW_URL` is not set. Ensure this environment variable is configured correctly for staging and production deployments to point to the actual workflow cluster service.
**71-82: LGTM! Tracing refactor aligns with Effect patterns.**

Replacing `TracingLayer` with layer-based composition via `layerTracer` (provided at line 135) follows Effect ecosystem best practices for resource management.

apps/web/lib/tracing.ts (1)
**1-32: LGTM! Layer-based tracing follows Effect best practices.**

The refactor from config-based to `Layer`-based tracer composition properly integrates with the Effect runtime and OpenTelemetry provider. The use of `Layer.unwrapEffect` for deferred initialization is correct. Based on learnings
packages/web-backend/src/Loom/ImportVideo.ts (1)
**91-96: LGTM! Exponential backoff for database operations.**

Adding retry logic with exponential backoff for `CreateVideoRecord` improves resilience against transient database failures.

infra/sst.config.ts (5)
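The exponential-backoff pattern noted for `CreateVideoRecord` can be sketched without Effect; this dependency-free helper is illustrative only (the names, attempt count, and delays are hypothetical and not the PR's actual implementation):

```typescript
// Retry an async operation, doubling the wait between attempts.
async function retryWithBackoff<T>(
	op: () => Promise<T>,
	maxAttempts: number,
	baseDelayMs: number,
): Promise<T> {
	let lastError: unknown;
	for (let attempt = 0; attempt < maxAttempts; attempt++) {
		try {
			return await op();
		} catch (err) {
			lastError = err;
			// 1st retry waits baseDelayMs, 2nd waits 2x, 3rd waits 4x, ...
			const delay = baseDelayMs * 2 ** attempt;
			await new Promise((resolve) => setTimeout(resolve, delay));
		}
	}
	throw lastError;
}

// Simulated transient failure: the operation succeeds on its third call.
let calls = 0;
const result = await retryWithBackoff(
	async () => {
		calls++;
		if (calls < 3) throw new Error("transient failure");
		return "created";
	},
	5,
	10,
);
console.log(result, calls);
```

In Effect code the same behavior is typically expressed declaratively with a retry schedule rather than a hand-rolled loop.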
**83-100: LGTM!**

The OIDC provider logic correctly handles both existing and new provider scenarios. The `retainOnDelete` option appropriately protects the provider from accidental deletion.
**149-169: Conditional environment variable setup looks correct**

The staging/production conditional logic and Vercel environment variable mapping are implemented correctly.
**229-286: Infrastructure setup is well-structured**

The VPC, DNS namespace, security groups, ECS cluster, and Aurora database configuration follow AWS best practices. Container insights and proper subnet segregation are good observability and security choices.
**387-411: Restart policy transform looks correct**

The taskDefinition transform properly parses the JSON containerDefinitions, applies the restart policy to each container, and re-stringifies. The pattern of composing with `ghcrCredentialsTransform.taskDefinition(args)` at the end correctly chains transforms.
**414-421: Verify $default route is appropriate for production**

The API Gateway uses a `$default` route (line 417) which catches all requests. Verify this is the intended behavior for production, or if specific route patterns should be defined instead for better security and routing control.
```js
// "@cap/web-api-contract",
// "@cap/web-domain",
// "@cap/web-backend",
"@cap/database",
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify the intentional inconsistency in package optimization.
@cap/web-api-contract and @cap/web-domain remain in transpilePackages (lines 18-19) but are commented out here in optimizePackageImports. This means these packages will be transpiled but won't benefit from Next.js's import optimization (tree-shaking, selective imports).
Please confirm whether:
- This is intentional (e.g., these packages have side effects or export patterns that don't work well with import optimization)
- These packages should be removed from `transpilePackages` as well
- The comment about `@cap/web-backend` (line 36) indicates a plan to re-enable it
Run the following script to check if these packages are still being used in the codebase:
🏁 Script executed:
```bash
#!/bin/bash
# Check usage of the commented-out packages to understand if they should remain in transpilePackages

echo "=== Checking @cap/web-api-contract usage ==="
rg -nP --type=ts --type=tsx -C2 "from ['\"]@cap/web-api-contract" apps/web/

echo -e "\n=== Checking @cap/web-domain usage ==="
rg -nP --type=ts --type=tsx -C2 "from ['\"]@cap/web-domain" apps/web/

echo -e "\n=== Checking @cap/web-backend usage ==="
rg -nP --type=ts --type=tsx -C2 "from ['\"]@cap/web-backend" apps/web/
```

Length of output: 209
🏁 Script executed:
```bash
#!/bin/bash
# Re-run usage checks without file-type filters

echo "=== Checking imports of @cap/web-api-contract ==="
rg -nP -C2 "from ['\"]@cap/web-api-contract" -g "*.ts" -g "*.tsx" apps/web/ || echo "No import statements found"

echo -e "\n=== Checking general mentions of @cap/web-api-contract ==="
rg -nP -C2 "@cap/web-api-contract" -g "*.ts" -g "*.tsx" apps/web/ || echo "No mentions found"

echo -e "\n=== Checking imports of @cap/web-domain ==="
rg -nP -C2 "from ['\"]@cap/web-domain" -g "*.ts" -g "*.tsx" apps/web/ || echo "No import statements found"

echo -e "\n=== Checking general mentions of @cap/web-domain ==="
rg -nP -C2 "@cap/web-domain" -g "*.ts" -g "*.tsx" apps/web/ || echo "No mentions found"

echo -e "\n=== Checking imports of @cap/web-backend ==="
rg -nP -C2 "from ['\"]@cap/web-backend" -g "*.ts" -g "*.tsx" apps/web/ || echo "No import statements found"

echo -e "\n=== Checking general mentions of @cap/web-backend ==="
rg -nP -C2 "@cap/web-backend" -g "*.ts" -g "*.tsx" apps/web/ || echo "No mentions found"
```

Length of output: 136039
**Uncomment packages in optimizePackageImports to match transpilePackages**

- `@cap/web-api-contract` and `@cap/web-domain` are heavily imported across `apps/web`; re-enable them under `optimizePackageImports` to restore tree-shaking and selective imports.
- Align the commented-out `@cap/web-backend` entry with its planned re-enablement as noted in the comment.
🤖 Prompt for AI Agents
In apps/web/next.config.mjs around lines 34 to 37, the optimizePackageImports
array has @cap/web-api-contract and @cap/web-domain commented out (and
@cap/web-backend left commented) which causes mismatches with transpilePackages
and prevents tree-shaking; uncomment @cap/web-api-contract and @cap/web-domain
in optimizePackageImports and also uncomment or otherwise re-enable
@cap/web-backend to match the intended re-enablement so optimizePackageImports
mirrors transpilePackages and restores selective imports.
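For illustration, an aligned configuration could look like the following sketch (package lists abridged and other options omitted; this is not the repository's actual file):

```javascript
const nextConfig = {
	transpilePackages: ["@cap/web-api-contract", "@cap/web-domain", "@cap/database"],
	experimental: {
		optimizePackageImports: ["@cap/web-api-contract", "@cap/web-domain", "@cap/database"],
	},
};

export default nextConfig;
```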
```ts
const runner = new sst.aws.Service("Runner", {
  cluster,
  capacity: "spot",
  cpu: "0.25 vCPU",
  memory: "1 GB",
  architecture: "arm64",
  serviceRegistry: { port: 42169 },
  image: "ghcr.io/brendonovich/cap-web-cluster:latest",
  command: ["src/runner/index.ts"],
  health: {
    command: ["CMD", "deno", "run", "--allow-all", "src/health-check.ts"],
  },
  environment: {
    ...commonEnvironment,
    SHARD_MANAGER_HOST: generateServiceHostname("ShardManager"),
    PORT: "42069",
    HEALTH_CHECK_PORT: "3000",
  },
```
🧩 Analysis chain
Verify serviceRegistry port configuration
Line 369 configures `serviceRegistry: { port: 42169 }`, but line 378 sets the application `PORT: "42069"`. The serviceRegistry port should typically match the port the application listens on for service discovery to work correctly. Verify this is intentional, or whether line 369 should be 42069.
If this is a typo, apply this diff:
```diff
 const runner = new sst.aws.Service("Runner", {
   cluster,
   capacity: "spot",
   cpu: "0.25 vCPU",
   memory: "1 GB",
   architecture: "arm64",
-  serviceRegistry: { port: 42169 },
+  serviceRegistry: { port: 42069 },
   image: "ghcr.io/brendonovich/cap-web-cluster:latest",
   command: ["src/runner/index.ts"],
```

🏁 Script executed:
```bash
#!/bin/bash
set -e
echo "=== Runner service context (lines 350-400) ==="
sed -n '350,400p' infra/sst.config.ts
echo
echo "=== All serviceRegistry occurrences ==="
rg -n "serviceRegistry" infra/sst.config.ts
echo
echo "=== All PORT: declarations ==="
rg -n 'PORT:' infra/sst.config.ts
```

Length of output: 1600
Align serviceRegistry port with application PORT
Change `infra/sst.config.ts:369` from

```diff
- serviceRegistry: { port: 42169 },
+ serviceRegistry: { port: 42069 },
```

so it matches the `PORT: "42069"` env var (line 378).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const runner = new sst.aws.Service("Runner", {
  cluster,
  capacity: "spot",
  cpu: "0.25 vCPU",
  memory: "1 GB",
  architecture: "arm64",
  serviceRegistry: { port: 42069 },
  image: "ghcr.io/brendonovich/cap-web-cluster:latest",
  command: ["src/runner/index.ts"],
  health: {
    command: ["CMD", "deno", "run", "--allow-all", "src/health-check.ts"],
  },
  environment: {
    ...commonEnvironment,
    SHARD_MANAGER_HOST: generateServiceHostname("ShardManager"),
    PORT: "42069",
    HEALTH_CHECK_PORT: "3000",
  },
```
🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 363 to 380, the serviceRegistry port is set
to 42169 but the container PORT environment variable is "42069"; update the
serviceRegistry.port value to 42069 so the service registry and application port
are aligned (ensure the port is the numeric value 42069, matching PORT:
"42069").
| "node": "24" | ||
| } |
🧩 Analysis chain
Align Node engine constraints across the workspace
Root now requires Node 24, but several packages (for example `packages/database/package.json`) still declare `"node": "20"`. On a Node 24 runtime this violates their engine range: pnpm will emit errors when `engine-strict=true` (common in CI) and developers will get downgrade warnings locally. Please update every package-level `engines.node` entry to a compatible semver range (e.g. `">=24"`), and adjust any `.nvmrc`/CI images accordingly so the workspace has a single supported runtime.
🏁 Script executed:
```bash
#!/bin/bash
# List workspace package.json files whose engine field still pins Node 20
fd -e json package.json | while read -r file; do
  jq -r 'select(.engines?.node == "20") | "\(input_filename)"' "$file"
done
```

Length of output: 164
Align Node engine constraints across all workspace packages
Root package.json pins engines.node to "24", but these packages still declare "20": apps/tasks/package.json, apps/web/package.json, packages/database/package.json, packages/local-docker/package.json, packages/ui/package.json, packages/utils/package.json. This mismatch triggers errors under Node 24 when pnpm’s engine-strict=true. Update each package’s engines.node to a semver range compatible with Node 24 (e.g. ">=24"), and synchronize .nvmrc/CI images accordingly.
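The per-package fix is a one-line engines change; a sketch of the field each listed package could carry (the exact range is a project choice):

```json
{
	"engines": {
		"node": ">=24"
	}
}
```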
🤖 Prompt for AI Agents
In package.json around lines 40 to 41, the root engines.node is set to "24" but
several workspace packages still declare "20"; update apps/tasks/package.json,
apps/web/package.json, packages/database/package.json,
packages/local-docker/package.json, packages/ui/package.json, and
packages/utils/package.json to use a semver range compatible with Node 24 (for
example ">=24" or ">=24.0.0") in their engines.node field, and also update the
project .nvmrc and CI image tags to match the chosen Node range so all packages
and environments are consistent.
```ts
Effect.gen(function* () {
  const fs = yield* FileSystem.FileSystem;

  const spec = OpenApi.fromApi(ApiContract);

  yield* fs.writeFileString(
    process.argv[2] ?? "openapi.json",
    JSON.stringify(spec, null, 2),
  );
}).pipe(Effect.provide(NodeContext.layer), NodeRuntime.runMain);
```
Provide the Node file system layer.
`FileSystem.FileSystem` isn't available with only `NodeContext.layer`, so the script throws at runtime. Add `NodeFileSystem.layer` (or the appropriate platform FS layer) before `NodeRuntime.runMain`.
```diff
-import { NodeContext, NodeRuntime } from "@effect/platform-node";
+import { NodeContext, NodeFileSystem, NodeRuntime } from "@effect/platform-node";
@@
-}).pipe(Effect.provide(NodeContext.layer), NodeRuntime.runMain);
+}).pipe(
+  Effect.provide(NodeContext.layer),
+  Effect.provide(NodeFileSystem.layer),
+  NodeRuntime.runMain,
+);
```

🤖 Prompt for AI Agents
In packages/web-domain/scripts/generate-openapi.ts around lines 7 to 16, the
script provides only NodeContext.layer to Effect.provide but needs the platform
file system layer as well; update the provided layers to include
NodeFileSystem.layer (or the appropriate Node FS layer used in the project) in
the Effect.provide call before invoking NodeRuntime.runMain so
FileSystem.FileSystem is available at runtime.
Summary by CodeRabbit
New Features
Improvements
Chores
Removals