Merged
28 changes: 28 additions & 0 deletions apps/web-cluster/Dockerfile
@@ -0,0 +1,28 @@
FROM node:24-slim AS base
RUN corepack enable

FROM base AS builder
WORKDIR /app
COPY . .

RUN corepack enable pnpm

RUN echo "inject-workspace-packages=true" >> .npmrc
RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store pnpm i

RUN pnpm run --filter=@cap/web-cluster build
RUN pnpm deploy --filter=@cap/web-cluster out
RUN cd out && node scripts/post-deploy.ts
Contributor

@coderabbitai coderabbitai bot Oct 10, 2025


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Post-deploy script path inconsistency.

Line 15 runs node scripts/post-deploy.ts, but this is a TypeScript file. The script should either be:

  1. Built to JavaScript first (e.g., scripts/post-deploy.js)
  2. Run with a TypeScript runtime (e.g., tsx or deno)

Based on the file at apps/web-cluster/scripts/post-deploy.ts, it appears the build process should transpile this to JavaScript.

Verify the build output structure:


🏁 Script executed:

#!/bin/bash
# Description: Check if post-deploy script is being built

# Look for tsconfig or build config that handles scripts
fd -e json "tsconfig" apps/web-cluster -x cat {}

# Check if the out directory structure includes built scripts
rg -n "scripts" apps/web-cluster/package.json -A 3

Length of output: 697


🏁 Script executed:

#!/bin/bash
# Show Dockerfile header and context around the `out` directory usage
sed -n '1,50p' apps/web-cluster/Dockerfile
rg -nC2 'out' apps/web-cluster/Dockerfile

# Inspect base TypeScript config to confirm emit settings
fd base.tsconfig.json -x cat {}

Length of output: 1387


🏁 Script executed:

sed -n '1,200p' apps/web-cluster/package.json

Length of output: 1074


🏁 Script executed:

#!/bin/bash
# List scripts directory
ls -R apps/web-cluster/scripts

# Inspect post-deploy.ts
sed -n '1,200p' apps/web-cluster/scripts/post-deploy.ts

Length of output: 873


🏁 Script executed:

#!/bin/bash
# Show root package.json to locate deploy script
sed -n '1,200p' package.json

Length of output: 1956


Fix post-deploy script and tighten Deno permissions

  • Dockerfile line 15 runs node scripts/post-deploy.ts but tsconfig is set to emit only declarations (no .js). Either emit a JS build for scripts/post-deploy.js or invoke it with a TS-aware runner (e.g. npx tsx scripts/post-deploy.ts).
  • Dockerfile line 24’s ENTRYPOINT ["deno","run","--allow-all"] is overly permissive; restrict to only the flags your app needs (e.g. --allow-read, --allow-net).
🤖 Prompt for AI Agents
In apps/web-cluster/Dockerfile around lines 15 and 24, line 15 runs "node
scripts/post-deploy.ts" but your tsconfig emits only declarations so there is no
.js to run; either change the build to emit a JS file and run that (e.g., build
scripts to out/scripts/post-deploy.js and run node out/scripts/post-deploy.js)
or invoke the TS directly with a ts-aware runner (e.g., replace the command with
npx tsx scripts/post-deploy.ts). Also tighten the Deno ENTRYPOINT at line 24 by
replacing the overly-permissive --allow-all with only the necessary flags (for
example --allow-read and --allow-net or any other minimal permissions your app
requires).

Contributor Author

@coderabbitai node v24 is able to run ts files

Contributor

🧩 Analysis chain

🌐 Web query:

Does Node.js version 24 support running TypeScript files directly?

💡 Result:

Yes. Node.js v24 includes built-in TypeScript support (type‑stripping) so you can run .ts files directly for "erasable" TypeScript syntax (type annotations). Features that require JS code generation (enums, namespaces, etc.) still need transformation (flags or a runner/third‑party tool). See Node docs for details and limitations. [1][2][3]

Sources
[1] Node.js — Running TypeScript Natively.
[2] Node.js API: Modules — TypeScript (v24 docs).
[3] Node.js 23.6 Now Runs TypeScript Natively (news coverage).
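As a quick illustration of the distinction between erasable and non-erasable syntax (the file name and contents below are hypothetical, not from this PR):

```typescript
// greet.ts — runnable as `node greet.ts` on Node.js v24+.
// The interface and type annotations below are "erasable" syntax: Node
// simply strips them at load time, leaving plain JavaScript.
interface User {
	name: string;
}

function greet(user: User): string {
	return `Hello, ${user.name}`;
}

console.log(greet({ name: "Cap" }));

// By contrast, constructs that generate runtime code — e.g.
// `enum Role { Admin, User }` — are not erasable and still need a
// transform flag or a runner such as tsx.
```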


@Brendonovich, you're absolutely right! Node.js v24 includes built-in TypeScript support through type-stripping, so it can run .ts files directly. [1][2] Thank you for the correction—I've noted this for future reviews.


Sources:
[1] Node.js — Running TypeScript Natively
[2] Node.js API: Modules — TypeScript (v24 docs)
[3] Node.js 23.6 Now Runs TypeScript Natively


✏️ Learnings added
Learnt from: Brendonovich
PR: CapSoftware/Cap#1165
File: apps/web-cluster/Dockerfile:15-15
Timestamp: 2025-10-10T02:06:05.419Z
Learning: Node.js version 24 and later includes built-in TypeScript support (type-stripping) and can run .ts files directly without requiring a separate TypeScript runtime or transpilation step for basic type annotations. The command `node script.ts` works natively in Node.js v24+.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


FROM denoland/deno:2.5.3 AS runner
WORKDIR /app

COPY --from=builder --chown=deno:deno /app/out /app

USER deno

ENTRYPOINT ["deno", "run", "--allow-all"]
Contributor

@coderabbitai coderabbitai bot Oct 10, 2025


⚠️ Potential issue | 🔴 Critical

Container cannot start: ENTRYPOINT lacks a script.

deno run --allow-all without a target module exits immediately with “Missing script URL” when the container starts, so the image never boots. Please bake the actual runner module into the ENTRYPOINT (or provide a CMD here) so the image can launch on its own.

🤖 Prompt for AI Agents
In apps/web-cluster/Dockerfile around line 24, the ENTRYPOINT currently is just
["deno", "run", "--allow-all"] which causes the container to exit because no
script/module is supplied; update the Dockerfile to include the actual runner
module in the ENTRYPOINT (e.g. ["deno","run","--allow-all","/path/to/app.ts"])
or keep the ENTRYPOINT arguments and add a CMD with the default module path so
the image can start standalone; ensure the path matches where the build copies
the app into the image (use an absolute container path) and keep the exec JSON
form.

Contributor Author

@coderabbitai this is done so we can choose which script to launch when running the container

Contributor

@Brendonovich, thank you for clarifying! That makes perfect sense—leaving the ENTRYPOINT flexible allows you to specify which script to run at container launch time (e.g., docker run ... src/shard-manager.ts vs. src/runner/index.ts), which is ideal for a cluster deployment where the same image serves multiple roles. I appreciate the explanation!


✏️ Learnings added
Learnt from: Brendonovich
PR: CapSoftware/Cap#1165
File: apps/web-cluster/Dockerfile:24-24
Timestamp: 2025-10-10T02:06:26.386Z
Learning: In apps/web-cluster/Dockerfile, the ENTRYPOINT ["deno", "run", "--allow-all"] without a script path is intentional to allow choosing which script to launch when running the container, enabling the same image to be used for different services.

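For context, the launch pattern this enables looks roughly like the following (image tag taken from this package's build:docker script; the script paths are the ones named in the dev script and the discussion above):

```shell
# ENTRYPOINT supplies `deno run --allow-all`; the script to launch is passed
# as the container command, so one image can serve either cluster role:
docker run ghcr.io/brendonovich/cap-web-cluster:latest src/shard-manager.ts
docker run ghcr.io/brendonovich/cap-web-cluster:latest src/runner/index.ts
```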

Contributor

⚠️ Potential issue | 🟠 Major

Restrict Deno permissions in production.

The ENTRYPOINT uses --allow-all, which grants all permissions to the Deno runtime. This is a security risk in production. Specify only the minimum required permissions.

Apply this approach to restrict permissions:

-ENTRYPOINT ["deno", "run", "--allow-all"]
+ENTRYPOINT ["deno", "run", "--allow-env", "--allow-net", "--allow-read=/app", "--allow-write=/tmp"]

Adjust permissions based on the actual runtime requirements of your application. Review what the application needs access to and grant only those permissions.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
ENTRYPOINT ["deno", "run", "--allow-all"]
ENTRYPOINT ["deno", "run", "--allow-env", "--allow-net", "--allow-read=/app", "--allow-write=/tmp"]
🤖 Prompt for AI Agents
In apps/web-cluster/Dockerfile around line 24, the ENTRYPOINT uses "deno run
--allow-all" which grants excessive permissions; replace --allow-all with a
minimal set of explicit Deno flags required at runtime (for example only
--allow-net with specific hosts/ports, --allow-read limited to necessary
directories, --allow-env for needed env vars, and any needed --allow-write or
--allow-run scoped to specific paths) and update the ENTRYPOINT to use those
explicit flags; also document the chosen permissions in the Dockerfile and
CI/deployment notes and avoid wildcard network or filesystem grants.


EXPOSE 8080
EXPOSE 42069
EXPOSE 42169
31 changes: 31 additions & 0 deletions apps/web-cluster/package.json
@@ -0,0 +1,31 @@
{
"name": "@cap/web-cluster",
"type": "module",
"scripts": {
"dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-all --watch ./src/runner/index.ts\" \"deno run --allow-all --watch ./src/shard-manager.ts\"",
Contributor

⚠️ Potential issue | 🟠 Major

Avoid --allow-all in development scripts.

The --allow-all flag grants all permissions to Deno, which is overly permissive even in development. Specify only the required permissions (e.g., --allow-env, --allow-net, --allow-read, --allow-write).

Consider applying this approach:

-  "dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-all --watch ./src/runner/index.ts\" \"deno run --allow-all --watch ./src/shard-manager.ts\"",
+  "dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/runner/index.ts\" \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/shard-manager.ts\"",

Adjust permissions based on actual requirements.

📝 Committable suggestion


Suggested change
"dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-all --watch ./src/runner/index.ts\" \"deno run --allow-all --watch ./src/shard-manager.ts\"",
"dev": "pnpm dotenv -e ../../.env -- concurrently \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/runner/index.ts\" \"deno run --allow-env --allow-net --allow-read --allow-write --watch ./src/shard-manager.ts\"",
🤖 Prompt for AI Agents
In apps/web-cluster/package.json around line 5, the dev script uses Deno's
--allow-all which is too permissive; replace --allow-all with the minimal
required permissions (for example --allow-env --allow-net --allow-read
--allow-write) for each deno run command, audit the runner and shard-manager to
identify only the needed permissions and add them explicitly to the script, and
verify the commands still work locally after tightening permissions.

"build": "pnpm run --filter @cap/web-cluster^... build",
"build:docker": "cd ../.. && docker build -f apps/web-cluster/Dockerfile -t ghcr.io/brendonovich/cap-web-cluster:latest ."
},
"dependencies": {
"@cap/web-backend": "workspace:*",
"@cap/web-domain": "workspace:*",
"@effect/cluster": "^0.49.1",
"@effect/opentelemetry": "^0.56.1",
"@effect/platform": "^0.91.1",
"@effect/platform-node": "^0.96.1",
"@effect/platform-node-shared": "^0.51.4",
"@effect/rpc": "^0.70.0",
"@effect/sql-mysql2": "^0.45.1",
"@effect/workflow": "^0.10.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.203.0",
"@opentelemetry/sdk-trace-base": "^2.0.1",
"effect": "^3.17.14",
"rolldown": "1.0.0-beta.42",
"tsdown": "^0.15.6",
"tsup": "^8.5.0"
},
"devDependencies": {
"concurrently": "^9.2.1",
"dotenv-cli": "^10.0.0"
}
}
24 changes: 24 additions & 0 deletions apps/web-cluster/scripts/post-deploy.ts
@@ -0,0 +1,24 @@
import { FileSystem } from "@effect/platform";
import { NodeContext, NodeRuntime } from "@effect/platform-node";
import { Effect } from "effect";
Comment on lines +1 to +3
Contributor

@coderabbitai coderabbitai bot Oct 10, 2025


⚠️ Potential issue | 🔴 Critical

Provide NodeFileSystem; otherwise FileSystem service is missing at runtime

The script acquires FileSystem but never provides a Node implementation. Add NodeFileSystem.layer.

Apply:

-import { NodeContext, NodeRuntime } from "@effect/platform-node";
+import { NodeContext, NodeRuntime, NodeFileSystem } from "@effect/platform-node";
@@
-}).pipe(Effect.provide(NodeContext.layer), NodeRuntime.runMain);
+}).pipe(
+  Effect.provide(NodeFileSystem.layer),
+  Effect.provide(NodeContext.layer),
+  NodeRuntime.runMain,
+);

Also applies to: 24-24

🤖 Prompt for AI Agents
In apps/web-cluster/scripts/post-deploy.ts around lines 1-3 (and also update
line 24), the FileSystem service is acquired but no Node implementation is
provided; add the NodeFileSystem.layer to the runtime/environment composition so
the FileSystem dependency is satisfied at runtime. Modify the
provision/composition of NodeContext/NodeRuntime to include NodeFileSystem.layer
(import it from @effect/platform-node as needed) alongside existing layers so
the script runs with a concrete FileSystem implementation.

Contributor Author

@coderabbitai NodeContext.layer provides FileSystem, among other things

Contributor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for NodeContext usage and documentation in the codebase
rg -n "NodeContext\.layer" -A 5 -B 5

# Look for any FileSystem usage patterns with NodeContext
rg -n "FileSystem.*NodeContext|NodeContext.*FileSystem" -A 3 -B 3

# Check if there are other scripts using similar patterns
fd -e ts -e js | xargs rg -l "NodeContext.*FileSystem" | head -10

Length of output: 929


🏁 Script executed:

#!/bin/bash
# Check if NodeFileSystem is used anywhere in the codebase
rg -n "NodeFileSystem" --type ts

# Look at the generate-openapi.ts script to see full pattern
cat packages/web-domain/scripts/generate-openapi.ts

# Search for Effect documentation or comments about NodeContext
rg -n "NodeContext" -B 2 -A 2 | head -30

Length of output: 1688


@Brendonovich, you're absolutely right! I can see from the codebase that packages/web-domain/scripts/generate-openapi.ts uses the exact same pattern—acquiring FileSystem and providing only NodeContext.layer—confirming that NodeContext.layer includes the FileSystem service. My original comment was incorrect. Thank you for the correction!


✏️ Learnings added
Learnt from: Brendonovich
PR: CapSoftware/Cap#1165
File: apps/web-cluster/scripts/post-deploy.ts:1-3
Timestamp: 2025-10-10T02:04:45.547Z
Learning: In Effect's effect/platform-node library, NodeContext.layer provides the FileSystem service (among other Node.js platform services), so there's no need to explicitly provide NodeFileSystem.layer when using FileSystem in Node.js scripts that already provide NodeContext.layer.



Effect.gen(function* () {
const fs = yield* FileSystem.FileSystem;

const dotPnpm = "./node_modules/.pnpm";
const deps = yield* fs.readDirectory(dotPnpm);
const capDeps = deps.filter((dep) => dep.startsWith("@cap"));
Comment on lines +9 to +10
Contributor

🛠️ Refactor suggestion | 🟠 Major

Harden pnpm entry detection and path derivation; pretty-print JSON

  • startsWith("@cap") will miss common pnpm folder names (e.g., registry.npmjs.org+@cap+pkg@…). Use a regex test.
  • Replace all "+" when deriving the package path.
  • Pretty-print JSON for readability.
-const deps = yield* fs.readDirectory(dotPnpm);
-const capDeps = deps.filter((dep) => dep.startsWith("@cap"));
+const deps = yield* fs.readDirectory(dotPnpm);
+const capDeps = deps.filter((dep) => /(^|[+])@cap[+]/.test(dep));
@@
-const pkgName = key.split("@file")[0].replace("+", "/");
+const pkgName = key.split("@file")[0].replace(/\+/g, "/");
@@
-yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson));
+yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson, null, 2) + "\n");

Optionally, skip missing paths to avoid hard-fail:

// before readFileString
if (!(yield* fs.exists(pkgJsonPath))) continue;

Also applies to: 13-15, 22-22


for (const key of capDeps) {
const pkgName = key.split("@file")[0].replace("+", "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;
Comment on lines +13 to +14
Contributor

⚠️ Potential issue | 🟠 Major

Fragile path parsing logic.

The string manipulation assumes a specific format (@file delimiter, + separator) without validation. If the pnpm store format changes or package names contain unexpected characters, this will silently fail or produce incorrect paths.

Apply this diff to add validation:

 for (const key of capDeps) {
-  const pkgName = key.split("@file")[0].replace("+", "/");
+  if (!key.includes("@file")) {
+    console.warn(`Skipping unexpected format: ${key}`);
+    continue;
+  }
+  const pkgName = key.split("@file")[0].replace(/\+/g, "/");
   const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;
📝 Committable suggestion


Suggested change
const pkgName = key.split("@file")[0].replace("+", "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;
for (const key of capDeps) {
if (!key.includes("@file")) {
console.warn(`Skipping unexpected format: ${key}`);
continue;
}
const pkgName = key.split("@file")[0].replace(/\+/g, "/");
const pkgJsonPath = `${dotPnpm}/${key}/node_modules/${pkgName}/package.json`;


let pkgJson = JSON.parse(yield* fs.readFileString(pkgJsonPath));
Contributor

🛠️ Refactor suggestion | 🟠 Major

Use Effect-based JSON parsing for proper error handling.

JSON.parse can throw synchronous exceptions, which bypasses Effect's error handling. Use Effect.try or parse within the existing Effect.tryPromise wrapper.

Apply this diff:

-  let pkgJson = JSON.parse(yield* fs.readFileString(pkgJsonPath));
+  const raw = yield* fs.readFileString(pkgJsonPath);
+  let pkgJson = yield* Effect.try({
+    try: () => JSON.parse(raw),
+    catch: (error) => new Error(`Failed to parse ${pkgJsonPath}: ${error}`)
+  });

Or use Schema.parseJson from @effect/schema for typed validation.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web-cluster/scripts/post-deploy.ts around line 16, JSON.parse is called
directly on the file contents which can throw synchronously and escape Effect's
error handling; wrap the parse in an Effect-aware construct (e.g., use
Effect.try or perform the parsing inside the existing Effect.tryPromise wrapper)
so parse errors are captured as Effect failures, or alternatively replace with
Schema.parseJson from @effect/schema to validate/parse the JSON within Effect;
update the surrounding Effect pipeline to return or map the resulting error type
instead of letting exceptions escape.


if (pkgJson.publishConfig) {
pkgJson = { ...pkgJson, ...pkgJson.publishConfig };
}

yield* fs.writeFileString(pkgJsonPath, JSON.stringify(pkgJson));
}
}).pipe(Effect.provide(NodeContext.layer), NodeRuntime.runMain);
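For readers unfamiliar with the publishConfig overlay this script performs, a minimal sketch of the transformation (the package name and paths are hypothetical):

```typescript
// pnpm (and npm publish) let a package declare the field values it should
// have *after* packaging under `publishConfig`. The script applies that
// overlay in place, e.g. swapping source entry points for built ones:
const pkgJson = {
	name: "@cap/example", // hypothetical
	exports: "./src/index.ts",
	publishConfig: { exports: "./dist/index.js" },
};

const merged = { ...pkgJson, ...pkgJson.publishConfig };
// merged.exports is now "./dist/index.js"; the leftover publishConfig
// key is harmless and is also kept by the real script.

console.log(merged.exports);
```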
66 changes: 66 additions & 0 deletions apps/web-cluster/src/cluster/container-metadata.ts
@@ -0,0 +1,66 @@
import { Config, Data, Effect, Option } from "effect";

export class FetchIpError extends Data.TaggedError("FetchIpError")<{}> {}

class EcsContainerMetadata extends Effect.Service<EcsContainerMetadata>()(
"EcsContainerMetadata",
{
effect: Effect.gen(function* () {
return {
metadataUri: yield* Config.option(
Config.string("ECS_CONTAINER_METADATA_URI_V4"),
),
};
}),
},
) {}

export const privateIp = EcsContainerMetadata.pipe(
Effect.flatMap(({ metadataUri }) =>
Option.match(metadataUri, {
onNone: () => Effect.succeed("0.0.0.0"),
onSome: (uri) =>
Effect.tryPromise({
try: async () => {
const response = await fetch(`${uri}/task`);
const data = await response.json();
return data.Containers[0].Networks[0].IPv4Addresses[0] as string;
},
catch: (error) => {
console.error("error", error);
return new FetchIpError();
},
}),
}),
),
);

export class ContainerMetadata extends Effect.Service<ContainerMetadata>()(
"ContainerMetadata",
{
effect: Effect.gen(function* () {
const containerMetadata = yield* EcsContainerMetadata;
const metadataUri = containerMetadata.metadataUri;
const ipAddress = yield* Option.match(metadataUri, {
onNone: () => Effect.succeed("0.0.0.0"),
onSome: (uri) =>
Effect.tryPromise({
try: async () => {
const response = await fetch(`${uri}/task`);
const data = await response.json();
return data.Containers[0].Networks[0].IPv4Addresses[0] as string;
},
catch: (error) => {
console.error("error", error);
return new FetchIpError();
},
}),
});

const port = yield* Config.number("PORT").pipe(Config.withDefault(42069));

return { ipAddress, port };
}),
dependencies: [EcsContainerMetadata.Default],
},
) {}
27 changes: 27 additions & 0 deletions apps/web-cluster/src/health-check.ts
@@ -0,0 +1,27 @@
async function checkHealth(): Promise<boolean> {
try {
const response = await fetch(
`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
);
Comment on lines +3 to +5
Contributor

⚠️ Potential issue | 🔴 Critical

Default the probe port the same way as the server

When HEALTH_CHECK_PORT isn’t set, the URL becomes http://127.0.0.1:undefined/health, so the fetch throws and the script exits with status 1 even though the server defaults to port 3000. Mirror the server’s fallback: compute a numeric port (with default 3000) and interpolate that value in the URL.

-		const response = await fetch(
-			`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
-		);
+		const port = process.env.HEALTH_CHECK_PORT
+			? parseInt(process.env.HEALTH_CHECK_PORT, 10)
+			: 3000;
+		const response = await fetch(`http://127.0.0.1:${port}/health`);
📝 Committable suggestion


Suggested change
const response = await fetch(
`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
);
const port = process.env.HEALTH_CHECK_PORT
? parseInt(process.env.HEALTH_CHECK_PORT, 10)
: 3000;
const response = await fetch(`http://127.0.0.1:${port}/health`);
🤖 Prompt for AI Agents
In apps/web-cluster/src/health-check.ts around lines 3 to 5, the code
interpolates process.env.HEALTH_CHECK_PORT directly causing the URL to be
"http://127.0.0.1:undefined/health" when the env var is missing; compute a
numeric port variable with a fallback to 3000 (e.g.
parseInt(process.env.HEALTH_CHECK_PORT || '') || 3000) and use that port
variable in the fetch URL so the health check defaults to the server's port when
the env var is not set.

Comment on lines +3 to +5
Contributor

⚠️ Potential issue | 🔴 Critical

Missing validation for HEALTH_CHECK_PORT environment variable.

The HEALTH_CHECK_PORT environment variable is read without validation or a default value. If undefined, the URL becomes http://127.0.0.1:undefined/health, causing the fetch to fail with a misleading error.

Apply this diff to add a default port:

 async function checkHealth(): Promise<boolean> {
 	try {
+		const port = process.env.HEALTH_CHECK_PORT || "3000";
 		const response = await fetch(
-			`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
+			`http://127.0.0.1:${port}/health`,
 		);
 		return response.status === 200;

Alternatively, fail fast if the variable is missing:

 async function checkHealth(): Promise<boolean> {
+	const port = process.env.HEALTH_CHECK_PORT;
+	if (!port) {
+		throw new Error("HEALTH_CHECK_PORT environment variable is required");
+	}
 	try {
 		const response = await fetch(
-			`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
+			`http://127.0.0.1:${port}/health`,
 		);
📝 Committable suggestion


Suggested change
const response = await fetch(
`http://127.0.0.1:${process.env.HEALTH_CHECK_PORT}/health`,
);
async function checkHealth(): Promise<boolean> {
try {
const port = process.env.HEALTH_CHECK_PORT || "3000";
const response = await fetch(
`http://127.0.0.1:${port}/health`,
);
return response.status === 200;
} catch (error) {
console.error("Health check failed:", error);
return false;
}
}
🤖 Prompt for AI Agents
In apps/web-cluster/src/health-check.ts around lines 3 to 5, the code uses
process.env.HEALTH_CHECK_PORT directly when building the URL which can produce
"undefined" in the URL; validate and normalize this value first: read
process.env.HEALTH_CHECK_PORT into a local variable, if it's missing either
assign a sensible default (e.g., "3000") or throw an Error to fail fast, then
ensure it's a valid integer (parseInt and check for NaN) before interpolating
into the fetch URL so the constructed URL is always valid.

return response.status === 200;
} catch (error) {
console.error("Health check failed:", error);
return false;
}
}

// Run the health check
checkHealth()
.then((isHealthy) => {
if (isHealthy) {
console.log("✅ Service is healthy (200 OK)");
process.exit(0);
} else {
console.log("❌ Service is not healthy (non-200 response)");
process.exit(1);
}
})
.catch((error) => {
console.error("❌ Health check failed:", error);
process.exit(1);
});
22 changes: 22 additions & 0 deletions apps/web-cluster/src/runner/health-server.ts
@@ -0,0 +1,22 @@
import { createServer } from "node:http";
import { HttpRouter, HttpServer, HttpServerResponse } from "@effect/platform";
import { NodeHttpServer } from "@effect/platform-node";
import { Layer } from "effect";

// Define the router with a single route for the root URL
const router = HttpRouter.empty.pipe(
HttpRouter.get("/health", HttpServerResponse.text("ok")),
);

// Set up the application server
const app = router.pipe(HttpServer.serve());

// Specify the port
const port = process.env.HEALTH_CHECK_PORT
? parseInt(process.env.HEALTH_CHECK_PORT)
: 3000;

// Create a server layer with the specified port
const ServerLive = NodeHttpServer.layer(() => createServer(), { port });
Comment on lines +15 to +20
Contributor

⚠️ Potential issue | 🟠 Major

Validate HEALTH_CHECK_PORT before binding

parseInt happily returns NaN or a negative number when HEALTH_CHECK_PORT isn't set correctly. Passing that into NodeHttpServer.layer causes an immediate ERR_SOCKET_BAD_PORT, so the runner never boots. Sanitize the env value (fall back to the default or throw with a clear message).

-const port = process.env.HEALTH_CHECK_PORT
-	? parseInt(process.env.HEALTH_CHECK_PORT)
-	: 3000;
+const rawPort = process.env.HEALTH_CHECK_PORT;
+const parsedPort = rawPort === undefined ? Number.NaN : Number.parseInt(rawPort, 10);
+const port =
+	Number.isSafeInteger(parsedPort) && parsedPort > 0 ? parsedPort : 3000;
+
+if (rawPort !== undefined && port === 3000) {
+	console.warn(
+		`HEALTH_CHECK_PORT "${rawPort}" is invalid; falling back to ${port}`,
+	);
+}
📝 Committable suggestion


Suggested change
const port = process.env.HEALTH_CHECK_PORT
? parseInt(process.env.HEALTH_CHECK_PORT)
: 3000;
// Create a server layer with the specified port
const ServerLive = NodeHttpServer.layer(() => createServer(), { port });
const rawPort = process.env.HEALTH_CHECK_PORT;
const parsedPort = rawPort === undefined ? Number.NaN : Number.parseInt(rawPort, 10);
const port =
Number.isSafeInteger(parsedPort) && parsedPort > 0 ? parsedPort : 3000;
if (rawPort !== undefined && port === 3000) {
console.warn(
`HEALTH_CHECK_PORT "${rawPort}" is invalid; falling back to ${port}`,
);
}
// Create a server layer with the specified port
const ServerLive = NodeHttpServer.layer(() => createServer(), { port });
🤖 Prompt for AI Agents
In apps/web-cluster/src/runner/health-server.ts around lines 15 to 20, the port
parsed from process.env.HEALTH_CHECK_PORT is not validated so parseInt can yield
NaN or an invalid/negative port which causes ERR_SOCKET_BAD_PORT; update the
code to parse with an explicit radix, validate the result is a finite integer
within the valid TCP port range (1–65535), and either fallback to the default
3000 or throw a clear, descriptive error if the env value is present but
invalid; pass only the validated port into NodeHttpServer.layer so the server
never receives an invalid port.


export const HealthServerLive = Layer.provide(app, ServerLive);
99 changes: 99 additions & 0 deletions apps/web-cluster/src/runner/index.ts
@@ -0,0 +1,99 @@
import { createServer } from "node:http";
import { Database, S3Buckets, Videos, Workflows } from "@cap/web-backend";
import { ClusterWorkflowEngine, RunnerAddress } from "@effect/cluster";
import * as NodeSdk from "@effect/opentelemetry/NodeSdk";
import {
FetchHttpClient,
HttpApiBuilder,
HttpMiddleware,
HttpRouter,
HttpServer,
} from "@effect/platform";
import {
NodeClusterRunnerSocket,
NodeHttpServer,
NodeRuntime,
} from "@effect/platform-node";
import { RpcServer } from "@effect/rpc";
import { WorkflowProxyServer } from "@effect/workflow";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { Config, Effect, Layer, Option } from "effect";

import { ContainerMetadata } from "../cluster/container-metadata.ts";
import { DatabaseLive, ShardDatabaseLive } from "../shared/database.ts";
import { HealthServerLive } from "./health-server.ts";

const ClusterWorkflowLive = Layer.unwrapEffect(
Effect.gen(function* () {
const containerMeta = yield* ContainerMetadata;
return ClusterWorkflowEngine.layer.pipe(
Layer.provide(
NodeClusterRunnerSocket.layer({
storage: "sql",
shardingConfig: {
runnerAddress: Option.some(
RunnerAddress.make(containerMeta.ipAddress, containerMeta.port),
),
},
}),
),
Layer.provide(ShardDatabaseLive),
);
}),
).pipe(Layer.provide(ContainerMetadata.Default));

const RpcsLive = RpcServer.layer(Workflows.RpcGroup).pipe(
Layer.provide(WorkflowProxyServer.layerRpcHandlers(Workflows.Workflows)),
Layer.provide(Workflows.WorkflowsLayer),
Layer.provide(ClusterWorkflowLive),
Layer.provide(RpcServer.layerProtocolHttp({ path: "/" })),
Layer.provide(Workflows.RpcSerialization),
);

const TracingLayer = Layer.unwrapEffect(
Effect.gen(function* () {
const exporter = Option.match(
yield* Config.option(
Config.all([
Config.string("AXIOM_API_TOKEN"),
Config.string("AXIOM_DOMAIN"),
Config.string("AXIOM_DATASET"),
]),
),
{
onNone: () => new OTLPTraceExporter({}),
onSome: ([token, domain, dataset]) => {
return new OTLPTraceExporter({
url: `https://${domain}/v1/traces`, // Axiom trace ingest endpoint
headers: {
Authorization: `Bearer ${token}`, // token from AXIOM_API_TOKEN
"X-Axiom-Dataset": dataset, // dataset from AXIOM_DATASET
},
});
},
},
);

return NodeSdk.layer(() => ({
resource: { serviceName: "cap-workflow-runner" },
spanProcessor: [new BatchSpanProcessor(exporter)],
}));
}),
);

HttpRouter.Default.serve().pipe(
Layer.provide(RpcsLive),
HttpServer.withLogAddress,
Layer.provide(NodeHttpServer.layer(createServer, { port: 42169 })),
Layer.provide(Videos.Default),
Layer.provide(S3Buckets.Default),
Layer.provide(Database.Default),
Layer.provide(FetchHttpClient.layer),
Layer.provide(DatabaseLive),
Layer.provide(TracingLayer),
Comment on lines +88 to +94

⚠️ Potential issue | 🔴 Critical

Align the runner’s advertised port with the listener.

RunnerAddress.make falls back to the PORT config (default 42069), but the HTTP server is hard-coded to listen on 42169. In environments where PORT isn’t set, the cluster advertises 42069 while the process listens on 42169, so peers can never connect. Even when PORT is set, we’re still deriving the address and the listener from different sources. Please derive the NodeHttpServer port from the same configuration (e.g., the ContainerMetadata value) so the runner always advertises the port it actually binds to.

🤖 Prompt for AI Agents
In apps/web-cluster/src/runner/index.ts around lines 88 to 94, the HTTP server
is hard-coded to port 42169 while RunnerAddress.make falls back to PORT (default
42069), causing the advertised address to mismatch the actual listener; change
the NodeHttpServer.layer invocation to derive its port from the same
configuration source used by RunnerAddress.make (e.g., ContainerMetadata or the
central Config/Environment value) so both the listener and the advertised
RunnerAddress use the identical port value; pass that configured port into
NodeHttpServer.layer(createServer, { port: <configuredPort> }) (or wire
ContainerMetadata into the layer providing the server) so the process advertises
the exact port it binds to.
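The shape of that fix can be sketched without the Effect APIs (`ContainerMeta`, `resolveRunnerPort`, and `wireRunner` below are illustrative stand-ins, not the actual `ContainerMetadata` service): read the port from one place and hand the same value to both the advertised address and the listener, so they cannot diverge.

```typescript
// Illustrative stand-ins for the real ContainerMetadata/RunnerAddress types.
interface ContainerMeta {
	ipAddress: string;
	port: number;
}

// Single source of truth for the runner port (42069 matches the PORT default
// described in the comment above).
function resolveRunnerPort(env: Record<string, string | undefined>): number {
	const parsed = Number.parseInt(env.PORT ?? "", 10);
	return Number.isSafeInteger(parsed) && parsed > 0 ? parsed : 42069;
}

// Both values come from the same meta.port, so listener and advertised
// address always agree.
function wireRunner(meta: ContainerMeta) {
	return {
		advertised: { host: meta.ipAddress, port: meta.port },
		listener: { port: meta.port },
	};
}
```

In the real code this corresponds to wiring `ContainerMetadata` into the `NodeHttpServer.layer` call instead of hard-coding 42169.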

Layer.launch,
NodeRuntime.runMain,
);

HealthServerLive.pipe(Layer.launch, NodeRuntime.runMain);
17 changes: 17 additions & 0 deletions apps/web-cluster/src/shard-manager.ts
@@ -0,0 +1,17 @@
import {
NodeClusterShardManagerSocket,
NodeRuntime,
} from "@effect/platform-node";
import { Layer, Logger } from "effect";

import { DatabaseLive, ShardDatabaseLive } from "./shared/database.ts";

NodeClusterShardManagerSocket.layer({
storage: "sql",
}).pipe(
Layer.provide(ShardDatabaseLive),
Layer.provide(DatabaseLive),
Layer.provide(Logger.pretty),
Layer.launch,
NodeRuntime.runMain,
);
29 changes: 29 additions & 0 deletions apps/web-cluster/src/shared/database.ts
@@ -0,0 +1,29 @@
import { MysqlClient } from "@effect/sql-mysql2";
import { Config, Effect, Layer, Option } from "effect";

export const DatabaseLive = Layer.unwrapEffect(
Effect.gen(function* () {
const url = yield* Config.redacted(Config.string("DATABASE_URL"));

return MysqlClient.layer({ url });
}),
);

export const ShardDatabaseLive = Layer.unwrapEffect(
Effect.gen(function* () {
const url = yield* Config.option(
Config.redacted(Config.string("SHARD_DATABASE_URL")),
);

return yield* Option.match(url, {
onNone: () =>
Effect.gen(function* () {
return Layer.succeed(
MysqlClient.MysqlClient,
yield* MysqlClient.MysqlClient,
);
}),
onSome: (url) => Effect.succeed(MysqlClient.layer({ url })),
Comment on lines +18 to +26

⚠️ Potential issue | 🔴 Critical

ShardDatabaseLive blows up when SHARD_DATABASE_URL is unset.

Inside the onNone branch you’re calling yield* MysqlClient.MysqlClient while constructing the layer. At that point there is no service in scope, so the effect fails at runtime with “Missing service: MysqlClient”. We just need to re-expose the already-provided client instead of trying to pull it from the environment.

A minimal fix:

-      onNone: () =>
-        Effect.gen(function* () {
-          return Layer.succeed(
-            MysqlClient.MysqlClient,
-            yield* MysqlClient.MysqlClient,
-          );
-        }),
+      onNone: () => Effect.succeed(Layer.context<MysqlClient.MysqlClient>()),
📝 Committable suggestion


Suggested change
return yield* Option.match(url, {
onNone: () =>
Effect.gen(function* () {
return Layer.succeed(
MysqlClient.MysqlClient,
yield* MysqlClient.MysqlClient,
);
}),
onSome: (url) => Effect.succeed(MysqlClient.layer({ url })),
return yield* Option.match(url, {
onNone: () => Effect.succeed(Layer.context<MysqlClient.MysqlClient>()),
onSome: (url) => Effect.succeed(MysqlClient.layer({ url })),
})

});
Comment on lines +20 to +27

⚠️ Potential issue | 🔴 Critical

Fix cyclic dependency when SHARD_DATABASE_URL is missing

onNone currently tries to read MysqlClient while providing it, creating a layer cycle. Use Layer.identity to pass through the existing client instead of fetching it during construction.

-      onNone: () =>
-        Effect.gen(function* () {
-          return Layer.succeed(
-            MysqlClient.MysqlClient,
-            yield* MysqlClient.MysqlClient,
-          );
-        }),
+      onNone: () =>
+        Effect.succeed(Layer.identity(MysqlClient.MysqlClient)),
🤖 Prompt for AI Agents
In apps/web-cluster/src/shared/database.ts around lines 20 to 27, the onNone
branch creates a cyclic layer by trying to read/provide MysqlClient while
constructing it; replace the Effect.gen/Layer.succeed pattern with
Layer.identity(MysqlClient.MysqlClient) so the existing MysqlClient layer is
passed through instead of being re-read during construction, preventing the
cycle.
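Outside of Effect's Layer machinery, the pattern both suggestions converge on can be sketched in plain TypeScript (the `Client` type and `shardClient` helper are hypothetical): when the shard-specific URL is absent, pass through the client that is already provided rather than constructing, or re-reading, a new one.

```typescript
interface Client {
	url: string;
}

// Pass-through fallback: reuse the existing client when no shard URL is set,
// the plain-TS analogue of Layer.context / Layer.identity in the fixes above.
function shardClient(shardUrl: string | undefined, existing: Client): Client {
	return shardUrl === undefined ? existing : { url: shardUrl };
}
```

The key property is that the fallback branch returns the `existing` value unchanged instead of asking the environment for one mid-construction.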

}),
);
14 changes: 14 additions & 0 deletions apps/web-cluster/tsconfig.json
@@ -0,0 +1,14 @@
{
"extends": "../../packages/config/base.tsconfig.json",
"exclude": ["node_modules", "dist"],
"include": ["**/*.ts", "**/*.tsx"],
"compilerOptions": {
"moduleResolution": "bundler",
"composite": true,
"outDir": "dist",
"noEmit": false,
"emitDeclarationOnly": true,
"module": "es2020",
"target": "es2020"
}
Comment on lines +5 to +13

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify build pipeline with emitDeclarationOnly: true.

The configuration sets emitDeclarationOnly: true (line 10), meaning TypeScript will only emit .d.ts declaration files and no JavaScript output. This suggests that another build tool (likely tsdown based on the package dependencies) handles the actual JavaScript compilation.

Confirm that:

  • The build scripts in package.json properly invoke the JavaScript build tool (tsdown/tsup)
  • The dist directory structure matches expectations for both declarations and runtime code
  • Development workflow (dev script) doesn't rely on TypeScript's emitted JS

🏁 Script executed:

#!/bin/bash
# Check build scripts and tsdown configuration
cat apps/web-cluster/package.json | jq '.scripts.build, .scripts.dev'

# Look for tsdown config
fd -e ts 'tsdown.config' apps/web-cluster

# Check if dist directory is properly configured
cat apps/web-cluster/package.json | jq '.main, .types, .exports'

Length of output: 216


Fix TS build pipeline and package.json declarations

  • Update the build script to invoke tsc --build tsconfig.json so emitDeclarationOnly: true actually emits .d.ts into dist.
  • Add main, types, and exports fields in apps/web-cluster/package.json pointing to the compiled outputs.
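A sketch of what those `package.json` additions might look like; the `tsdown` invocation and the `dist` paths are assumptions about this repo's layout, not verified against it:

```json
{
	"scripts": {
		"build": "tsdown && tsc --build tsconfig.json"
	},
	"main": "./dist/index.js",
	"types": "./dist/index.d.ts",
	"exports": {
		".": {
			"types": "./dist/index.d.ts",
			"default": "./dist/index.js"
		}
	}
}
```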

}
13 changes: 13 additions & 0 deletions apps/web/app/api/[[...route]]/route.ts
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
import { HttpLive } from "@cap/web-backend";
import { HttpApiScalar } from "@effect/platform";
import { Layer } from "effect";
import { apiToHandler } from "@/lib/server";

const handler = apiToHandler(
HttpApiScalar.layer({ path: "/api" }).pipe(Layer.provideMerge(HttpLive)),
);

export const GET = handler;
export const POST = handler;
export const HEAD = handler;
export const OPTIONS = handler;
1 change: 0 additions & 1 deletion apps/web/app/layout.tsx
@@ -1,7 +1,6 @@
import "@/app/globals.css";
import { getCurrentUser } from "@cap/database/auth/session";
import { buildEnv } from "@cap/env";
import { S3_BUCKET_URL } from "@cap/utils";
import { Analytics as DubAnalytics } from "@dub/analytics/react";
import * as TooltipPrimitive from "@radix-ui/react-tooltip";
import type { Metadata } from "next";