Conversation
… improved performance and timeout handling
📝 Walkthrough

Adds containerization and Kubernetes deployment, a production Express server with Gmail OAuth and ZK proof orchestration, input generation and caching for Noir circuits, build/prove automation scripts, testing config, and extensive README and deployment artifacts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Server as Server (Express)
    participant Gmail as Gmail API
    participant Circuit as Circuit Storage/Compiler
    participant Prover as Nargo/BB (Prover)
    User->>Server: GET /gmail/auth
    Server-->>User: Redirect to Gmail consent
    User->>Gmail: Consent & authenticate
    Gmail-->>Server: OAuth callback (code)
    Server->>Gmail: Exchange code for tokens
    Gmail-->>Server: Access token
    Server-->>User: callbackTemplate (auth complete)
    User->>Server: POST /gmail/fetch-email (query + token)
    Server->>Gmail: Fetch email by query
    Gmail-->>Server: Raw email
    Server->>Circuit: Download/unzip circuit
    Server->>Circuit: Compile (nargo)
    Server->>Server: InputsGenerator -> build inputs
    Server->>Prover: nargo execute (witness)
    Server->>Prover: bb prove (binary)
    Server->>Prover: bb prove (fields)
    Prover-->>Server: Proof + public inputs
    Server-->>User: Proof result + redirect to claim
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 10
🧹 Nitpick comments (12)
.dockerignore (1)
42-45: Optional: Docker self-exclusion is redundant. Excluding the Dockerfile, docker-compose files, and .dockerignore itself is unnecessary because the Dockerfile only copies specific paths (package.json, tsconfig.json, src/, .cache/). This doesn't affect the build, but the entries could be dropped for simplicity.
compile.sh (1)
24-33: Consider adding command existence checks. The version checks assume `nargo` and `bb` are installed. If they're not, the script will attempt to run `noirup`/`bbup`, which might also fail. Consider adding explicit checks for command existence before the version checks. Apply this diff to add existence checks:

```diff
 log "Step 0: Checking Noir and BB versions"
+
+if ! command -v nargo &> /dev/null; then
+  log "nargo not found, installing version $NARGO_VERSION"
+  noirup --version $NARGO_VERSION >> "$LOG_FILE" 2>&1
+elif [ "$(nargo --version | grep "nargo version = $NARGO_VERSION")" != "nargo version = $NARGO_VERSION" ]; then
-if [ "$(nargo --version | grep "nargo version = $NARGO_VERSION")" != "nargo version = $NARGO_VERSION" ]; then
   log "Noir version is not $NARGO_VERSION, running noirup --version $NARGO_VERSION"
   noirup --version $NARGO_VERSION >> "$LOG_FILE" 2>&1
 fi

+if ! command -v bb &> /dev/null; then
+  log "bb not found, installing version $BB_VERSION"
+  bbup --version $BB_VERSION >> "$LOG_FILE" 2>&1
+elif [ "$(bb --version)" != "v$BB_VERSION" ]; then
-if [ "$(bb --version)" != "v$BB_VERSION" ]; then
   log "BB version is not $BB_VERSION, running bbup --version $BB_VERSION"
   bbup --version $BB_VERSION >> "$LOG_FILE" 2>&1
 fi
```

Dockerfile (1)
69-69: Document the high memory requirement. The 16GB heap size allocation is appropriate for ZK proof generation but should be documented in the README or deployment docs as a minimum system requirement. Ensure host systems have sufficient RAM (likely 20-24GB total to account for OS overhead).
k8s/00-backend-secrets.template.yaml (1)
1-34: Secret template looks good; consider a small guard against accidental apply. Nice use of a `.template` with `stringData` and clear docs. To further reduce operator error, you might add a loud comment near line 32 reminding not to apply this file with placeholder values (or require a non-placeholder namespace/name suffix like `-example`) so misconfigured secrets fail obviously at startup instead of during Gmail auth.

docker-compose.yml (1)
13-21: `deploy.resources` won't limit resources in plain `docker-compose up`. The `deploy.resources` block is only honored in Swarm mode, so for normal `docker-compose up` these CPU/memory settings are effectively no-ops. If the intent is local resource limiting, consider documenting that this stack is Swarm-oriented, adding compose-level constraints (e.g., `mem_limit`, `cpus`), or adding a short note in the README so expectations match behavior.

src/callbackTemplate.html (1)
246-253: Consider making the claim-page redirect URL configurable.

`window.location.href = 'https://pay.zk.email/claim?proofId=' + proofId;` hard-codes the production domain, which makes staging, self-hosting, or changing the frontend origin harder. You might inject this as a template variable or derive it from a data-attribute/query param so environments can override the base URL without editing this file; a sketch follows below.
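For instance, the server could substitute an environment-driven base URL into the template before serving it. A minimal sketch, assuming a hypothetical `CLAIM_BASE_URL` env var and a `{{CLAIM_BASE_URL}}` placeholder added to the HTML:

```ts
import * as fs from "fs";

// Sketch: inject the claim base URL at startup instead of hard-coding it in the template.
// Assumes callbackTemplate.html is edited to contain a {{CLAIM_BASE_URL}} placeholder.
const CLAIM_BASE_URL = process.env.CLAIM_BASE_URL ?? "https://pay.zk.email";

const callbackTemplate = fs
  .readFileSync(new URL("./callbackTemplate.html", import.meta.url), "utf-8")
  .replaceAll("{{CLAIM_BASE_URL}}", CLAIM_BASE_URL);

// The template's redirect then becomes:
//   window.location.href = '{{CLAIM_BASE_URL}}/claim?proofId=' + proofId;
```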
156-183: `downloadFile` should handle non-200 responses and clean up more defensively.

Right now `downloadFile` treats any non-redirect response as success and just pipes it to disk; a 4xx/5xx HTML error page would be "downloaded" and only fail later at unzip/compile. Also, on the `.on("error")` path you call `fs.unlinkSync(destPath)` unguarded, which can throw if the file was never created/opened.

Suggested robustness tweaks:

- Check `response.statusCode` and `reject` on non-2xx (after following redirects).
- Wrap `unlinkSync` in a try/catch or use `fs.rm(destPath, { force: true }, ...)` so cleanup doesn't mask the original error.

Not critical, but it will produce clearer failures when something is wrong with the circuit download.
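A rough sketch of that shape (illustrative only, not the PR's implementation; it assumes Node's built-in `https` and a single redirect hop):

```ts
import * as fs from "fs";
import * as https from "https";

// Sketch: download with an explicit status-code check and guarded cleanup.
function downloadFile(url: string, destPath: string): Promise<void> {
  return new Promise((resolve, reject) => {
    https
      .get(url, (response) => {
        // Follow one redirect hop (simplified for illustration).
        if (
          response.statusCode &&
          response.statusCode >= 300 &&
          response.statusCode < 400 &&
          response.headers.location
        ) {
          return resolve(downloadFile(response.headers.location, destPath));
        }
        if (!response.statusCode || response.statusCode < 200 || response.statusCode >= 300) {
          return reject(new Error(`Download failed with status ${response.statusCode}`));
        }
        const file = fs.createWriteStream(destPath);
        response.pipe(file);
        file.on("finish", () => file.close(() => resolve()));
        file.on("error", (err) => {
          // force: true means cleanup never throws if the file was never created.
          fs.rm(destPath, { force: true }, () => reject(err));
        });
      })
      .on("error", (err) => {
        fs.rm(destPath, { force: true }, () => reject(err));
      });
  });
}
```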
185-239: Concurrent `prepareCircuit` calls for the same blueprint can race.

`prepareCircuit` uses a `.compiled` marker to skip work, but there's no synchronization when multiple requests for the same `blueprintId` arrive while the circuit is still being downloaded/compiled. Two (or more) processes could try to `downloadFile`, `unzip`, and `nargo compile` into the same directory concurrently, with undefined results.

If you expect concurrent traffic per blueprint, consider:

- A per-blueprint mutex (in-process), and/or
- A simple file lock / "compiling" marker you check and wait/poll on before starting another compilation.

This is more of a robustness/throughput improvement than a correctness bug in low-concurrency scenarios; a sketch of the in-process option follows.
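For the in-process option, memoizing the in-flight promise per blueprint is usually enough. A sketch (the `prepareCircuit` signature below is assumed, and this only covers a single process, not multiple replicas):

```ts
// Sketch: serialize prepareCircuit per blueprintId by memoizing the in-flight promise.
const inFlight = new Map<string, Promise<string>>();

async function prepareCircuitOnce(blueprintId: string): Promise<string> {
  const existing = inFlight.get(blueprintId);
  if (existing) return existing;

  const task = prepareCircuit(blueprintId) // the existing download/unzip/compile routine
    .finally(() => inFlight.delete(blueprintId));

  inFlight.set(blueprintId, task);
  return task;
}

// Hypothetical signature of the existing routine, declared here only so the sketch compiles.
declare function prepareCircuit(blueprintId: string): Promise<string>;
```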
241-341: Proof pipeline is solid; just note the reliance on external CLIs and long-running sync filesystem work.

The proof flow (prepare circuit → copy to working dir → generate inputs → write Prover.toml → `nargo execute` + `bb prove` → parse JSON → cleanup) is coherent and well-logged. A couple of operational notes:

- The whole path relies on `nargo`, `unzip`, and `bb` being available in `PATH` inside the container; a brief README/Dockerfile comment calling that out will save debugging time if someone runs this outside your image.
- You're using `fs.writeFileSync`/`fs.rmSync` in the hot path; for single-process workloads that's acceptable, but if this ever serves many concurrent proofs you may want to move to async fs to avoid blocking the event loop on large I/O bursts.

Functionally it looks correct; these are mostly maintainability/operational considerations, and a sketch of the async variant follows below.
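If that ever becomes necessary, the swap is mechanical. A sketch using `fs/promises` (function and file names here are illustrative, not the PR's):

```ts
import { writeFile, rm } from "fs/promises";
import * as path from "path";

// Sketch: non-blocking equivalents of the sync calls in the proving hot path.
async function writeProverToml(workingDir: string, toml: string): Promise<void> {
  await writeFile(path.join(workingDir, "Prover.toml"), toml, "utf-8");
}

async function cleanupWorkingDir(workingDir: string): Promise<void> {
  // recursive + force mirrors fs.rmSync(dir, { recursive: true, force: true })
  await rm(workingDir, { recursive: true, force: true });
}
```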
429-453: Top-level Gmail OAuth client assumes env vars are set; consider an early hard-fail for misconfiguration.

`oauth2Client` is initialized at module load with `GMAIL_CLIENT_ID`, `GMAIL_CLIENT_SECRET`, and `GMAIL_REDIRECT_URI`, even if the first two are missing (you only log a warning earlier). In that misconfigured state, `/gmail/auth` will still generate an auth URL and the flow will fail later in confusing ways.

A small hardening would be:

- If either the client ID or secret is missing, have `/gmail/auth` immediately return a 500 with a clear "server misconfigured" message instead of redirecting, or don't register the Gmail routes at all unless the config is valid (see the sketch below).

This keeps operator mistakes obvious and avoids user-facing OAuth errors that are hard to decode.
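A sketch of that guard (env var names from the comment above; the middleware name and wiring are hypothetical):

```ts
import type { Request, Response, NextFunction } from "express";

// Sketch: refuse Gmail routes outright when OAuth credentials are missing,
// instead of letting users start a flow that fails later.
const gmailConfigured = Boolean(
  process.env.GMAIL_CLIENT_ID && process.env.GMAIL_CLIENT_SECRET
);

function requireGmailConfig(req: Request, res: Response, next: NextFunction) {
  if (!gmailConfigured) {
    return res
      .status(500)
      .json({ error: "Server misconfigured: Gmail OAuth credentials are not set" });
  }
  next();
}

// Usage: app.get("/gmail/auth", requireGmailConfig, authHandler);
```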
588-672: `/gmail/fetch-email` and `/proof/:id` behavior aligns, but expand the docs around what's exposed.

The `POST /gmail/fetch-email` endpoint and the stored result both expose:

- Full raw email content,
- Proof fields, and
- Optional handle,

and `/proof/:id` later returns that entire object for any caller with the UUID. Functionally this is consistent and works, but it's worth clearly documenting:

- That clients should treat `proofId` as a secret bearer token (a capability URL).
- That the server is intentionally returning raw email, and what the retention policy is (or is not).

No code change is strictly required, but calling this out in API docs/README will help integrators handle these artifacts safely.
1-16: Top-level `await import(...)` polyfills assume ESM; double-check your build/Node targets.

The Web Worker and IndexedDB polyfills use top-level `await import("web-worker")` / `await import("fake-indexeddb")`. That's fine if this is compiled/executed as an ES module on a recent Node, but it will break in CommonJS or older environments.

Just make sure:

- `tsconfig` targets an ESM output where top-level await is supported, and
- The runtime (Node in your Docker image) is configured for ESM (e.g., `"type": "module"` or an `.mjs` entry).

If you ever need CJS compatibility, you'd have to move these into an async init or lazy-load them where first used.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (40)
- `.cache/blueprints/32b10ec7-88ff-4a23-9e3d-49fd0fe2874c.json` (excluded by `!**/.cache/**`)
- `.cache/circuits/32b10ec7-88ff-4a23-9e3d-49fd0fe2874c/`: `.compiled`, `Nargo.toml`, `Prover.toml`, `compile.log`, `compile.sh`, `prove.log`, `prove.sh`, `src/handle_regex.nr`, `src/main.nr`, `src/sender_domain_regex.nr`, `target/DiscordHonkVerifier.sol`, `target/circuit.gz`, `target/circuit.json`, `target/proof`, `target/proof_fields.json`, `target/public_inputs`, `target/public_inputs_fields.json`, `target/vk` (excluded by `!**/.cache/**`; logs also by `!**/*.log`, archives by `!**/*.gz`)
- `.cache/circuits/working-eadb5fb3-df02-47f5-b3b4-7cc08a43f98a/`: `Nargo.toml`, `Prover.toml`, `compile.log`, `compile.sh`, `prove.log`, `prove.sh`, `src/handle_regex.nr`, `src/main.nr`, `src/sender_domain_regex.nr`, `target/DiscordHonkVerifier.sol`, `target/circuit.gz`, `target/circuit.json`, `target/proof`, `target/proof_fields.json`, `target/public_inputs`, `target/public_inputs_fields.json`, `target/vk` (excluded by `!**/.cache/**`; logs also by `!**/*.log`, archives by `!**/*.gz`)
- `.cache/proofs/7631abf3-b0f2-472c-bca2-3e8b7083dddd.json` and `.cache/proofs/e3e3b919-9705-4c92-b1c4-5dac534ba603.json` (excluded by `!**/.cache/**`)
- `bun.lock` (excluded by `!**/*.lock`)
- `package-lock.json` (excluded by `!**/package-lock.json`)
📒 Files selected for processing (19)
- `.dockerignore` (1 hunks)
- `.gitignore` (1 hunks)
- `Dockerfile` (1 hunks)
- `README.md` (1 hunks)
- `compile.sh` (1 hunks)
- `docker-compose.yml` (1 hunks)
- `index.ts` (0 hunks)
- `jest.config.ts` (1 hunks)
- `k8s/00-backend-secrets.template.yaml` (1 hunks)
- `k8s/00-namespace.yaml` (1 hunks)
- `k8s/01-backend.yaml` (1 hunks)
- `k8s/02-cert.yaml` (1 hunks)
- `k8s/03-ingress.yaml` (1 hunks)
- `k8s/04-backend-config.yaml` (1 hunks)
- `package.json` (2 hunks)
- `prove.sh` (1 hunks)
- `src/InputsGenerator.ts` (1 hunks)
- `src/callbackTemplate.html` (1 hunks)
- `src/server.ts` (1 hunks)
💤 Files with no reviewable changes (1)
- index.ts
🧰 Additional context used
🧬 Code graph analysis (3)
compile.sh (1)
- prove.sh (1): `log` (16-18)

src/server.ts (1)
- src/InputsGenerator.ts (1): `InputsGenerator` (33-194)

prove.sh (1)
- compile.sh (1): `log` (18-20)
🪛 Checkov (3.2.334)
k8s/01-backend.yaml
[medium] 1-81: Minimize the admission of root containers
(CKV_K8S_23)
🪛 markdownlint-cli2 (0.18.1)
README.md
26-26: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
74-74: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
97-97: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
203-203: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (7)
.gitignore (1)
37-39: LGTM! Good security practice. The ignore pattern for Kubernetes secrets with explicit unignore for template files follows security best practices by preventing accidental commits of sensitive credentials while preserving template documentation.
compile.sh (1)
35-54: LGTM! Compilation workflow is well-structured. The three-step compilation process (compile circuit → write VK → write Solidity verifier) with proper error handling and logging follows best practices for Noir circuit compilation.
jest.config.ts (1)
1-10: LGTM! Test configuration is appropriate for ZK proof workflows. The 10-minute test timeout is reasonable given that ZK proof generation can take several minutes, as noted in the README documentation.
prove.sh (2)
22-78: LGTM! Proof generation workflow is well-structured. The four-step proof generation process with proper error handling, temporary directory management, and comprehensive logging follows best practices. The cleanup of temporary files ensures no artifacts are left behind.
1-1: Fix shebang format. Same issue as in compile.sh: the shebang has a space after `#!`. The standard format is `#!/bin/bash` without a space. Apply this diff:

```diff
-#!/bin/bash
+#!/bin/bash
```

Likely an incorrect or invalid review comment.
package.json (1)
16-32: All packages verified successfully; no security issues detected. The npm registry confirms that all specified dependencies exist and are installable, including the beta and nightly versions. The security audit found no vulnerabilities. The dependencies in package.json (lines 16-32) are valid.
63-63: No action needed: the .cache/ directory is intentionally required at runtime. The `.cache/` directory contains precompiled Noir ZK circuits that are explicitly loaded by the application. Evidence confirms this is intentional:

- Referenced directly in `src/server.ts` (lines 35, 38) for circuit and proof storage
- Intentionally tracked in git and not excluded from `.dockerignore`
- Application depends on `@noir-lang/noir_js` and `@zk-email/sdk` for ZK operations
- Precompiling circuits avoids expensive runtime compilation
The 106MB size is a deliberate trade-off for performance. No changes required.
```dockerfile
# Install dependencies
RUN npm install

# Copy source code
COPY src ./src

# Install production dependencies + tsx for running TypeScript
RUN npm install --omit=dev && \
    npm install tsx && \
    npm cache clean --force
```
🛠️ Refactor suggestion | 🟠 Major
Refactor npm install sequence for efficiency.
Dependencies are installed twice: first all dependencies (line 53), then production-only dependencies (line 59). This doubles install time and creates unnecessary layers.
Apply this diff to optimize:
```diff
-# Install dependencies
-RUN npm install
-
 # Copy source code
 COPY src ./src

 # Install production dependencies + tsx for running TypeScript
-RUN npm install --omit=dev && \
-    npm install tsx && \
+RUN npm ci --omit=dev && \
+    npm install --no-save tsx && \
     npm cache clean --force
```

Note: Using `npm ci` is faster and more reliable for production builds. If `tsx` is needed at runtime, consider adding it to production dependencies in package.json instead of installing it separately.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```dockerfile
# Copy source code
COPY src ./src

# Install production dependencies + tsx for running TypeScript
RUN npm ci --omit=dev && \
    npm install --no-save tsx && \
    npm cache clean --force
```
```yaml
# Security context - running as root for file permissions
securityContext:
  runAsNonRoot: false
  runAsUser: 0
  runAsGroup: 0
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: false
  capabilities:
    drop:
      - ALL
```
🧩 Analysis chain
Running container as root is a notable hardening gap
The pod securityContext explicitly runs the container as root (runAsNonRoot: false, runAsUser: 0), which is what Checkov’s CKV_K8S_23 is flagging. Even with allowPrivilegeEscalation: false and all capabilities dropped, a root user inside the container increases blast radius if an RCE or breakout is found.
If possible, consider:
- Creating a dedicated non-root user at image build time (e.g., UID 1000) and
- Fixing file permissions/ownership in the Dockerfile so the process can run with `runAsNonRoot: true`, `runAsUser: 1000`, `runAsGroup: 1000`.
That would align better with common pod security baselines while keeping your current probes and resource settings unchanged.
I'll verify the security concern and Checkov rule details.Now I need to examine the actual Kubernetes manifest to understand why root is needed and assess the context.
---
🏁 Script executed:
```shell
#!/bin/bash
# Inspect k8s/01-backend.yaml for context around root requirement
cat k8s/01-backend.yaml
```

Length of output: 2594
Let me check if there's a local Dockerfile or any build configuration that explains the root requirement:
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for Dockerfile related to noir-prover
fd -i dockerfile | head -20
```

Length of output: 10
Now let me examine the Dockerfile:
---
🏁 Script executed:
```shell
#!/bin/bash
# Examine the Dockerfile for root user setup and file operations
cat Dockerfile
```

Length of output: 1710
Running container as root requires Dockerfile refactoring plus manifest changes
The concern is valid: the container runs as root (UID 0 in k8s manifest, no USER directive in Dockerfile). While your k8s securityContext already hardens this with dropped capabilities and disabled privilege escalation, moving to a non-root user aligns with CKV_K8S_23 and reduces blast radius on container escape.
To fix this:
- Dockerfile: Add a non-root user and ensure `/app` and dependency paths are accessible:
  - Add `RUN useradd -u 1000 -m noir-app` before `WORKDIR /app`
  - Change `WORKDIR /app` ownership or use `USER noir-app`
  - Verify tool paths (Noir, Barretenberg, Rust in `/root/`) are readable by the new user or moved to shared locations
- k8s manifest (lines 71–80): Change `runAsNonRoot: false` → `true` and `runAsUser: 0` → `1000` once the image is updated.
This requires rebuilding the image; the current recommendation is feasible for a Node.js application on a non-privileged port.
🤖 Prompt for AI Agents
In k8s/01-backend.yaml around lines 71–80 the pod securityContext currently runs
the container as root (runAsNonRoot: false, runAsUser: 0); update the image and
manifest so the container runs as a non-root user: modify your Dockerfile to
create a non-root user (e.g., useradd -u 1000 noir-app), ensure WORKDIR /app and
any dependency/tool paths (Noir, Barretenberg, Rust artifacts) are owned or
readable by that user (chown/chmod as needed), switch to USER noir-app before
final image, rebuild the image, and then change the manifest to runAsNonRoot:
true and runAsUser: 1000 (confirm the non-root user can access required binaries
and ports and adjust file locations or permissions if necessary).
```yaml
  annotations:
    kubernetes.io/ingress.global-static-ip-name: noir-prover-ip
    networking.gke.io/managed-certificates: noir-prover-cert
    ingressClassName: "gce"
    # Increase timeout for long-running ZK proof operations (3600 seconds = 1 hour)
    cloud.google.com/backend-config: "noir-prover-backend-config"
spec:
```
Correct ingress class & backend-config annotations.
ingressClassName must live under spec, not under metadata.annotations, otherwise it’s ignored. Also, GKE expects cloud.google.com/backend-config to contain a JSON mapping (for example {"default": "noir-prover-backend-config"}); providing just the bare name means the controller never associates the BackendConfig, so your 2‑hour timeout will never take effect. Please move ingressClassName into spec and wrap the BackendConfig reference in the required JSON structure. (cloud.google.com)
🤖 Prompt for AI Agents
In k8s/03-ingress.yaml around lines 6 to 12, the ingressClassName is incorrectly
placed in metadata.annotations and the cloud.google.com/backend-config
annotation uses a bare name instead of the required JSON mapping; move
ingressClassName under spec (e.g., spec.ingressClassName: "gce") and change the
backend-config annotation to a JSON object mapping the default backend to your
BackendConfig (e.g., {"default":"noir-prover-backend-config"}) so GKE will
recognize and apply the BackendConfig and the ingress class.
```diff
 {
   "name": "noirprover",
-  "module": "index.ts",
+  "module": "src",
```
🧩 Analysis chain
Verify the module field points to a valid entry.
The module field changed from "index.ts" to "src". Typically, this field should point to a specific entry file (e.g., "src/index.ts" or "src/server.ts"), not a directory. Verify this works correctly with Bun's module resolution.
I'll verify the review comment by checking the actual package.json file and Bun's module field documentation.
---
🏁 Script executed:
```shell
#!/bin/bash
# Read package.json to verify module field
fd -e package.json | head -1 | xargs cat
```

Length of output: 0
🌐 Web query:
Bun package.json module field directory entry point
💡 Result:
Short answer: Bun resolves a package's entrypoint in this order — exports → module → main. If the chosen field points at a directory, standard package-directory resolution applies (look for package.json in that directory for its entry, otherwise fall back to an index file like index.js / index.ts). Sources: Bun docs (module-resolution and bun init) and Node package-resolution docs. [1][2][3]
References
- Bun — Module resolution: exports → module → main, and the special "bun" condition. [1]
- bun init: bun uses package.json module/main to pick a project entrypoint. [2]
- Node.js packages (package.json entry/ directory/index resolution behavior). [3]
The script didn't return output. Let me try a different approach to locate and verify the package.json file:
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for package.json in common locations
ls -la package.json 2>/dev/null || echo "Not in root"
# Also try finding any package.json
find . -maxdepth 2 -name "package.json" -type f -exec echo "Found: {}" \; -exec cat {} \;
```

Length of output: 888
Now let me check what's in the src directory to determine if the module field can be properly resolved:
---
🏁 Script executed:
```shell
#!/bin/bash
# Check src directory structure and entry point files
echo "=== Contents of src directory ==="
find src -maxdepth 2 -type f \( -name "*.ts" -o -name "*.js" -o -name "package.json" \) | head -20
echo ""
echo "=== Check for index files in src ==="
ls -la src/index.ts src/index.js 2>/dev/null || echo "No index.ts or index.js found in src/"
```

Length of output: 146
Module field "src" cannot be resolved — requires an index file or a direct file path.

The module field points to the src directory, but Bun's module resolution requires either:

- An `src/index.ts` or `src/index.js` file in that directory, OR
- A direct file path like `"src/server.ts"`

The src directory currently contains only InputsGenerator.ts and server.ts with no index file. This will cause module resolution to fail. Update the module field to point to the actual entry file (e.g., "src/server.ts"), or add an src/index.ts that exports the appropriate entry point.
🤖 Prompt for AI Agents
In package.json around line 3, the "module" field points to the "src" directory
which Bun cannot resolve because there is no index file; update "module" to
reference the actual entry file (for example "src/server.ts") or create an
"src/index.ts" that re-exports the chosen entry (server.ts or
InputsGenerator.ts) so Bun can resolve the module path.
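If the re-export route is chosen, the index file can stay tiny. A sketch (assuming `server.ts` is the intended entry and `InputsGenerator` is the named export referenced elsewhere in this review):

```ts
// src/index.ts (hypothetical): gives Bun an index file so "module": "src" resolves.
export * from "./server";
export { InputsGenerator } from "./InputsGenerator";
```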
    make sure you have docker installed

    ```
Add language specifiers to code blocks.

Four code blocks are missing language specifiers, which affects syntax highlighting and accessibility. As per coding guidelines (markdownlint).

Apply these changes:

- Line 26: Add `bash` to the code block
- Line 74: Add `json` to the code block
- Line 97: Add `http` or `text` to the code block
- Line 203: Add `http` or `text` to the code block

Example for line 26:

````diff
-```
+```bash
 docker-compose up --build
````

Also applies to: 74-74, 97-97, 203-203
<details>
<summary>🧰 Tools</summary>
<details>
<summary>🪛 markdownlint-cli2 (0.18.1)</summary>
26-26: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
</details>
</details>
<details>
<summary>🤖 Prompt for AI Agents</summary>
In README.md around lines 26, 74, 97 and 203, several fenced code blocks are
missing language specifiers; update each opening triple-backtick to include the
appropriate language tag: change the block at line 26 to bash, the block at line 74 to json, and the blocks at lines 97 and 203 to either http or text.
Ensure you only modify the opening fence to add the specifier and leave the code
content unchanged.
```ts
        parts: dr.parts.map((p) => ({
          // @ts-ignore
          is_public: p.isPublic || !!p.is_public,
          // @ts-ignore
          regex_def: p.regexDef || !!p.regex_def,
          // @ts-ignore
          ...(p.isPublic && { maxLength: p.maxLength || !!p.max_length }),
        })),
```
Fix incorrect part field mapping.
regex_def and maxLength are being coerced to booleans because of the || !! fallbacks, and the maxLength spread only triggers when p.isPublic is true (ignoring p.is_public). This produces true/false instead of the expected string/number payload and silently drops max_length when the blueprint uses snake_case, breaking downstream circuit generation. Replace the coercion with nullish fallbacks and reuse the computed isPublic flag when deciding whether to emit maxLength.
Apply this diff to address the problem:

```diff
-        parts: dr.parts.map((p) => ({
-          // @ts-ignore
-          is_public: p.isPublic || !!p.is_public,
-          // @ts-ignore
-          regex_def: p.regexDef || !!p.regex_def,
-          // @ts-ignore
-          ...(p.isPublic && { maxLength: p.maxLength || !!p.max_length }),
-        })),
+        parts: dr.parts.map((p) => {
+          const isPublic = p.isPublic ?? p.is_public ?? false;
+          const maxLength = p.maxLength ?? p.max_length;
+          return {
+            // @ts-ignore
+            is_public: isPublic,
+            // @ts-ignore
+            regex_def: p.regexDef ?? p.regex_def,
+            // @ts-ignore
+            ...(isPublic && maxLength !== undefined ? { maxLength } : {}),
+          };
+        }),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
        parts: dr.parts.map((p) => {
          const isPublic = p.isPublic ?? p.is_public ?? false;
          const maxLength = p.maxLength ?? p.max_length;
          return {
            // @ts-ignore
            is_public: isPublic,
            // @ts-ignore
            regex_def: p.regexDef ?? p.regex_def,
            // @ts-ignore
            ...(isPublic && maxLength !== undefined ? { maxLength } : {}),
          };
        }),
```
🤖 Prompt for AI Agents
In src/InputsGenerator.ts around lines 132 to 139, the mapping coerces regex_def
and maxLength to booleans and only checks p.isPublic (ignoring snake_case) when
emitting maxLength; change to compute isPublic = p.isPublic ?? p.is_public, set
is_public: isPublic, set regex_def: p.regexDef ?? p.regex_def (no boolean
coercion), compute maxLengthValue = p.maxLength ?? p.max_length and emit
...(isPublic && { maxLength: maxLengthValue }) so string/number values and
snake_case fields are preserved.
```ts
// Enable CORS for all origins
app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader(
    "Access-Control-Allow-Methods",
    "GET, POST, PUT, DELETE, OPTIONS"
  );
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
  res.setHeader("Access-Control-Max-Age", "86400"); // 24 hours

  // Handle preflight requests
  if (req.method === "OPTIONS") {
    return res.status(204).end();
  }

  next();
});
```
🧩 Analysis chain
CORS wildcard exposes all endpoints to any origin
The CORS middleware sets Access-Control-Allow-Origin: * and allows Authorization headers for every route. Given these endpoints can:
- Trigger Gmail-backed proof generation, and
- Return full raw emails and proofs,
you may want to restrict allowed origins to your trusted frontends (e.g., https://pay.zk.email) instead of *, or make it configurable via env. That will at least prevent arbitrary third-party sites from calling your APIs directly from a browser context.
🏁 Script executed:

```shell
#!/bin/bash
# Find and examine server.ts file
fd -t f "server.ts" -x cat {}
```

Length of output: 20070

🏁 Script executed:

```shell
#!/bin/bash
# Search for CORS-related configuration and environment variables
rg -n "CORS|cors|Origin|ALLOW_ORIGIN" --type ts --type js -B2 -A2
```

Length of output: 328

🏁 Script executed:

```shell
#!/bin/bash
# Look for endpoint definitions and their auth mechanisms
rg -n "app\.(post|get|put|delete|use)" --type ts -B1 -A3 | head -100
```

Length of output: 2147

🏁 Script executed:

```shell
#!/bin/bash
# Search for authorization/authentication middleware
rg -n "auth|middleware|verify|token|check" --type ts --type js -B1 -A2 | head -80
```

Length of output: 2711

🏁 Script executed:

```shell
#!/bin/bash
# Look for Gmail and proof-related endpoints
rg -n "gmail|proof|email" --type ts --type js -i -B2 -A5 | head -100
```

Length of output: 5309
Restrict CORS to trusted origins and add authentication to sensitive endpoints
The wildcard CORS configuration is confirmed to be hardcoded with no environment-based override. The endpoints that handle and return sensitive data—particularly GET /proof/:id, which exposes full raw emails and generated proofs—have no per-endpoint authentication. Combined with unrestricted origins, this allows any third-party website to enumerate proof IDs and retrieve user emails and cryptographic artifacts.
Recommended fixes (a sketch of the first two follows below):

- Replace `Access-Control-Allow-Origin: *` with an env-configured allowlist (e.g., `CORS_ALLOWED_ORIGINS`)
- Add authentication middleware to the `/proof/:id`, `/gmail/fetch-email`, and `/prove` endpoints (e.g., bearer token validation or session checks)
- Ensure proof IDs cannot be trivially enumerated (the current UUID approach is better than sequential IDs, but auth is still needed)
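A sketch of the first two items (the `CORS_ALLOWED_ORIGINS` name comes from the bullet above; the bearer-token check and `API_TOKEN` variable are hypothetical):

```ts
import type { Request, Response, NextFunction } from "express";

// Sketch: env-configured origin allowlist instead of "*".
const allowedOrigins = (process.env.CORS_ALLOWED_ORIGINS ?? "https://pay.zk.email")
  .split(",")
  .map((origin) => origin.trim());

export function corsAllowlist(req: Request, res: Response, next: NextFunction) {
  const origin = req.headers.origin;
  if (origin && allowedOrigins.includes(origin)) {
    res.setHeader("Access-Control-Allow-Origin", origin);
    res.setHeader("Vary", "Origin");
    res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
    res.setHeader("Access-Control-Max-Age", "86400");
  }
  if (req.method === "OPTIONS") {
    return res.status(204).end();
  }
  next();
}

// Sketch: minimal bearer-token gate for the sensitive routes.
export function requireToken(req: Request, res: Response, next: NextFunction) {
  const token = req.headers.authorization?.replace(/^Bearer /i, "");
  if (!process.env.API_TOKEN || token !== process.env.API_TOKEN) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  next();
}

// Usage: app.use(corsAllowlist); app.get("/proof/:id", requireToken, handler);
```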
```ts
// Save proof result to file
function saveProofResult(
  proofId: string,
  result: {
    email: { id: string; raw: string };
    proof: string[];
    publicInputs: string[];
    handle?: string;
  }
) {
  ensureProofsDir();
  const filePath = path.join(PROOFS_DIR, `${proofId}.json`);
  fs.writeFileSync(
    filePath,
    JSON.stringify(
      {
        id: proofId,
        timestamp: new Date().toISOString(),
        ...result,
      },
      null,
      2
    ),
    "utf-8"
  );
  console.log(`Saved proof result to: ${filePath}`);
}

// Load proof result from file
function loadProofResult(proofId: string) {
  const filePath = path.join(PROOFS_DIR, `${proofId}.json`);
  if (!fs.existsSync(filePath)) {
    return null;
  }
  return JSON.parse(fs.readFileSync(filePath, "utf-8"));
}
```
🧩 Analysis chain
Persisting full raw emails on disk has significant privacy/PII implications
saveProofResult stores the entire raw email (email.raw) plus proof data in .cache/proofs with no expiry, and /proof/:id serves it back to any caller that knows the UUID. That’s a lot of sensitive content (full Gmail message) persisted unencrypted and protected only by ID unguessability.
Consider tightening this by one or more of:
- Storing only the minimal email metadata needed downstream (e.g., headers or a hash) instead of full raw content.
- Adding configurable retention/cleanup (TTL on proof files, background janitor, or explicit delete endpoint).
- Encrypting proofs/emails at rest with a key that’s not baked into the image.
- Clearly documenting this behavior so operators can make an informed decision.
Given this is Gmail data, aligning with your threat model and compliance expectations here is important.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find the /proof/:id endpoint definition and its handler
rg -n "proof|/proof" --type=ts -A 5 -B 2 | head -100
```

Length of output: 4499
🏁 Script executed:

```shell
#!/bin/bash
# Search for any cleanup, TTL, or expiry logic
rg -n "ttl|TTL|cleanup|expire|delete|removeSync|unlink" --type=ts -C 3
```

Length of output: 2516

🏁 Script executed:

```shell
#!/bin/bash
# Search for encryption mechanisms
rg -n "encrypt|cipher|crypto" --type=ts -C 3
```

Length of output: 332

🏁 Script executed:

```shell
#!/bin/bash
# Check the full structure of what's saved to proofs directory
rg -n "PROOFS_DIR|saveProofResult|writeFileSync" --type=ts -A 5 -B 2 | head -80
```

Length of output: 3678

🏁 Script executed:

```shell
#!/bin/bash
# Verify CORS configuration
rg -n "CORS|cors|Access-Control" --type=ts -C 3
```

Length of output: 782
---
🏁 Script executed:
```shell
#!/bin/bash
# Find the /proof/:id route handler
rg -n "app\.get.*proof|app\.post.*proof|/proof" --type=ts -A 10 | head -120
```

Length of output: 434
🏁 Script executed:

```shell
#!/bin/bash
# Verify what data is passed to saveProofResult when called
rg -n "saveProofResult" --type=ts -B 5 -A 2
```

Length of output: 979

🏁 Script executed:

```shell
#!/bin/bash
# Get the complete /proof/:id endpoint handler
rg -n "app\.get.*proof" --type=ts -A 40
```

Length of output: 1718
Persisting full raw emails on disk has significant privacy/PII implications
saveProofResult stores the entire raw email (email.raw) plus proof data in .cache/proofs with no expiry, and /proof/:id serves it back to any caller that knows the UUID. That's a lot of sensitive content (full Gmail message) persisted unencrypted and protected only by ID unguessability.
Consider tightening this by one or more of:
- Storing only the minimal email metadata needed downstream (e.g., headers or a hash) instead of full raw content.
- Adding configurable retention/cleanup (TTL on proof files, background janitor, or explicit delete endpoint).
- Encrypting proofs/emails at rest with a key that's not baked into the image.
- Clearly documenting this behavior so operators can make an informed decision.
Given this is Gmail data, aligning with your threat model and compliance expectations here is important.
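The retention/cleanup bullet could be as small as a periodic sweep. A sketch (the `PROOF_TTL_MS` env var is hypothetical; `PROOFS_DIR` mirrors the path used above):

```ts
import * as fs from "fs";
import * as path from "path";

// Sketch: periodically delete proof files older than PROOF_TTL_MS.
const PROOFS_DIR = path.join(".cache", "proofs");
const PROOF_TTL_MS = Number(process.env.PROOF_TTL_MS ?? 24 * 60 * 60 * 1000);

function cleanupOldProofs() {
  if (!fs.existsSync(PROOFS_DIR)) return;
  const now = Date.now();
  for (const file of fs.readdirSync(PROOFS_DIR)) {
    const filePath = path.join(PROOFS_DIR, file);
    const age = now - fs.statSync(filePath).mtimeMs;
    if (age > PROOF_TTL_MS) {
      fs.rmSync(filePath, { force: true });
      console.log(`Deleted expired proof artifact: ${filePath}`);
    }
  }
}

// Run hourly alongside the server; unref() keeps the timer from blocking shutdown.
setInterval(cleanupOldProofs, 60 * 60 * 1000).unref();
```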
| app.get("/gmail/callback", async (req: Request, res: Response) => { | ||
| const code = req.query.code as string; | ||
| const stateParam = req.query.state as string | undefined; | ||
|
|
||
| // Generate unique proof ID | ||
| const proofId = randomUUID(); | ||
| console.log(`Starting proof generation with ID: ${proofId}`); | ||
|
|
||
| // Send HTML response with loading UI | ||
| res.setHeader("Content-Type", "text/html"); | ||
| res.write(callbackTemplate); | ||
|
|
||
| try { | ||
| if (!code) { | ||
| res.write(` | ||
| <script> | ||
| setStepError('step1'); | ||
| showError('Authorization code not provided'); | ||
| </script>`); | ||
| return res.end(); | ||
| } | ||
|
|
||
| // Parse state parameter | ||
| let query: string | undefined; | ||
| let blueprintSlug: string | undefined; | ||
| let command: string | undefined; | ||
| let handle: string | undefined; | ||
|
|
||
| if (stateParam) { | ||
| try { | ||
| const state = JSON.parse(stateParam); | ||
| query = state.query; | ||
| blueprintSlug = state.blueprint; | ||
| command = state.command; | ||
| handle = state.handle; | ||
| } catch (e) { | ||
| console.warn("Failed to parse state parameter:", e); | ||
| } | ||
| } | ||
|
|
||
| if (!query || !blueprintSlug || !command) { | ||
| res.write(` | ||
| <script> | ||
| setStepError('step1'); | ||
| showError('Missing required parameters: query, blueprint, or command'); | ||
| </script>`); | ||
| return res.end(); | ||
| } | ||
|
|
||
| // Exchange authorization code for tokens | ||
| const { tokens } = await oauth2Client.getToken(code); | ||
|
|
||
| if (!tokens.access_token) { | ||
| res.write(` | ||
| <script> | ||
| setStepError('step1'); | ||
| showError('Failed to obtain access token'); | ||
| </script>`); | ||
| return res.end(); | ||
| } | ||
|
|
||
| // Step 1 complete | ||
| res.write(` | ||
| <script> | ||
| setStepComplete('step1'); | ||
| setStepActive('step2'); | ||
| </script> | ||
| `); | ||
|
|
||
| // Fetch email from Gmail | ||
| console.log("Fetching email from Gmail..."); | ||
| const email = await fetchEmailFromGmail(tokens.access_token, query); | ||
|
|
||
| if (!email) { | ||
| res.write(` | ||
| <script> | ||
| setStepError('step2'); | ||
| showError('No matching email found'); | ||
| </script>`); | ||
| return res.end(); | ||
| } | ||
|
|
||
| // Step 2 complete | ||
| res.write(` | ||
| <script> | ||
| setStepComplete('step2'); | ||
| setStepActive('step3'); | ||
| </script> | ||
| `); | ||
|
|
||
| console.log("Email fetched successfully, generating proof..."); | ||
|
|
||
| // Generate proof | ||
| const proofResult = await getProof(email.raw, blueprintSlug, command); | ||
|
|
||
| console.log("Proof generated successfully"); | ||
|
|
||
| // Save proof result with ID | ||
| const result = { | ||
| email: { | ||
| id: email.id, | ||
| raw: email.raw, | ||
| }, | ||
| proof: proofResult.proof, | ||
| publicInputs: proofResult.publicInputs, | ||
| ...(handle && { handle }), | ||
| }; | ||
| saveProofResult(proofId, result); | ||
|
|
||
| // Steps 3 and 4 complete | ||
| res.write(` | ||
| <script> | ||
| setStepComplete('step3'); | ||
| setStepComplete('step4'); | ||
| showResult('${proofId}', '${email.id}', ${proofResult.proof.length}, ${proofResult.publicInputs.length}); | ||
| </script>`); | ||
| res.end(); | ||
| } catch (error) { | ||
| console.error("Error in Gmail callback:", error); | ||
| const errorMessage = | ||
| error instanceof Error ? error.message : "Unknown error occurred"; | ||
| res.write(` | ||
| <script> | ||
| setStepError('step1'); | ||
| setStepError('step2'); | ||
| setStepError('step3'); | ||
| setStepError('step4'); | ||
| showError('Error: ${errorMessage.replace(/'/g, "\\'")}'); | ||
| </script>`); | ||
| res.end(); | ||
| } |
🧩 Analysis chain
Be careful when injecting dynamic values into <script> blocks in /gmail/callback
In the callback you stream HTML and then write inline <script> tags that call setStepError(...), showError(...), and showResult(...). For example:
- Error path: `showError('Error: ${errorMessage.replace(/'/g, "\\'")}');`
- Success path: `showResult('${proofId}', '${email.id}', ...)`
Escaping only single quotes is brittle; if errorMessage (or, future changes) ever contains backslashes, newlines, or </script>, you can end up with broken JS or even XSS within your own origin.
Safer patterns:
- Embed data as JSON and parse in the browser, e.g., `const err = ${JSON.stringify(errorMessage)}; showError('Error: ' + err);`
- Same for IDs/counts: `showResult(${JSON.stringify({ proofId, emailId: email.id, proofLen: ..., publicInputsLen: ... })});` and adjust `showResult` to accept an object.
That way the browser’s JSON parser handles all escaping and you’re not hand-rolling JS string escaping.
---
🏁 Script executed:
```shell
#!/bin/bash
# 1. Verify the exact code in src/server.ts at the specified lines
echo "=== Checking src/server.ts lines 455-585 ==="
if [ -f "src/server.ts" ]; then
sed -n '455,585p' src/server.ts
else
echo "File not found, searching for server.ts..."
find . -name "server.ts" -type f 2>/dev/null | head -5
fi
```

Length of output: 3363
🏁 Script executed:

```shell
#!/bin/bash
# 2. Search for similar inline script injection patterns with template literals
echo "=== Searching for similar script injection patterns ==="
rg -n 'res\.write.*<script>' --type=ts -A5 -B2
```

Length of output: 55

🏁 Script executed:

```shell
#!/bin/bash
# 3. Check if callbackTemplate contains any injection points
echo "=== Searching for callbackTemplate definition ==="
rg -n 'callbackTemplate' --type=ts -B5 -A10 | head -50
```

Length of output: 1521

🏁 Script executed:

```shell
#!/bin/bash
# 4. Check for JSON.stringify usage patterns in the codebase
echo "=== Checking existing JSON.stringify patterns ==="
rg -n 'JSON\.stringify' --type=ts -B2 -A2 | head -40
```

Length of output: 1134
Rewrite inline script injections to use JSON serialization for safety
The callback endpoint contains XSS vulnerabilities from unsafe template literal injection into <script> blocks. Three injection points exist:
- Critical: `errorMessage.replace(/'/g, "\\'")` (line 581) only escapes single quotes; a message containing `</script>`, newlines, or backslashes breaks out of the string context
- Secondary: `proofId` and `email.id` (line 577) are passed as unescaped string literals
Replace with JSON serialization. For example:
- Error path: `const err = ${JSON.stringify(errorMessage)}; setStepError('step1'); showError('Error: ' + err);`
- Success path: `showResult(${JSON.stringify({ proofId, emailId: email.id, proofLen: proofResult.proof.length, publicInputsLen: proofResult.publicInputs.length })});` (adjust `showResult` to accept an object)
This allows the browser's JSON parser to handle all escaping, eliminating hand-rolled string escaping vulnerabilities.
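One way to centralize that is a small helper that JSON-serializes every dynamic value and additionally escapes `<` so a payload containing `</script>` cannot break out of the block. A sketch (the helper names are hypothetical):

```ts
import type { Response } from "express";

// Sketch: serialize values with JSON.stringify and escape "<" so a value
// containing "</script>" cannot terminate the inline script block.
function jsArg(value: unknown): string {
  return JSON.stringify(value).replace(/</g, "\\u003c");
}

function writeScript(res: Response, body: string) {
  res.write(`<script>${body}</script>`);
}

// Error path:
//   writeScript(res, `setStepError('step1'); showError('Error: ' + ${jsArg(errorMessage)});`);
// Success path:
//   writeScript(res, `showResult(${jsArg({ proofId, emailId: email.id })});`);
```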
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary by CodeRabbit
New Features
Documentation
Chores