
Phase 1: Local MITM proxy with CA lifecycle and trace capture #2

Merged
GeiserX merged 4 commits into main from phase-1/mitm-proxy on Apr 14, 2026

Conversation

@GeiserX (Owner) commented Apr 14, 2026

Summary

  • CA lifecycle: RSA 4096 root CA + ECDSA P-256 leaf certs via @peculiar/x509, auto-generation on first run, expiry detection, secure key storage (0600)
  • MITM proxy engine: CONNECT proxy with per-domain TLS terminators using static certs (a working pattern around Bun's node:tls limitations), bidirectional data relay via manual write() forwarding
  • Trace capture: HTTP/1.1 request/response parsing, SSE stream reassembly (Anthropic + OpenAI formats), Content-Length and chunked-terminator-based response completion detection, typed EventEmitter for trace events
  • Main process integration: CA init on startup, proxy start/stop on tray toggle, live trace counter in menu bar
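The trace plumbing in the last bullet can be sketched with a minimal typed emitter. The `CapturedTrace` fields below are assumed from the test plan (provider, domain, status, duration); the real interfaces live in src/bun/capture/types.ts and may differ:

```typescript
import { EventEmitter } from "node:events";

// Field names assumed from the test plan, not the actual types.ts definitions.
interface CapturedTrace {
  provider: string;
  domain: string;
  status: number;
  durationMs: number;
}

// Minimal typed wrapper over EventEmitter, mirroring the singleton pattern
// described for src/bun/capture/trace-emitter.ts.
class TraceEmitter extends EventEmitter {
  emitTrace(trace: CapturedTrace): void {
    this.emit("trace", trace);
  }
  onTrace(listener: (trace: CapturedTrace) => void): void {
    this.on("trace", listener);
  }
}

const traceEmitter = new TraceEmitter();

// Example consumer: a live counter like the one shown in the menu bar.
let traceCount = 0;
traceEmitter.onTrace((t) => {
  traceCount += 1;
  console.log(`[${t.provider}] ${t.domain} -> ${t.status} in ${t.durationMs}ms`);
});
```

A typed wrapper like this keeps `on("trace")` consumers and `emit("trace")` producers agreeing on the payload shape at compile time.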

Architecture

curl/app → CONNECT proxy (net.createServer, port 8443)
    → 200 OK → capture ClientHello
    → per-domain TLS server (tls.createServer, static leaf cert, ephemeral port)
    → decrypt → parse HTTP → relay to real upstream (tls.connect)
    → capture response → emit trace → relay back encrypted
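The first hop in this flow hinges on reading the CONNECT request line to learn which domain to terminate. A hypothetical sketch of just that parsing step (the real logic lives in src/bun/proxy/proxy-server.ts; the function name here is illustrative):

```typescript
// Parse "CONNECT host:port HTTP/1.1" into its target parts.
// Returns null for anything that is not a CONNECT request, mirroring the
// proxy's "only CONNECT supported" behavior.
function parseConnectHead(head: string): { domain: string; port: number } | null {
  if (!head.startsWith("CONNECT ")) return null;
  const requestLine = head.split("\r\n")[0]; // "CONNECT api.anthropic.com:443 HTTP/1.1"
  const target = requestLine.split(" ")[1];  // "api.anthropic.com:443"
  if (!target) return null;
  const [domain, portStr] = target.split(":");
  return { domain, port: Number(portStr ?? "443") };
}
```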

New files

| File | Purpose |
| --- | --- |
| src/bun/ca/cert-generator.ts | Root CA + leaf cert generation |
| src/bun/ca/ca-manager.ts | CA lifecycle (ensure/regenerate/status) |
| src/bun/ca/cert-cache.ts | LRU cert cache (100 domains, 25-day TTL) |
| src/bun/proxy/proxy-server.ts | CONNECT proxy + per-domain TLS terminators |
| src/bun/proxy/sni-parser.ts | SNI hostname extraction from ClientHello |
| src/bun/proxy/http-parser.ts | HTTP/1.1 head parser + chunked decoder |
| src/bun/sse/sse-reassembler.ts | SSE stream parser (Anthropic/OpenAI) |
| src/bun/capture/types.ts | Trace/request/response interfaces |
| src/bun/capture/trace-emitter.ts | Typed EventEmitter singleton |

Test plan

  • CA generation: openssl verify chain validation passes
  • Proxy E2E: curl with --proxy and --cacert through the proxy to api.anthropic.com returns 401 (expected without an API key)
  • Trace capture: traceEmitter.on("trace") receives CapturedTrace with correct provider, domain, status, duration
  • Response completion: Content-Length detection finalizes trace without waiting for connection close
  • SSE streaming: test with real API key (Anthropic streaming response)
  • Chunked transfer: verify dechunk() with chunked responses
  • Cert cache: verify LRU eviction after 100 domains
  • CA expiry: verify auto-regeneration with expired cert
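The response-completion item can be illustrated in isolation: once the header block has arrived, a Content-Length response is complete exactly when the received body bytes reach the declared length. This is a sketch under assumed names, not the actual http-parser.ts code:

```typescript
// Decide whether a buffered HTTP/1.1 response carrying a Content-Length
// header is complete, without waiting for the connection to close.
function isResponseComplete(buffered: Buffer): boolean {
  const headEnd = buffered.indexOf("\r\n\r\n");
  if (headEnd === -1) return false; // headers not fully received yet

  const head = buffered.subarray(0, headEnd).toString("latin1");
  const match = /^content-length:\s*(\d+)\s*$/im.exec(head);
  if (!match) return false; // no Content-Length: defer to other heuristics

  const bodyLen = buffered.length - (headEnd + 4); // bytes after the blank line
  return bodyLen >= Number(match[1]);
}
```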

Summary by CodeRabbit

  • New Features

    • Local HTTPS MITM proxy with CONNECT-based TLS termination, per-domain certs, SNI extraction, request/response capture, SSE reassembly, and token usage extraction
    • Root CA generation/persistence, leaf cert generation and in-memory caching
    • UI: tray shows CA and proxy status; start/stop proxy flows and trace logging
  • Documentation

    • Updated docs with key learnings, runtime constraints, and operational guidance
  • Chores

    • Added runtime crypto/x509 dependencies

- CA generation (RSA 4096 root, ECDSA P-256 leaf) via @peculiar/x509
- CA manager with auto-generation, expiry detection, secure key storage
- LRU cert cache (max 100 domains, 25-day expiry)
- CONNECT proxy with per-domain TLS terminators (Bun-compatible pattern)
- SNI parser for ClientHello hostname extraction
- HTTP/1.1 parser with chunked transfer decoding
- SSE stream reassembler for Anthropic and OpenAI formats
- Trace emitter with typed EventEmitter for captured API traces
- Response completion detection via Content-Length/chunked terminator
- Main process wired with CA init, proxy toggle, trace counter
- AGENTS.md updated with key Bun TLS learnings
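The chunked-decoding bullet follows the standard HTTP/1.1 algorithm: a hex size line, that many body bytes, a CRLF, repeated until a zero-size chunk. A self-contained sketch (the real dechunk() in http-parser.ts may differ, e.g. in how it handles chunk extensions or trailers):

```typescript
// Decode an HTTP/1.1 chunked transfer-encoded body.
// Each chunk is "<hex size>\r\n<bytes>\r\n"; a zero-size chunk terminates.
function dechunk(body: Buffer): Buffer {
  const parts: Buffer[] = [];
  let offset = 0;
  while (offset < body.length) {
    const lineEnd = body.indexOf("\r\n", offset);
    if (lineEnd === -1) break; // truncated size line
    const size = parseInt(body.subarray(offset, lineEnd).toString("latin1"), 16);
    if (Number.isNaN(size) || size === 0) break; // terminator or malformed input
    parts.push(body.subarray(lineEnd + 2, lineEnd + 2 + size));
    offset = lineEnd + 2 + size + 2; // skip chunk bytes plus trailing CRLF
  }
  return Buffer.concat(parts);
}
```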
@coderabbitai

coderabbitai bot commented Apr 14, 2026

Warning

Rate limit exceeded

@GeiserX has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 29 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 11 minutes and 29 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e9ebedba-b32d-48e9-bddf-f909872d582c

📥 Commits

Reviewing files that changed from the base of the PR and between aa4dc63 and fb76640.

⛔ Files ignored due to path filters (2)
  • src/views/mainview/assets/tray-icon-template.png is excluded by !**/*.png
  • src/views/mainview/assets/tray-icon-template@2x.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • AGENTS.md
📝 Walkthrough

Walkthrough

Adds a CONNECT-based TLS‑terminating MITM proxy with CA/key generation and leaf cert caching, HTTP/SSE parsing and reassembly, trace emission/types, SSE token extraction, tray integration for CA/proxy status, and related docs and runtime dependency declarations.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| Certificate Authority & Cert Generation | src/bun/ca/ca-manager.ts, src/bun/ca/cert-cache.ts, src/bun/ca/cert-generator.ts | Root CA persistence, expiration checking and regeneration; in-memory LRU leaf cert cache (25-day TTL); PEM key/cert generation using WebCrypto and @peculiar/x509. |
| Proxy & TLS/SNI Parsing | src/bun/proxy/proxy-server.ts, src/bun/proxy/http-parser.ts, src/bun/proxy/sni-parser.ts | CONNECT proxy that creates per-domain TLS terminators, extracts SNI from ClientHello, parses HTTP request/response heads and chunking, forwards upstream, handles SSE and non-SSE completeness detection, and emits captured traces. |
| SSE Handling & Reassembly | src/bun/sse/sse-reassembler.ts | Parses SSE stream blocks, provider-specific reassembly for Anthropic/OpenAI, builds full bodies and extracts token usage when present. |
| Tracing Types & Emitter | src/bun/capture/trace-emitter.ts, src/bun/capture/types.ts | Typed TraceEmitter singleton and new interfaces for CapturedRequest/Response/SSEChunk/CapturedTrace. |
| App Integration & Types | src/bun/index.ts, src/shared/types.ts | Adds caStatus/proxyStatus to state, tray menu status items, capture toggle now controls proxy lifecycle, CA ensured at startup, and trace listener increments counter. |
| Configuration & Packaging | package.json, electrobun.config.ts | Adds runtime deps @peculiar/x509 and reflect-metadata; copies tray icon asset into build output. |
| Documentation | AGENTS.md | Updates IMPORTANT directive and adds "Key Learnings" covering Bun node:tls limitations, MITM workarounds, HTTP completion heuristics, dependency/runtime notes, and ElectroBun UI caveats. |

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 19.44%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and specifically summarizes the main change: implementing a local MITM proxy with CA lifecycle management and trace capture, which matches the changeset's core additions. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

- Regenerate tray icon as black silhouette with alpha (macOS template)
- Add @2x retina variant (44x44)
- Remove title text from menu bar (icon only)
- Remove setTitle calls that showed "AgentTap (ON)" text

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/bun/ca/ca-manager.ts`:
- Around line 31-42: The current load path reads CA_KEY_PATH/CA_CERT_PATH and
blindly returns them when isCertExpired(cert) is false, which lets corrupted
ca.pem be used; update the CA loading logic in ca-manager.ts (the block using
CA_KEY_PATH, CA_CERT_PATH, isCertExpired, regenerateCA, loaded, CA_DIR) to
validate readability and parseability of both key and cert before accepting
them: wrap the Bun.file(...).text() and certificate validation in a try/catch
(or otherwise check parse success), treat any I/O or parse error as an
expired/invalid cert and call regenerateCA(), and apply the same defensive
checks to the later similar block around lines 75-83 so unreadable/corrupted
files trigger regeneration instead of being loaded.
- Around line 12-20: The CA private key is being persisted under HOME via the
CA_DIR/CA_KEY_PATH/CA_CERT_PATH constants; change CA_DIR to a path inside the
application's sandboxed container (e.g., the app's data/userData directory or a
dedicated secure storage directory provided by the runtime) and update
CA_KEY_PATH and CA_CERT_PATH to join against that sandboxed CA_DIR instead of
process.env.HOME; ensure the code that reads/writes the key (references to
CA_KEY_PATH and CA_CERT_PATH) uses the new location, add a migration step to
move existing keys into the sandbox if present, and enforce strict file
permissions when creating the key files.

In `@src/bun/ca/cert-cache.ts`:
- Around line 19-39: Current code can trigger multiple simultaneous
generateLeafCert calls for the same domain; add a deduplication layer by
introducing an inFlight map (e.g., inFlightPromises: Map<string,
Promise<CertificatePair>>) and check it before calling generateLeafCert in the
cache-get path, so if inFlightPromises.get(domain) exists you await and return
that promise's result instead of starting a new generation; when starting
generation, set inFlightPromises.set(domain, promise), await it, then remove the
entry in finally and only then write the resolved pair into cache (using the
existing cache.set logic and evictLRU/MAX_CACHE_SIZE handling); also ensure
rejected promises are removed from inFlightPromises so subsequent requests can
retry.

In `@src/bun/ca/cert-generator.ts`:
- Around line 116-121: pemToDer currently returns the underlying Buffer's entire
ArrayBuffer which may include extra bytes or an offset and can break
crypto.subtle.importKey; change pemToDer to create the base64-decoded Buffer,
then return a sliced ArrayBuffer that uses the Buffer's byteOffset and
byteLength (i.e., slice buf.buffer from buf.byteOffset to buf.byteOffset +
buf.byteLength) so the returned ArrayBuffer contains exactly the DER bytes
consumed by crypto.subtle.importKey.

In `@src/bun/proxy/proxy-server.ts`:
- Around line 192-200: The upstream close handler currently calls
clientSocket.destroy(), which aborts and discards buffered writes when
upstream.destroy() triggers; change the handler to call clientSocket.end() (or
use clientSocket.end() with a callback) so the socket performs a graceful close
and flushes any queued response bytes. Locate the upstream 'close'/'end' event
handler that invokes clientSocket.destroy() (referencing clientSocket and
upstream.destroy in proxy-server.ts) and replace destroy() calls with end(),
ensuring both places where upstream.destroy() is invoked (the non-chunked and
chunked response branches around where finalize() is called) use the updated
graceful close behavior.
- Around line 91-100: The promise returned around connectProxy.listen never
rejects on bind errors because the "error" handler only logs; update the Promise
in startProxy so its executor accepts (resolve, reject), attach a one-time
"error" listener on connectProxy that calls reject(err) (or rejects with a
descriptive Error) and a one-time "listening" listener that calls resolve();
ensure you remove the opposite listener on success/failure (or use once) so
connectProxy's "error" handler no longer just logs but propagates the failure to
the Promise; reference connectProxy and the startProxy promise/listen call to
locate where to add reject behavior.
- Around line 56-84: Wrap the async CONNECT handler (the callback passed to
clientSocket.once("data", async (chunk) => {...})) in a try/catch and handle
failures from getOrCreateDomainTLS(domain) and any TLS initialization steps: on
error, write an HTTP/1.1 502 Bad Gateway response (e.g.
clientSocket.write("HTTP/1.1 502 Bad Gateway\r\n\r\n")), then destroy/end the
client socket; ensure any resources (like any partially created local sockets)
are cleaned up and that promise rejections from getOrCreateDomainTLS are caught
so they don't remain unhandled.

In `@src/bun/proxy/sni-parser.ts`:
- Around line 25-27: The code reads a 16-bit length with
data.readUInt16BE(offset) into cipherSuitesLen without first ensuring there are
at least 2 bytes available; add a bounds guard before that call (e.g., check
offset + 2 <= data.length or offset <= data.length - 2) and return null if the
buffer is too short, then proceed to advance offset and validate the full cipher
suites slice (offset + cipherSuitesLen) fits within data.length before using it;
update the logic around the cipherSuitesLen/readUInt16BE usage in the SNI/TLS
parsing function so it gracefully handles truncated input instead of throwing
RangeError.

In `@src/bun/sse/sse-reassembler.ts`:
- Line 16: The split in sse-reassembler.ts currently uses raw.split("\n\n")
(assigned to blocks) which fails for CRLF-delimited streams; normalize CRLF or
use a regex like raw.split(/\r?\n\r?\n/) (or first replace(/\r\n/g, "\n") then
split("\n\n")) to correctly separate SSE events, and apply the same change to
the other parsing block(s) in the same file (the code region around lines 25-38)
so all places that split raw event chunks handle both "\n\n" and "\r\n\r\n".
- Around line 51-55: The provider detection is using case-sensitive includes on
domain which can miss mixed-case hostnames; normalize domain to lowercase before
checks by replacing usages of domain.includes("anthropic") and
domain.includes("openai") with a lowercase comparison (e.g., compute a local
lowercasedDomain = domain.toLowerCase() and use
lowercasedDomain.includes("anthropic") / lowercasedDomain.includes("openai")),
and update any related branches (reassembleAnthropic, reassembleOpenAI) to use
this normalized value so provider-specific reassembly runs reliably for
mixed-case hostnames.
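The CRLF fix flagged for sse-reassembler.ts comes down to the event delimiter: SSE events end at a blank line, which may arrive as "\n\n" or "\r\n\r\n" depending on the server. A minimal line-ending-tolerant split (the function name is illustrative, not the one in sse-reassembler.ts):

```typescript
// Split a raw SSE stream into event blocks, tolerating LF and CRLF delimiters.
function splitSSEBlocks(raw: string): string[] {
  return raw
    .split(/\r?\n\r?\n/)            // blank line = event boundary, with or without CR
    .map((block) => block.trim())
    .filter((block) => block.length > 0);
}
```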
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 181f8af8-571c-487e-92b6-8cef3ade3d72

📥 Commits

Reviewing files that changed from the base of the PR and between e2830b0 and 45f923f.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (13)
  • AGENTS.md
  • package.json
  • src/bun/ca/ca-manager.ts
  • src/bun/ca/cert-cache.ts
  • src/bun/ca/cert-generator.ts
  • src/bun/capture/trace-emitter.ts
  • src/bun/capture/types.ts
  • src/bun/index.ts
  • src/bun/proxy/http-parser.ts
  • src/bun/proxy/proxy-server.ts
  • src/bun/proxy/sni-parser.ts
  • src/bun/sse/sse-reassembler.ts
  • src/shared/types.ts

Comment thread src/bun/ca/ca-manager.ts
Comment on lines +12 to +20
```ts
const CA_DIR = join(
  process.env.HOME ?? "~",
  "Library",
  "Application Support",
  "AgentTap",
  "ca",
);
const CA_KEY_PATH = join(CA_DIR, "ca-key.pem");
const CA_CERT_PATH = join(CA_DIR, "ca.pem");
```

⚠️ Potential issue | 🟠 Major

Do not persist the CA private key under $HOME/Library/....

This stores the MITM root key outside the app container, which weakens the trust boundary around the most sensitive secret in the system. Based on learnings: never store the custom CA private key outside the app's sandboxed container.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/ca/ca-manager.ts` around lines 12 - 20, The CA private key is being
persisted under HOME via the CA_DIR/CA_KEY_PATH/CA_CERT_PATH constants; change
CA_DIR to a path inside the application's sandboxed container (e.g., the app's
data/userData directory or a dedicated secure storage directory provided by the
runtime) and update CA_KEY_PATH and CA_CERT_PATH to join against that sandboxed
CA_DIR instead of process.env.HOME; ensure the code that reads/writes the key
(references to CA_KEY_PATH and CA_CERT_PATH) uses the new location, add a
migration step to move existing keys into the sandbox if present, and enforce
strict file permissions when creating the key files.

Comment thread src/bun/ca/ca-manager.ts
Comment on lines +31 to +42
```ts
if (existsSync(CA_KEY_PATH) && existsSync(CA_CERT_PATH)) {
  const key = await Bun.file(CA_KEY_PATH).text();
  const cert = await Bun.file(CA_CERT_PATH).text();

  if (isCertExpired(cert)) {
    console.log("[CA] Certificate expired, regenerating...");
    return regenerateCA();
  }

  loaded = { key, cert };
  console.log("[CA] Loaded existing CA from", CA_DIR);
  return loaded;
```

⚠️ Potential issue | 🟠 Major

Treat unreadable CA certs as invalid.

If ca.pem is corrupted, isCertExpired() returns false, so lines 31-42 load broken material instead of regenerating it. That defers the failure into proxy startup and leaf issuance.

Suggested fix
 function isCertExpired(pem: string): boolean {
   try {
     const crypto = require("node:crypto");
     const cert = new crypto.X509Certificate(pem);
     return new Date(cert.validTo) < new Date();
   } catch {
-    return false;
+    return true;
   }
 }

Also applies to: 75-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/ca/ca-manager.ts` around lines 31 - 42, The current load path reads
CA_KEY_PATH/CA_CERT_PATH and blindly returns them when isCertExpired(cert) is
false, which lets corrupted ca.pem be used; update the CA loading logic in
ca-manager.ts (the block using CA_KEY_PATH, CA_CERT_PATH, isCertExpired,
regenerateCA, loaded, CA_DIR) to validate readability and parseability of both
key and cert before accepting them: wrap the Bun.file(...).text() and
certificate validation in a try/catch (or otherwise check parse success), treat
any I/O or parse error as an expired/invalid cert and call regenerateCA(), and
apply the same defensive checks to the later similar block around lines 75-83 so
unreadable/corrupted files trigger regeneration instead of being loaded.
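The defensive load path described above can be sketched end to end: treat any read or parse failure exactly like expiry, so corrupted material always falls through to regeneration. All names here (loadOrRegenerate, regenerate) are illustrative, not the actual ca-manager.ts API:

```typescript
import { X509Certificate } from "node:crypto";
import { existsSync, readFileSync } from "node:fs";

interface CAPair {
  key: string;
  cert: string;
}

// Illustrative load path: any I/O or parse error falls through to
// regeneration, so a corrupted ca.pem can never be served to the proxy.
function loadOrRegenerate(
  keyPath: string,
  certPath: string,
  regenerate: () => CAPair,
): CAPair {
  try {
    if (!existsSync(keyPath) || !existsSync(certPath)) return regenerate();
    const key = readFileSync(keyPath, "utf8");
    const cert = readFileSync(certPath, "utf8");
    const parsed = new X509Certificate(cert); // throws on corrupt PEM
    if (new Date(parsed.validTo) < new Date()) return regenerate();
    return { key, cert };
  } catch {
    return regenerate(); // unreadable or unparseable material is treated as invalid
  }
}
```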

Comment thread src/bun/ca/cert-cache.ts
Comment on lines +19 to +39
```ts
  const existing = cache.get(domain);
  const now = Date.now();

  if (existing && existing.expiresAt > now) {
    existing.lastUsed = now;
    return existing.pair;
  }

  const pair = await generateLeafCert(domain, caCert, caKey);

  if (cache.size >= MAX_CACHE_SIZE) {
    evictLRU();
  }

  cache.set(domain, {
    pair,
    expiresAt: now + 25 * 86400_000, // 25 days (leaf certs valid 30)
    lastUsed: now,
  });

  return pair;
```

⚠️ Potential issue | 🟠 Major

Deduplicate concurrent cache misses per domain.

Multiple simultaneous misses for the same domain will each generate a new cert. This is avoidable work on a hot path.

Proposed fix
 const cache = new Map<string, CacheEntry>();
+const inflight = new Map<string, Promise<KeyCertPair>>();
@@
   if (existing && existing.expiresAt > now) {
     existing.lastUsed = now;
     return existing.pair;
   }
 
-  const pair = await generateLeafCert(domain, caCert, caKey);
+  const pending = inflight.get(domain);
+  if (pending) return pending;
+
+  const generation = generateLeafCert(domain, caCert, caKey).finally(() => {
+    inflight.delete(domain);
+  });
+  inflight.set(domain, generation);
+  const pair = await generation;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```ts
// Module level:
const cache = new Map<string, CacheEntry>();
const inflight = new Map<string, Promise<KeyCertPair>>();

// Inside the cache lookup function:
const existing = cache.get(domain);
const now = Date.now();

if (existing && existing.expiresAt > now) {
  existing.lastUsed = now;
  return existing.pair;
}

const pending = inflight.get(domain);
if (pending) return pending;

const generation = generateLeafCert(domain, caCert, caKey).finally(() => {
  inflight.delete(domain);
});
inflight.set(domain, generation);
const pair = await generation;

if (cache.size >= MAX_CACHE_SIZE) {
  evictLRU();
}

cache.set(domain, {
  pair,
  expiresAt: now + 25 * 86400_000, // 25 days (leaf certs valid 30)
  lastUsed: now,
});

return pair;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/ca/cert-cache.ts` around lines 19 - 39, Current code can trigger
multiple simultaneous generateLeafCert calls for the same domain; add a
deduplication layer by introducing an inFlight map (e.g., inFlightPromises:
Map<string, Promise<CertificatePair>>) and check it before calling
generateLeafCert in the cache-get path, so if inFlightPromises.get(domain)
exists you await and return that promise's result instead of starting a new
generation; when starting generation, set inFlightPromises.set(domain, promise),
await it, then remove the entry in finally and only then write the resolved pair
into cache (using the existing cache.set logic and evictLRU/MAX_CACHE_SIZE
handling); also ensure rejected promises are removed from inFlightPromises so
subsequent requests can retry.

Comment on lines +116 to +121
```ts
function pemToDer(pem: string): ArrayBuffer {
  const b64 = pem
    .replace(/-----BEGIN [\w\s]+-----/, "")
    .replace(/-----END [\w\s]+-----/, "")
    .replace(/\s/g, "");
  return Buffer.from(b64, "base64").buffer;
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the actual file content at the specified lines
cat -n src/bun/ca/cert-generator.ts | sed -n '110,130p'

Repository: GeiserX/AgentTap

Length of output: 496


🏁 Script executed:

# Check if pemToDer is used with crypto.subtle.importKey
rg -n "pemToDer|importKey" src/bun/ca/cert-generator.ts -A 3 -B 1

Repository: GeiserX/AgentTap

Length of output: 421


🏁 Script executed:

# Understand Buffer semantics with offset - let's check Node/Bun docs or any test files
fd -t f "buffer|crypto" src --type f | head -20

Repository: GeiserX/AgentTap

Length of output: 42


Return the exact DER slice from pemToDer().

Buffer.from(b64, "base64").buffer exposes the entire backing ArrayBuffer. When passed to crypto.subtle.importKey() at line 68, this can cause CA key import to fail if the underlying buffer has any offset or extra data beyond the valid DER bytes.

Suggested fix
 function pemToDer(pem: string): ArrayBuffer {
   const b64 = pem
     .replace(/-----BEGIN [\w\s]+-----/, "")
     .replace(/-----END [\w\s]+-----/, "")
     .replace(/\s/g, "");
-  return Buffer.from(b64, "base64").buffer;
+  const der = Buffer.from(b64, "base64");
+  return der.buffer.slice(der.byteOffset, der.byteOffset + der.byteLength);
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```ts
function pemToDer(pem: string): ArrayBuffer {
  const b64 = pem
    .replace(/-----BEGIN [\w\s]+-----/, "")
    .replace(/-----END [\w\s]+-----/, "")
    .replace(/\s/g, "");
  const der = Buffer.from(b64, "base64");
  return der.buffer.slice(der.byteOffset, der.byteOffset + der.byteLength);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/ca/cert-generator.ts` around lines 116 - 121, pemToDer currently
returns the underlying Buffer's entire ArrayBuffer which may include extra bytes
or an offset and can break crypto.subtle.importKey; change pemToDer to create
the base64-decoded Buffer, then return a sliced ArrayBuffer that uses the
Buffer's byteOffset and byteLength (i.e., slice buf.buffer from buf.byteOffset
to buf.byteOffset + buf.byteLength) so the returned ArrayBuffer contains exactly
the DER bytes consumed by crypto.subtle.importKey.
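The hazard behind this fix is easy to demonstrate in isolation: a Buffer is a view, and its backing .buffer can be larger than the data because Node pools small allocations. A sketch of the exact-slice helper, independent of the PR's code:

```typescript
// Return an ArrayBuffer containing exactly the bytes of a Buffer view.
// Node pools small Buffers, so buf.buffer is often a shared 8 KiB slab and
// buf.byteOffset is frequently nonzero; slicing by offset/length avoids
// handing the whole slab to consumers like crypto.subtle.importKey.
function exactArrayBuffer(buf: Buffer): ArrayBuffer {
  return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength) as ArrayBuffer;
}
```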

Comment on lines +56 to +84
```ts
    clientSocket.once("data", async (chunk) => {
      const headStr = chunk.toString("latin1");

      if (headStr.startsWith("CONNECT ")) {
        const target = headStr.split("\r\n")[0].split(" ")[1];
        const domain = target.split(":")[0];

        // Get or create a per-domain TLS terminator
        const tlsPort = await getOrCreateDomainTLS(domain);

        // Send 200 to establish tunnel
        clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");

        // Wait for the TLS ClientHello, then bridge to the domain's TLS server
        clientSocket.once("data", (clientHello) => {
          const local = net.connect({ host: "127.0.0.1", port: tlsPort }, () => {
            // Forward the ClientHello
            local.write(clientHello);

            // Bidirectional relay for the rest of the TLS handshake + data
            clientSocket.on("data", (c) => local.write(c));
            local.on("data", (c) => clientSocket.write(c));
            local.on("end", () => clientSocket.end());
            clientSocket.on("end", () => local.end());
          });

          local.on("error", () => clientSocket.destroy());
          clientSocket.on("error", () => local.destroy());
        });
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "proxy-server.ts" -type f 2>/dev/null | head -20

Repository: GeiserX/AgentTap

Length of output: 91


🏁 Script executed:

cd src/bun/proxy 2>/dev/null && wc -l proxy-server.ts && echo "---" && sed -n '56,84p' proxy-server.ts

Repository: GeiserX/AgentTap

Length of output: 1308


🏁 Script executed:

cd src/bun/proxy 2>/dev/null && sed -n '1,110p' proxy-server.ts | cat -n

Repository: GeiserX/AgentTap

Length of output: 4627


Add try/catch around CONNECT setup to handle TLS initialization failures gracefully.

The await getOrCreateDomainTLS(domain) call at line 64 lacks error handling. If leaf certificate generation or TLS server startup fails, the promise rejection goes unhandled and the client never receives an error response. Wrap the entire async callback (lines 56–87) in try/catch to send a 502 response on failure, per the requirement that all proxy operations must handle TLS errors gracefully.

Suggested fix
-    clientSocket.once("data", async (chunk) => {
-      const headStr = chunk.toString("latin1");
+    clientSocket.once("data", async (chunk) => {
+      try {
+        const headStr = chunk.toString("latin1");
 
-      if (headStr.startsWith("CONNECT ")) {
-        const target = headStr.split("\r\n")[0].split(" ")[1];
-        const domain = target.split(":")[0];
+        if (headStr.startsWith("CONNECT ")) {
+          const target = headStr.split("\r\n")[0].split(" ")[1];
+          const domain = target.split(":")[0];
 
-        // Get or create a per-domain TLS terminator
-        const tlsPort = await getOrCreateDomainTLS(domain);
+          const tlsPort = await getOrCreateDomainTLS(domain);
 
-        // Send 200 to establish tunnel
-        clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");
+          clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");
 
-        // Wait for the TLS ClientHello, then bridge to the domain's TLS server
-        clientSocket.once("data", (clientHello) => {
-          const local = net.connect({ host: "127.0.0.1", port: tlsPort }, () => {
-            // Forward the ClientHello
-            local.write(clientHello);
+          clientSocket.once("data", (clientHello) => {
+            const local = net.connect({ host: "127.0.0.1", port: tlsPort }, () => {
+              local.write(clientHello);
+            });
 
-            // Bidirectional relay for the rest of the TLS handshake + data
-            clientSocket.on("data", (c) => local.write(c));
-            local.on("data", (c) => clientSocket.write(c));
-            local.on("end", () => clientSocket.end());
-            clientSocket.on("end", () => local.end());
+            clientSocket.on("data", (c) => local.write(c));
+            local.on("data", (c) => clientSocket.write(c));
+            local.on("end", () => clientSocket.end());
+            clientSocket.on("end", () => local.end());
+            local.on("error", () => clientSocket.destroy());
+            clientSocket.on("error", () => local.destroy());
           });
-
-          local.on("error", () => clientSocket.destroy());
-          clientSocket.on("error", () => local.destroy());
-        });
-      } else {
-        clientSocket.end("HTTP/1.1 400 Bad Request\r\n\r\nOnly CONNECT supported\r\n");
-      }
+        } else {
+          clientSocket.end("HTTP/1.1 400 Bad Request\r\n\r\nOnly CONNECT supported\r\n");
+        }
+      } catch (err) {
+        console.error("[Proxy] CONNECT setup failed:", err);
+        clientSocket.end("HTTP/1.1 502 Bad Gateway\r\n\r\n");
+      }
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/proxy/proxy-server.ts` around lines 56 - 84, Wrap the async CONNECT
handler (the callback passed to clientSocket.once("data", async (chunk) =>
{...})) in a try/catch and handle failures from getOrCreateDomainTLS(domain) and
any TLS initialization steps: on error, write an HTTP/1.1 502 Bad Gateway
response (e.g. clientSocket.write("HTTP/1.1 502 Bad Gateway\r\n\r\n")), then
destroy/end the client socket; ensure any resources (like any partially created
local sockets) are cleaned up and that promise rejections from
getOrCreateDomainTLS are caught so they don't remain unhandled.

Comment on lines +91 to +100
```ts
  connectProxy.on("error", (err) => {
    console.error("[Proxy] Error:", err.message);
  });

  await new Promise<void>((resolve) => {
    connectProxy!.listen(port, "127.0.0.1", () => {
      console.log(`[Proxy] CONNECT proxy on 127.0.0.1:${port}`);
      resolve();
    });
  });
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "proxy-server.ts" | head -20

Repository: GeiserX/AgentTap

Length of output: 91


🏁 Script executed:

cat -n ./src/bun/proxy/proxy-server.ts | sed -n '80,110p'

Repository: GeiserX/AgentTap

Length of output: 1097


🏁 Script executed:

cat -n ./src/bun/proxy/proxy-server.ts | sed -n '1,95p'

Repository: GeiserX/AgentTap

Length of output: 4108


Reject startProxy() when listen() fails.

If bind fails, the error handler (lines 91–93) only logs—it doesn't reject the promise. startProxy() never settles, leaving the tray stuck in "starting" and the caller unable to reach its .catch() path.

Suggested fix
-  await new Promise<void>((resolve) => {
-    connectProxy!.listen(port, "127.0.0.1", () => {
-      console.log(`[Proxy] CONNECT proxy on 127.0.0.1:${port}`);
-      resolve();
-    });
-  });
+  await new Promise<void>((resolve, reject) => {
+    const server = connectProxy!;
+    const onError = (err: Error) => reject(err);
+    server.once("error", onError);
+    server.listen(port, "127.0.0.1", () => {
+      server.off("error", onError);
+      console.log(`[Proxy] CONNECT proxy on 127.0.0.1:${port}`);
+      resolve();
+    });
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/proxy/proxy-server.ts` around lines 91 - 100, The promise returned
around connectProxy.listen never rejects on bind errors because the "error"
handler only logs; update the Promise in startProxy so its executor accepts
(resolve, reject), attach a one-time "error" listener on connectProxy that calls
reject(err) (or rejects with a descriptive Error) and a one-time "listening"
listener that calls resolve(); ensure you remove the opposite listener on
success/failure (or use once) so connectProxy's "error" handler no longer just
logs but propagates the failure to the Promise; reference connectProxy and the
startProxy promise/listen call to locate where to add reject behavior.
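The suggested fix above can be verified in isolation. A self-contained sketch under stated assumptions — `listenOrReject` is a hypothetical name for illustration, not code from this PR:

```typescript
import net from "node:net";

// Hypothetical helper mirroring the suggested fix: resolve on "listening",
// reject on the bind error that the current code only logs.
function listenOrReject(server: net.Server, port: number, host: string): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const onError = (err: Error) => reject(err);
    server.once("error", onError);
    server.listen(port, host, () => {
      server.off("error", onError);
      resolve();
    });
  });
}

// Binding a second listener to the same port now rejects with EADDRINUSE
// instead of leaving the caller's promise pending forever.
(async () => {
  const a = net.createServer();
  await listenOrReject(a, 0, "127.0.0.1"); // port 0 = ephemeral
  const { port } = a.address() as net.AddressInfo;
  const b = net.createServer();
  try {
    await listenOrReject(b, port, "127.0.0.1");
  } catch (err) {
    console.log((err as NodeJS.ErrnoException).code); // EADDRINUSE
  }
  a.close();
  b.close();
})();
```

With this shape, the tray's `.catch()` path fires on bind failure instead of staying in "starting".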

Comment on lines +192 to +200
if (contentLength >= 0 && bodyBytes >= contentLength) {
finalize();
upstream.destroy();
} else if (isChunked) {
const fullBuf = Buffer.concat(responseChunks as unknown as Uint8Array[]);
const bodyBuf = fullBuf.subarray(headerEndIndex);
if (bodyBuf.includes(CHUNKED_TERMINATOR)) {
finalize();
upstream.destroy();

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "proxy-server.ts" -path "*/src/bun/proxy/*" | head -5

Repository: GeiserX/AgentTap

Length of output: 91


🏁 Script executed:

cat -n src/bun/proxy/proxy-server.ts | sed -n '180,210p'

Repository: GeiserX/AgentTap

Length of output: 1367


🏁 Script executed:

cat -n src/bun/proxy/proxy-server.ts | sed -n '230,250p'

Repository: GeiserX/AgentTap

Length of output: 959


Replace clientSocket.destroy() with graceful close in upstream close handler.

Line 243 calls destroy() on the client socket, which abruptly terminates the connection and discards buffered writes. When upstream.destroy() is triggered at lines 194 or 200, it fires the close event that immediately calls clientSocket.destroy(), potentially losing response bytes still in the write queue. Use clientSocket.end() instead to flush buffered data before closing.

Fix
-  upstream.on("close", () => { finalize(); clientSocket.destroy(); });
+  upstream.on("close", () => {
+    finalize();
+    if (!clientSocket.writableEnded) clientSocket.end();
+  });

Also applies to: 242-243

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/proxy/proxy-server.ts` around lines 192 - 200, The upstream close
handler currently calls clientSocket.destroy(), which aborts and discards
buffered writes when upstream.destroy() triggers; change the handler to call
clientSocket.end() (or use clientSocket.end() with a callback) so the socket
performs a graceful close and flushes any queued response bytes. Locate the
upstream 'close'/'end' event handler that invokes clientSocket.destroy()
(referencing clientSocket and upstream.destroy in proxy-server.ts) and replace
destroy() calls with end(), ensuring both places where upstream.destroy() is
invoked (the non-chunked and chunked response branches around where finalize()
is called) use the updated graceful close behavior.
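To see why `end()` is the safer close here, a standalone sketch with a hypothetical `roundTrip` helper (ephemeral port, illustrative payload — not code from this PR):

```typescript
import net from "node:net";

// end() flushes queued writes and sends FIN; destroy() aborts immediately
// and can discard bytes still sitting in the write queue.
function roundTrip(): Promise<string> {
  return new Promise((resolve) => {
    const server = net.createServer((sock) => {
      sock.write("queued response bytes");
      sock.end(); // graceful close: the client still receives the payload
      // sock.destroy() here could drop the queued write on the floor
    });
    server.listen(0, "127.0.0.1", () => {
      const { port } = server.address() as net.AddressInfo;
      const client = net.connect(port, "127.0.0.1");
      let received = "";
      client.on("data", (chunk) => (received += chunk));
      client.on("end", () => {
        server.close();
        resolve(received);
      });
    });
  });
}

roundTrip().then((got) => console.log(got)); // queued response bytes
```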

Comment on lines +25 to +27
const cipherSuitesLen = data.readUInt16BE(offset);
offset += 2 + cipherSuitesLen;
if (offset >= data.length) return null;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify current guard coverage around cipher-suite length parsing
rg -n -C2 'Cipher suites|readUInt16BE\(offset\)' src/bun/proxy/sni-parser.ts

Repository: GeiserX/AgentTap

Length of output: 650


🏁 Script executed:

cat -n src/bun/proxy/sni-parser.ts | head -70

Repository: GeiserX/AgentTap

Length of output: 2556


🌐 Web query:

Node.js Buffer readUInt16BE RangeError out of bounds behavior

💡 Result:

When calling buf.readUInt16BE(offset) on a Node.js Buffer where the offset is out of bounds (i.e., offset < 0 or offset > buf.length - 2), it throws a RangeError with the message "The value of "offset" is out of range. It must be >= 0 and <= [buf.length - 2]. Received [offset]". This behavior is consistent across Node.js versions, including recent ones like v25.9.0. The official documentation states that offset must satisfy 0 <= offset <= buf.length - 2, and examples explicitly show it throwing ERR_OUT_OF_RANGE for invalid offsets, such as buf.readUInt16BE(2) on a 3-byte buffer. In some contexts or older versions, out-of-bounds access might manifest as ERR_BUFFER_OUT_OF_BOUNDS, but for readUInt16BE specifically the error is ERR_OUT_OF_RANGE, from the bounds check on the offset parameter before reading. Example from sources:

const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16BE(2).toString(16));
// Throws RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range.
// It must be >= 0 and <= 2. Received 3
//     at boundsError (internal/buffer.js:49:9)
//     at Buffer.readUInt16BE (internal/buffer.js:215:5)


Add bounds guard before readUInt16BE(offset) on line 25.

Line 25 throws RangeError when offset + 2 > data.length because readUInt16BE requires offset ≤ length - 2. The check on line 22 only validates offset < length, which is insufficient. The function design expects graceful handling of truncated input.

Proposed fix
   // Cipher suites: length(2) + data
+  if (offset + 2 > data.length) return null;
   const cipherSuitesLen = data.readUInt16BE(offset);
   offset += 2 + cipherSuitesLen;
   if (offset >= data.length) return null;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  const cipherSuitesLen = data.readUInt16BE(offset);
-  offset += 2 + cipherSuitesLen;
-  if (offset >= data.length) return null;
+  if (offset + 2 > data.length) return null;
+  const cipherSuitesLen = data.readUInt16BE(offset);
+  offset += 2 + cipherSuitesLen;
+  if (offset >= data.length) return null;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/proxy/sni-parser.ts` around lines 25 - 27, The code reads a 16-bit
length with data.readUInt16BE(offset) into cipherSuitesLen without first
ensuring there are at least 2 bytes available; add a bounds guard before that
call (e.g., check offset + 2 <= data.length or offset <= data.length - 2) and
return null if the buffer is too short, then proceed to advance offset and
validate the full cipher suites slice (offset + cipherSuitesLen) fits within
data.length before using it; update the logic around the
cipherSuitesLen/readUInt16BE usage in the SNI/TLS parsing function so it
gracefully handles truncated input instead of throwing RangeError.
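The guard is easy to sanity-check in isolation; `readLen16` below is an illustrative helper, not code from sni-parser.ts:

```typescript
// Illustrative guard: return null instead of letting readUInt16BE throw
// ERR_OUT_OF_RANGE when fewer than 2 bytes remain at the offset.
function readLen16(data: Buffer, offset: number): number | null {
  if (offset + 2 > data.length) return null; // readUInt16BE needs offset <= length - 2
  return data.readUInt16BE(offset);
}

console.log(readLen16(Buffer.from([0x12, 0x34]), 0)); // 4660 (0x1234)
console.log(readLen16(Buffer.from([0x12]), 0));       // null — an unguarded read would throw
```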

*/
export function parseSSEStream(raw: string): SSEChunk[] {
const chunks: SSEChunk[] = [];
const blocks = raw.split("\n\n");

⚠️ Potential issue | 🟠 Major

Handle CRLF SSE delimiters to avoid chunk coalescing.

Line 16 only splits on "\n\n". Streams using "\r\n\r\n" won’t split correctly, so multiple SSE events can collapse into one parsed chunk.

Proposed fix
 export function parseSSEStream(raw: string): SSEChunk[] {
   const chunks: SSEChunk[] = [];
-  const blocks = raw.split("\n\n");
+  const normalized = raw.replace(/\r\n/g, "\n");
+  const blocks = normalized.split(/\n\n+/);
@@
-    for (const line of block.split("\n")) {
+    for (const line of block.split("\n")) {

Also applies to: 25-38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/sse/sse-reassembler.ts` at line 16, The split in sse-reassembler.ts
currently uses raw.split("\n\n") (assigned to blocks) which fails for
CRLF-delimited streams; normalize CRLF or use a regex like
raw.split(/\r?\n\r?\n/) (or first replace(/\r\n/g, "\n") then split("\n\n")) to
correctly separate SSE events, and apply the same change to the other parsing
block(s) in the same file (the code region around lines 25-38) so all places
that split raw event chunks handle both "\n\n" and "\r\n\r\n".
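A minimal sketch of the CRLF-tolerant split; `splitSSEBlocks` is illustrative, and parseSSEStream's field handling is not reproduced here:

```typescript
// Split SSE event blocks on either "\n\n" or "\r\n\r\n".
function splitSSEBlocks(raw: string): string[] {
  return raw.split(/\r?\n\r?\n/).filter((block) => block.length > 0);
}

const lfStream = "data: a\n\ndata: b\n\n";
const crlfStream = "data: a\r\n\r\ndata: b\r\n\r\n";
console.log(splitSSEBlocks(lfStream).length);   // 2
console.log(splitSSEBlocks(crlfStream).length); // 2 — a bare "\n\n" split would coalesce this to 1
```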

Comment on lines +51 to +55
if (domain.includes("anthropic")) {
return reassembleAnthropic(chunks);
}
if (domain.includes("openai")) {
return reassembleOpenAI(chunks);

⚠️ Potential issue | 🟡 Minor

Normalize domain before provider detection.

Line 51 and Line 54 perform case-sensitive checks. Hostnames are case-insensitive, so mixed-case inputs can skip provider-specific reassembly.

Proposed fix
 export function reassembleSSE(
   chunks: SSEChunk[],
   domain: string,
 ): ReassembledSSE {
-  if (domain.includes("anthropic")) {
+  const normalizedDomain = domain.toLowerCase();
+  if (normalizedDomain.includes("anthropic")) {
     return reassembleAnthropic(chunks);
   }
-  if (domain.includes("openai")) {
+  if (normalizedDomain.includes("openai")) {
     return reassembleOpenAI(chunks);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/sse/sse-reassembler.ts` around lines 51 - 55, The provider detection
is using case-sensitive includes on domain which can miss mixed-case hostnames;
normalize domain to lowercase before checks by replacing usages of
domain.includes("anthropic") and domain.includes("openai") with a lowercase
comparison (e.g., compute a local lowercasedDomain = domain.toLowerCase() and
use lowercasedDomain.includes("anthropic") /
lowercasedDomain.includes("openai")), and update any related branches
(reassembleAnthropic, reassembleOpenAI) to use this normalized value so
provider-specific reassembly runs reliably for mixed-case hostnames.
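A standalone sketch of the normalized check; `detectProvider` is an illustrative stand-in for the branches above:

```typescript
// Case-insensitive provider detection: hostnames are case-insensitive,
// so normalize before substring checks.
function detectProvider(domain: string): "anthropic" | "openai" | "unknown" {
  const normalized = domain.toLowerCase();
  if (normalized.includes("anthropic")) return "anthropic";
  if (normalized.includes("openai")) return "openai";
  return "unknown";
}

console.log(detectProvider("API.Anthropic.com")); // anthropic
console.log(detectProvider("api.openai.com"));    // openai
```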

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/bun/index.ts`:
- Around line 81-86: The tray item currently sets label to only the bind address
using state.proxyPort so the proxyStatus (starting/running/error) never appears;
update the tray item construction (the object with type:"normal", label,
action:"proxy-port", enabled:false) to include state.proxyStatus in the label
(or map state.proxyStatus to a human string) so the label becomes e.g. `Proxy:
127.0.0.1:${state.proxyPort} — ${state.proxyStatus}` (or similar), ensuring you
reference the proxyStatus property from state where the tray items are built.
- Around line 188-195: The log is printing raw trace.request.url (which may
contain sensitive query params); update the trace handler registered in
traceEmitter.on("trace", ...) to redact the URL's query string before logging:
parse trace.request.url (using URL in a try/catch to handle absolute URLs and
fall back to stripping anything after '?' for non-absolute values), replace the
query portion with either an empty string or a placeholder like "?REDACTED", and
use that redactedUrl in the console.log call while leaving the rest of the
message (provider, model, method, status, duration, tokens) unchanged.
- Around line 122-133: Prevent stuck "starting" and duplicate starts by first
guarding in the start path: before calling startProxy, check state.captureStatus
(and/or state.proxyStatus) and return early if captureStatus !== "inactive" or
proxyStatus !== "idle" to avoid scheduling multiple starts; then wrap the
startProxy(state.proxyPort) call with a local timeout watchdog that, if the
start promise neither resolves nor rejects within a short window (e.g., 10s),
sets state.captureStatus = "error", state.proxyStatus = "error", calls
buildTrayMenu(), and clears the watchdog once the promise settles; keep
references to state.proxyStatus, state.captureStatus, startProxy and
buildTrayMenu so the fix is easy to locate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d03e1b36-4813-40f9-93eb-721edc39a26a

📥 Commits

Reviewing files that changed from the base of the PR and between 45f923f and aa4dc63.

⛔ Files ignored due to path filters (2)
  • src/views/mainview/assets/tray-icon-template.png is excluded by !**/*.png
  • src/views/mainview/assets/tray-icon-template@2x.png is excluded by !**/*.png
📒 Files selected for processing (2)
  • electrobun.config.ts
  • src/bun/index.ts
✅ Files skipped from review due to trivial changes (1)
  • electrobun.config.ts

Comment thread src/bun/index.ts
Comment on lines +81 to +86
{
type: "normal",
label: `Proxy: 127.0.0.1:${state.proxyPort}`,
action: "proxy-port",
enabled: false,
},

⚠️ Potential issue | 🟡 Minor

Render proxyStatus in the tray item.

Line 83 only shows the bind address, so starting / running / error never becomes visible even though the state tracks it.

Proposed fix
     {
       type: "normal",
-      label: `Proxy: 127.0.0.1:${state.proxyPort}`,
+      label: `Proxy: ${state.proxyStatus} · 127.0.0.1:${state.proxyPort}`,
       action: "proxy-port",
       enabled: false,
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/index.ts` around lines 81 - 86, The tray item currently sets label to
only the bind address using state.proxyPort so the proxyStatus
(starting/running/error) never appears; update the tray item construction (the
object with type:"normal", label, action:"proxy-port", enabled:false) to include
state.proxyStatus in the label (or map state.proxyStatus to a human string) so
the label becomes e.g. `Proxy: 127.0.0.1:${state.proxyPort} —
${state.proxyStatus}` (or similar), ensuring you reference the proxyStatus
property from state where the tray items are built.

Comment thread src/bun/index.ts
Comment on lines +122 to +133
state.proxyStatus = "starting";
startProxy(state.proxyPort).then(() => {
state.captureStatus = "active";
state.proxyStatus = "running";
console.log("[AgentTap] Capture started on port", state.proxyPort);
buildTrayMenu();
}).catch((err: unknown) => {
state.captureStatus = "error";
state.proxyStatus = "error";
console.error("[AgentTap] Failed to start proxy:", err);
buildTrayMenu();
});

⚠️ Potential issue | 🟠 Major

This startup path can get stuck in starting.

This code assumes startProxy() rejects on failure, but src/bun/proxy/proxy-server.ts:49-103 only logs server errors and never rejects the startup promise. A bind failure leaves the tray in starting, and repeated clicks can schedule multiple starts while captureStatus is still "inactive".

Local guard to add here
-      state.proxyStatus = "starting";
-      startProxy(state.proxyPort).then(() => {
+      if (state.proxyStatus === "starting") return;
+      state.proxyStatus = "starting";
+      buildTrayMenu();
+      void startProxy(state.proxyPort).then(() => {
         state.captureStatus = "active";
         state.proxyStatus = "running";
         console.log("[AgentTap] Capture started on port", state.proxyPort);
         buildTrayMenu();
       }).catch((err: unknown) => {

This still needs startProxy() to reject listen / server startup errors instead of only logging them.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/index.ts` around lines 122 - 133, Prevent stuck "starting" and
duplicate starts by first guarding in the start path: before calling startProxy,
check state.captureStatus (and/or state.proxyStatus) and return early if
captureStatus !== "inactive" or proxyStatus !== "idle" to avoid scheduling
multiple starts; then wrap the startProxy(state.proxyPort) call with a local
timeout watchdog that, if the start promise neither resolves nor rejects within
a short window (e.g., 10s), sets state.captureStatus = "error",
state.proxyStatus = "error", calls buildTrayMenu(), and clears the watchdog once
the promise settles; keep references to state.proxyStatus, state.captureStatus,
startProxy and buildTrayMenu so the fix is easy to locate.

Comment thread src/bun/index.ts
Comment on lines +188 to +195
traceEmitter.on("trace", (trace) => {
state.tracesCount++;
const tokens = trace.tokensIn || trace.tokensOut
? ` (${trace.tokensIn ?? "?"}→${trace.tokensOut ?? "?"} tokens)`
: "";
console.log(
`[Trace] ${trace.provider} ${trace.model ?? "unknown"} ${trace.request.method} ${trace.request.url} → ${trace.response.status} ${trace.durationMs}ms${tokens}`,
);

⚠️ Potential issue | 🟠 Major

Do not log raw captured URLs.

trace.request.url can include sensitive query params. Writing it verbatim to console/system logs creates an avoidable data leak path.

Redact the query string before logging
   traceEmitter.on("trace", (trace) => {
     state.tracesCount++;
+    const safeUrl = (() => {
+      try {
+        const url = new URL(trace.request.url);
+        return `${url.origin}${url.pathname}`;
+      } catch {
+        return trace.request.url.split("?")[0];
+      }
+    })();
     const tokens = trace.tokensIn || trace.tokensOut
       ? ` (${trace.tokensIn ?? "?"}→${trace.tokensOut ?? "?"} tokens)`
       : "";
     console.log(
-      `[Trace] ${trace.provider} ${trace.model ?? "unknown"} ${trace.request.method} ${trace.request.url} → ${trace.response.status} ${trace.durationMs}ms${tokens}`,
+      `[Trace] ${trace.provider} ${trace.model ?? "unknown"} ${trace.request.method} ${safeUrl} → ${trace.response.status} ${trace.durationMs}ms${tokens}`,
     );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bun/index.ts` around lines 188 - 195, The log is printing raw
trace.request.url (which may contain sensitive query params); update the trace
handler registered in traceEmitter.on("trace", ...) to redact the URL's query
string before logging: parse trace.request.url (using URL in a try/catch to
handle absolute URLs and fall back to stripping anything after '?' for
non-absolute values), replace the query portion with either an empty string or a
placeholder like "?REDACTED", and use that redactedUrl in the console.log call
while leaving the rest of the message (provider, model, method, status,
duration, tokens) unchanged.
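The redaction helper proposed above can be exercised on its own; `redactUrl` is illustrative and the URLs are made up:

```typescript
// Strip the query string before logging: keep origin + path for absolute
// URLs, and fall back to splitting on "?" for relative request targets
// that the URL constructor rejects.
function redactUrl(rawUrl: string): string {
  try {
    const u = new URL(rawUrl);
    return `${u.origin}${u.pathname}`;
  } catch {
    return rawUrl.split("?")[0];
  }
}

console.log(redactUrl("https://api.openai.com/v1/chat/completions?api_key=sk-secret"));
// https://api.openai.com/v1/chat/completions
console.log(redactUrl("/v1/messages?beta=true")); // /v1/messages
```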

@GeiserX GeiserX merged commit 7814b75 into main Apr 14, 2026
3 of 4 checks passed
@GeiserX GeiserX deleted the phase-1/mitm-proxy branch April 14, 2026 12:11