28 changes: 24 additions & 4 deletions README.md
@@ -54,18 +54,26 @@ Then edit the `.env` file to set your specific configuration values.

### Starting the services

-You can start the services in two ways, depending on your environment:
+You can start the services in different ways, depending on your environment:

-#### 1. Development
+#### 1. Local Development

Use `docker-compose.local.yml` for local development overrides:

```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d
```

#### 2. Development (Default)

```bash
docker compose up -d
```

-#### 2. Production
+#### 3. Production

```bash
-docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
+docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

### Building custom images
@@ -91,6 +99,7 @@ docker compose ps
```

You should see containers for:

- n8n
- temporal
- temporal-ui
@@ -106,13 +115,15 @@ scripts/check_services.sh
```

This will check:

- n8n health endpoint
- Temporal UI web interface
- OpenSearch API
- Temporal server gRPC port
- PostgreSQL database connection

Example output:

```text
Checking service availability...
Checking n8n at http://localhost:5678/healthz... ACCESSIBLE ✅ (HTTP 200)
@@ -158,11 +169,15 @@ docker compose down -v

Data for all services is persisted using Docker volumes. The storage location depends on the environment:

- **Local Development (using `docker-compose.local.yml`)**: Local-specific overrides and configuration. This file must be specified explicitly with the `-f` flag; it is not loaded automatically.

- **Development (default, using `docker-compose.yml`)**: Docker uses anonymous volumes for each service. These are managed by Docker and are not bound to any directory in your project. Data persists as long as the volume exists, but is not directly accessible from the project folder.

- **Production (using `docker-compose.prod.yml`)**: Volumes are explicitly bound to host directories under `/data/` for persistent storage and easier backup/restore.

> **Note:**
>
> - `docker-compose.local.yml` is only used when explicitly specified with the `-f` flag.
> - Removing volumes with `docker compose down -v` will delete all persisted data.

## Service Ports
@@ -178,6 +193,7 @@ Data for all services is persisted using Docker volumes. The storage location depends on the environment:
If you encounter any issues:

1. Check container logs:

```bash
docker logs temporal
docker logs automatization-n8n-1
@@ -192,16 +208,19 @@ If you encounter any issues:
To use GitHub-related functions with Cursor's Model Context Protocol (MCP), you need to configure a GitHub Personal Access Token:

1. Create the secrets directory if it doesn't exist:

```bash
mkdir -p ~/.cursor/mcp
```

2. Copy the example file to `~/.cursor/mcp/.env` (or edit it if it already exists):

```bash
cp mcp.env.example ~/.cursor/mcp/.env
```

3. Add your GitHub Personal Access Token to `~/.cursor/mcp/.env`:

```text
GITHUB_PERSONAL_ACCESS_TOKEN=your_token_here
```
@@ -211,6 +230,7 @@ To use GitHub-related functions with Cursor's Model Context Protocol (MCP), you need to configure a GitHub Personal Access Token:
To obtain a GitHub Personal Access Token, either ask @killev or:

1. Go to GitHub Settings > Developer settings > Personal access tokens
2. Generate a new token with appropriate permissions (repo, workflow, etc.)
3. Copy the token and add it to the `.env` file as shown above
38 changes: 38 additions & 0 deletions docker-compose.local.yml
@@ -0,0 +1,38 @@
services:
  redmine-tunnel:
    container_name: redmine-tunnel
    image: alpine:latest
    command: >
      sh -c "apk add --no-cache openssh &&
      ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
    volumes:
      - ~/.ssh:/root/.ssh:ro
    ports:
      - '3306:3306'
    networks:
      - app-network
    environment:
      - SSH_KEY=/root/.ssh/id_rsa
Comment on lines +2 to +15

⚠️ Potential issue | 🔴 Critical

Critical: Hardcoded infrastructure endpoints and credentials exposed.

Line 7 and other parts of the redmine-tunnel service expose sensitive infrastructure details that must never be committed:

  • Hardcoded staging hostname: ubuntu@staging.forecasting-v2.gluzdov.com
  • Hardcoded AWS RDS endpoint: redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com
  • Hardcoded internal IP: 10.4.3.184:31000

These represent production/staging infrastructure and should be externalized to environment variables or a .env.local file that is excluded from version control.

Refactor to use environment variable substitution:

 services:
   redmine-tunnel:
     container_name: redmine-tunnel
     image: alpine:latest
     command: >
       sh -c "apk add --no-cache openssh && 
-             ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
+             ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ${REDMINE_SSH_USER}@${REDMINE_SSH_HOST} -N -L 0.0.0.0:3306:${REDMINE_DB_HOST}:3306 -L 0.0.0.0:31000:${REDMINE_FORWARD_HOST}:31000"

Ensure .env.local is added to .gitignore and create a .env.local.example template with placeholder values.

🤖 Prompt for AI Agents
In docker-compose.local.yml around lines 2 to 15 the redmine-tunnel service
contains hardcoded sensitive endpoints (staging SSH host, RDS endpoint and
internal IP/port); replace those literal values with environment-variable
placeholders (e.g. ${SSH_REMOTE_USER}@${SSH_REMOTE_HOST}, ${RDS_HOST},
${INTERNAL_IP}:${INTERNAL_PORT}) and reference them via docker-compose variable
substitution, ensure the SSH key path is configurable (e.g. ${SSH_KEY_PATH})
rather than hardcoded, and update the compose file to read from a .env.local
file; add .env.local to .gitignore and create a .env.local.example with
placeholder values for all new variables so no secrets are committed.
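
As a rough sketch of the `.env.local.example` template suggested above — the variable names follow the proposed diff and are placeholders only, not values from the repository:

```bash
# .env.local.example — placeholder values only; copy to .env.local and fill in real endpoints.
# Variable names are the ones proposed in the suggested diff, not existing project config.
REDMINE_SSH_USER=ubuntu
REDMINE_SSH_HOST=staging.example.com
REDMINE_DB_HOST=example-rds-endpoint.us-east-1.rds.amazonaws.com
REDMINE_FORWARD_HOST=10.0.0.10
```

Since Compose only auto-loads `.env`, the local file would be passed explicitly, e.g. `docker compose --env-file .env.local -f docker-compose.yml -f docker-compose.local.yml up -d`.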

Comment on lines +6 to +15

⚠️ Potential issue | 🟠 Major

Major: SSH security misconfigurations and unused environment variable.

Multiple SSH-related issues:

  1. Line 6: StrictHostKeyChecking=no disables host key verification, bypassing critical SSH security. This should only be used with extreme caution and be justified in comments.

  2. Line 15: SSH_KEY environment variable is defined but never used in the actual SSH command (line 7). The command hardcodes /root/.ssh/id_rsa instead. This inconsistency suggests incomplete refactoring or dead code.

  3. Line 7: SSH key path is hardcoded to /root/.ssh/id_rsa, assuming this specific filename. Consider using the SSH_KEY env var consistently or making it configurable.

Apply this diff to fix the inconsistency:

   redmine-tunnel:
     container_name: redmine-tunnel
     image: alpine:latest
     command: >
       sh -c "apk add --no-cache openssh && 
-             ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@staging.forecasting-v2.gluzdov.com -N -L 0.0.0.0:3306:redmine-pr-rds-db-read.c1kaki1qbk4o.us-east-1.rds.amazonaws.com:3306 -L 0.0.0.0:31000:10.4.3.184:31000"
+             ssh -o StrictHostKeyChecking=accept-new -i ${SSH_KEY} ${REDMINE_SSH_USER}@${REDMINE_SSH_HOST} -N -L 0.0.0.0:3306:${REDMINE_DB_HOST}:3306 -L 0.0.0.0:31000:${REDMINE_FORWARD_HOST}:31000"
     volumes:
       - ~/.ssh:/root/.ssh:ro
     ports:
       - '3306:3306'
     networks:
       - app-network
     environment:
       - SSH_KEY=/root/.ssh/id_rsa

Consider StrictHostKeyChecking=accept-new as a safer alternative to no, which accepts new keys but still validates known hosts.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In docker-compose.local.yml around lines 6 to 15, the SSH command disables
host-key verification and hardcodes the private key path while the SSH_KEY env
var is unused; replace StrictHostKeyChecking=no with
StrictHostKeyChecking=accept-new (or add a brief comment if no change is
allowed), remove the hardcoded /root/.ssh/id_rsa in the ssh command and
reference the SSH_KEY environment variable instead (or remove SSH_KEY if you
prefer a fixed path), and ensure the mounted volume and permissions match the
chosen key path; keep the SSH_KEY env var consistent with the command or delete
it to avoid dead configuration.


  mongo-tunnel:
    container_name: mongo-tunnel
    image: alpine:latest
    command: >
      sh -c "apk add --no-cache openssh &&
      ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa ubuntu@forecasting-v2.gluzdov.com -N -L 0.0.0.0:31000:10.4.3.184:31000"
    volumes:
      - ~/.ssh:/root/.ssh:ro
    ports:
      - '31000:31000'
    networks:
      - app-network
    environment:
      - SSH_KEY=/root/.ssh/id_rsa
Comment on lines +17 to +30

⚠️ Potential issue | 🔴 Critical

Critical & Major: Hardcoded infrastructure endpoints and SSH misconfigurations (same as redmine-tunnel).

The mongo-tunnel service contains identical issues:

  • Line 22: Hardcoded hostname ubuntu@forecasting-v2.gluzdov.com and internal IP 10.4.3.184
  • Line 22: StrictHostKeyChecking=no disables host key verification
  • Line 30: SSH_KEY env var defined but unused; command hardcodes /root/.ssh/id_rsa

Apply the same refactoring as redmine-tunnel to externalize sensitive data and fix SSH security configuration.

🤖 Prompt for AI Agents
In docker-compose.local.yml around lines 17 to 30, the mongo-tunnel service
hardcodes the remote user/host and internal IP, disables host key verification,
and declares SSH_KEY env var but never uses it; replace the hardcoded values
with environment variables (e.g. MONGO_SSH_USER, MONGO_SSH_HOST,
MONGO_TARGET_HOST, MONGO_TARGET_PORT) and use the declared SSH_KEY variable as
the identity file path in the ssh command instead of a hardcoded path; remove
StrictHostKeyChecking=no and ensure host verification by either mounting a
known_hosts file or running ssh-keyscan at container startup to populate
/root/.ssh/known_hosts; ensure the private key is mounted read-only from the
host and referenced via the SSH_KEY env var so no secrets are baked into the
image or compose file.
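
A minimal sketch of the startup commands implied by that suggestion, assuming the `MONGO_SSH_USER`, `MONGO_SSH_HOST`, `MONGO_TARGET_HOST`, `MONGO_TARGET_PORT`, and `SSH_KEY` variables proposed above (none of these exist in the repository yet):

```bash
# Hypothetical container entrypoint: keep host-key verification enabled by
# pre-populating known_hosts instead of setting StrictHostKeyChecking=no.
apk add --no-cache openssh
mkdir -p /root/.ssh
ssh-keyscan -H "$MONGO_SSH_HOST" >> /root/.ssh/known_hosts
ssh -i "$SSH_KEY" "$MONGO_SSH_USER@$MONGO_SSH_HOST" -N \
  -L "0.0.0.0:31000:$MONGO_TARGET_HOST:$MONGO_TARGET_PORT"
```

A simpler alternative with fewer moving parts is `StrictHostKeyChecking=accept-new`, as suggested for the redmine-tunnel service above.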


  temporal-worker-main:
    env_file:
      - .env
    extra_hosts:
      - 'mongo1:host-gateway'
      - 'mongo2:host-gateway'
      - 'mongo3:host-gateway'
4 changes: 4 additions & 0 deletions workers/main/src/index.test.ts
@@ -6,6 +6,8 @@ describe('handleRunError', () => {
  let processExitSpy: ReturnType<typeof vi.spyOn>;

  beforeEach(() => {
    vi.useFakeTimers();

Comment on lines +9 to +10

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify timer usage in handleRunError implementation.

Fake timers are configured but the tests don't explicitly advance or control them. If handleRunError uses timers (e.g., delayed process.exit), the tests should verify that behavior with vi.advanceTimersByTime() or vi.runAllTimers(). If it doesn't use timers, this setup adds unnecessary overhead.

Run the following script to check if handleRunError uses timer functions:


🏁 Script executed:

#!/bin/bash
# Description: Check if handleRunError implementation uses setTimeout, setInterval, or other timer functions

# Search for handleRunError implementation and timer usage
ast-grep --pattern 'function handleRunError($$$) {
  $$$
}'

# Also search for timer-related calls in the index.ts file
rg -n 'setTimeout|setInterval|clearTimeout|clearInterval' workers/main/src/index.ts

Length of output: 44


I need to check the actual test implementations to see if they're advancing timers or if they're missing this critical step.


🏁 Script executed:

#!/bin/bash
# View the handleRunError implementation and surrounding context
cat workers/main/src/index.ts | head -60 | tail -20

echo "---"

# View the full test file to see if timers are being advanced
cat workers/main/src/index.test.ts

Length of output: 1929


Tests must verify the 100ms process.exit behavior when fake timers are enabled.

The test setup enables fake timers and mocks process.exit, but the tests never advance timers to trigger the setTimeout callback. Without calling vi.advanceTimersByTime(100) or vi.runAllTimers(), the process.exit(1) is never invoked and remains untested. Each test should advance timers after calling handleRunError() and verify that process.exit(1) was called.

Example fix:

it('should exit with code 1 after 100ms', () => {
  const error = new Error('test error');
  const logSpy = vi.spyOn(logger, 'error').mockImplementation(() => {});

  handleRunError(error);
  expect(logSpy).toHaveBeenCalledWith(
    `Error in main worker: ${error.message}`,
  );
  
  vi.advanceTimersByTime(100);
  expect(processExitSpy).toHaveBeenCalledWith(1);
  
  logSpy.mockRestore();
});
🤖 Prompt for AI Agents
In workers/main/src/index.test.ts around lines 9-10, tests enable fake timers
but never advance them so the 100ms setTimeout that calls process.exit(1) is
never executed; update each relevant test to advance timers (e.g.,
vi.advanceTimersByTime(100) or vi.runAllTimers()) after invoking
handleRunError(), then assert that the mocked process.exit was called with 1,
and restore any spies/mocks.

    // Mock process.exit to prevent actual process termination during tests
    processExitSpy = vi.spyOn(process, 'exit').mockImplementation(() => {
      throw new Error('process.exit called');
@@ -14,6 +16,8 @@ describe('handleRunError', () => {

  afterEach(() => {
    processExitSpy.mockRestore();
    vi.clearAllTimers();
    vi.useRealTimers();
  });

  it('should log the error', () => {