tag: "v4"
---
The following instructions will use docker compose to spin up a Trigger.dev instance. Make sure to read the self-hosting [overview](/self-hosting/overview) first.

We've split the compose files into Webapp and Worker components so you can easily run them independently. This will also allow you to scale your workers as needed.
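
As a rough sketch of what that looks like in practice (the file paths below are illustrative, not the actual names from the Trigger.dev repository):

```bash
# Illustrative only - use the compose files shipped in the Trigger.dev repo.
# Webapp stack: webapp, postgres, redis, and related services.
docker compose -f webapp/docker-compose.yml up -d

# Worker stack: supervisor plus the containers that execute your runs.
# Run this on the same machine, or repeat it on additional worker machines.
docker compose -f worker/docker-compose.yml up -d
```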
As self-hosted deployments tend to have unique requirements and configurations, we don't provide specific advice for securing your deployment, scaling up, or improving reliability.

Should the burden ever get too much, we'd be happy to see you on [Trigger.dev cloud](https://trigger.dev/pricing) where we deal with these concerns for you.

**Warning:** This guide alone is unlikely to result in a production-ready deployment. Security, scaling, and reliability concerns are not fully addressed here.

## What's new?
Goodbye v3, hello v4! We made quite a few changes:

- **Much simpler setup.** Provider + coordinator = supervisor. No more startup scripts. Just `docker compose up`.
- **Support for multiple worker machines.** This is a big one, and we're very excited about it! You can now scale your workers horizontally as needed.
- **Resource limits enforced by default.** This means that tasks will be limited to the total CPU and RAM of the machine preset, preventing noisy neighbours.
- **No direct Docker socket access.** The compose file now comes with [Docker Socket Proxy](https://github.com/Tecnativa/docker-socket-proxy) by default. Yes, you want this.
- **No host networking.** All containers are now running with network isolation, using only the network access they need.
- **No checkpoint support.** This was only ever an experimental self-hosting feature and not recommended. It caused a bunch of issues. We decided to focus on the core features instead.
- **Built-in container registry and object storage.** You can now deploy and execute tasks without needing third party services for this.
- **Improved CLI commands.** You don't need any additional flags to deploy anymore, and there's a new `switch` command to easily switch between profiles (see the sketch after this list).
- **Whitelisting for GitHub OAuth.** Any whitelisted email addresses will now also apply to sign-ins via GitHub, unlike v3 where they only applied to magic links.
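
Below is a sketch of the improved CLI flow mentioned above. The deploy command is standard; the exact `switch` invocation is an assumption based on its description here, so check the CLI help for your version:

```bash
# Deploy to your instance - v4 no longer needs extra flags for this.
npx trigger.dev@latest deploy

# Assumed usage: switch between CLI profiles, e.g. cloud vs. self-hosted.
npx trigger.dev@latest switch
```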
## Requirements

These are the minimum requirements for running the webapp and worker components. They can run on the same machine, or on separate machines.

It's fine to run everything on the same machine for testing. To be able to scale your workers, you will want to run them separately.
### Prerequisites
To run the webapp and worker components, you will need Docker and Docker Compose installed on each machine.
### Webapp

This will host the webapp, postgres, redis, and related services.
- 2+ vCPU
- 4+ GB RAM
### Worker
This will host the supervisor and all of the runs.
- 2+ vCPU
- 4+ GB RAM

How many workers and resources you need will depend on your workloads and concurrency requirements.

For example:
- 10 concurrency x `small-1x` (0.5 vCPU, 0.5 GB RAM) = 5 vCPU and 5 GB RAM
- 20 concurrency x `small-1x` (0.5 vCPU, 0.5 GB RAM) = 10 vCPU and 10 GB RAM
- 100 concurrency x `small-1x` (0.5 vCPU, 0.5 GB RAM) = 50 vCPU and 50 GB RAM
- 100 concurrency x `small-2x` (1 vCPU, 1 GB RAM) = 100 vCPU and 100 GB RAM

You may need to spin up multiple workers to handle peak concurrency. The good news is you don't have to know the exact numbers upfront. You can start with a single worker and add more as needed.
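
To sanity-check these numbers for your own setup, the estimate is simply peak concurrency multiplied by the machine preset, as in this rough sketch:

```bash
# Back-of-the-envelope worker sizing: concurrency x machine preset.
CONCURRENCY=100
PRESET_VCPU=0.5    # small-1x
PRESET_RAM_GB=0.5  # small-1x

awk -v c="$CONCURRENCY" -v cpu="$PRESET_VCPU" -v ram="$PRESET_RAM_GB" \
  'BEGIN { printf "~%g vCPU and ~%g GB RAM across all workers\n", c * cpu, c * ram }'
# -> ~50 vCPU and ~50 GB RAM across all workers
```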