Description
Provide environment information
```
System:
  OS: Linux 6.5 Ubuntu 22.04.4 LTS 22.04.4 LTS (Jammy Jellyfish)
  CPU: (4) x64 unknown
  Memory: 14.89 GB / 19.34 GB
  Container: Yes
  Shell: 5.1.16 - /bin/bash
Binaries:
  Node: 22.6.0 - ~/.nvm/versions/node/v22.6.0/bin/node
  npm: 10.8.2 - ~/.nvm/versions/node/v22.6.0/bin/npm
  bun: 1.1.22 - ~/.bun/bin/bun
```
Describe the bug
I'm using a self-hosted Trigger.dev stack deployed via Docker, following the instructions [here](https://trigger.dev/docs/open-source-self-hosting) and [triggerdotdev/docker](https://github.com/triggerdotdev/docker).
The issue I noticed is that over 9,000 containers with names starting with `task-run` accumulate overnight, all with the `exited` status. I assume that Trigger.dev runs tasks in separate containers, but these containers are not being cleaned up automatically.
Steps to Reproduce:
- Deploy Trigger.dev self-hosted using the official Docker setup.
- Run tasks continuously for an extended period (e.g., overnight).
- Observe the accumulation of containers named `task-run*` with an `exited` status (see the check below).
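
For reference, the buildup can be confirmed with the standard Docker CLI; the `task-run` name prefix is taken from the container names observed on my host:

```bash
# List all exited containers whose name matches task-run
docker ps -a --filter "name=task-run" --filter "status=exited"

# Count them
docker ps -aq --filter "name=task-run" --filter "status=exited" | wc -l
```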
Expected Behavior:
- Containers used for tasks should be cleaned up automatically after execution.
- Exited containers should not accumulate indefinitely.
Observed Behavior:
- Over 9,000 containers with the prefix `task-run` appear in the Docker environment overnight.
- These containers are all in the `exited` state, consuming resources and requiring manual cleanup.
Environment Details:
- Trigger.dev version: Latest (as of 16 December 2024)
- Deployment method: Self-hosted via Docker ([triggerdotdev/docker](https://github.com/triggerdotdev/docker))
- Container environment: LXC on Proxmox
- Docker version:
```
Client: Docker Engine - Community
 Version:    27.0.3
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.15.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.28.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 9410
  Running: 12
  Paused: 0
  Stopped: 9398
 Images: 16
 Server Version: 27.0.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.5.11-4-pve
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 19.34GiB
 Name: bb
 ID: 9b2f9f0c-244f-457d-9df9-cf75116be946
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: *
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
Additional Information:
- Is there an existing configuration to auto-remove containers after task execution (e.g., the `--rm` flag)?
- Could this be related to how Trigger.dev handles task container lifecycles?
- Are there any recommendations or scripts for automatically cleaning up exited containers? (A sketch of one possible approach follows this list.)
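
In the meantime, one possible approach (my own assumption, not an official Trigger.dev recommendation) is to schedule a filtered prune, e.g. hourly via cron:

```bash
# Remove stopped containers created more than 24 hours ago.
# NOTE: this prunes ALL stopped containers on the host, not just
# Trigger.dev's task-run ones, so use with care on shared hosts.
docker container prune -f --filter "until=24h"
```

A crontab entry such as `0 * * * * docker container prune -f --filter "until=24h"` would run this hourly.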
Temporary Workaround:
Manually running:
```bash
docker container prune
```
This removes all exited containers, but it's not a long-term solution.
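
A more targeted variant (assuming the `task-run` name prefix stays stable) removes only the exited task containers instead of every stopped container on the host:

```bash
# Remove only exited containers whose name matches task-run
docker ps -aq --filter "name=task-run" --filter "status=exited" | xargs -r docker rm
```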
Thank you for your support! Any guidance on resolving this container accumulation issue would be greatly appreciated.
Reproduction repo
https://github.com/triggerdotdev/docker
To reproduce
- Deploy Trigger.dev self-hosted using the official Docker setup.
- Run tasks continuously for an extended period (e.g., overnight).
- Observe the accumulation of containers named `task-run*` with an `exited` status.