
Commit 82dec6d

Merge branch 'main' into fix/feb-miscellaneous-fixes
2 parents 03dde20 + c059570 commit 82dec6d

6 files changed: +125 −3 lines changed


docs/deploy-environment-variables.mdx

Lines changed: 64 additions & 1 deletion
@@ -66,6 +66,18 @@ You can edit an environment variable's values. You cannot edit the key name, you

</Steps>

## Local development

When running `npx trigger.dev dev`, the CLI automatically loads environment variables from these files in order (later files override any duplicate keys from earlier ones):

- `.env`
- `.env.development`
- `.env.local`
- `.env.development.local`
- `dev.vars`

These variables are available to your tasks via `process.env`. You don't need to use the `--env-file` flag for this automatic loading.
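For example, a minimal sketch assuming a hypothetical `MY_API_KEY` defined in both `.env` and `.env.local`:

```ts
// .env:        MY_API_KEY=default-key
// .env.local:  MY_API_KEY=local-override  (loaded later, so it wins)
import { task } from "@trigger.dev/sdk";

export const envDemo = task({
  id: "env-demo",
  run: async () => {
    // Logs "local-override" when run via `npx trigger.dev dev`
    console.log(process.env.MY_API_KEY);
  },
});
```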
## In your code

You can use our SDK to get and manipulate environment variables. You can also easily sync environment variables from another service into Trigger.dev.
@@ -360,4 +372,55 @@ This will read your .env.production file using dotenvx and sync the variables to
- Trigger.dev does not automatically detect .env.production or dotenvx files
- You can paste them manually into the dashboard
- Or sync them automatically using a build extension

## Multi-tenant applications

If you're building a multi-tenant application where each tenant needs different environment variables (like tenant-specific API keys or database credentials), you don't need a separate project for each tenant. Instead, use a single project and load tenant-specific secrets at runtime.

<Note>
This is different from [syncing environment variables at deploy time](#sync-env-vars-from-another-service).
Here, secrets are loaded dynamically during task execution, not synced to Trigger.dev's environment variables.
</Note>

### Recommended approach

Use a secrets service (Infisical, AWS Secrets Manager, HashiCorp Vault, etc.) to store tenant-specific secrets, then retrieve them at the start of each task run based on the tenant identifier in your payload or context.

**Important:** Never pass secrets in the task payload, as payloads are logged and visible in the dashboard.

### Example implementation

```ts
import { task } from "@trigger.dev/sdk";
import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

export const processTenantData = task({
  id: "process-tenant-data",
  run: async (payload: { tenantId: string; data: unknown }) => {
    // Retrieve the tenant-specific secret at runtime
    const client = new SecretsManagerClient({ region: "us-east-1" });
    const response = await client.send(
      new GetSecretValueCommand({
        SecretId: `tenants/${payload.tenantId}/supabase-key`,
      })
    );

    const supabaseKey = JSON.parse(response.SecretString!).SUPABASE_SERVICE_KEY;

    // Your task logic using the tenant-specific secret
    // ...
  },
});
```

You can use any secrets service; see the [sync env vars section](#sync-env-vars-from-another-service) for an example with Infisical.

### Benefits

- **Single codebase** - Deploy once, works for all tenants
- **Secure** - Secrets never appear in payloads or logs
- **Scalable** - No project limit constraints
- **Flexible** - Easy to add new tenants without redeploying

This approach allows you to support unlimited tenants with a single Trigger.dev project, avoiding the [project limit](/limits#projects) while maintaining security and separation of tenant data.
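For example, a hypothetical trigger call from your backend; only the tenant identifier travels in the payload, never the secret:

```ts
import { tasks } from "@trigger.dev/sdk";

// The tenant-specific secret is fetched inside the run, not passed here
await tasks.trigger("process-tenant-data", {
  tenantId: "tenant_123",
  data: { orderId: 42 },
});
```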

docs/idempotency.mdx

Lines changed: 6 additions & 2 deletions
@@ -428,18 +428,22 @@ export const parentTask = task({

});
```

-When resetting from outside a task (e.g., from your backend code), you must provide the `parentRunId`:
+When resetting from outside a task, you must provide the `parentRunId` if the key was created within a task context:

```ts
import { idempotencyKeys } from "@trigger.dev/sdk";

-// From your backend code - you need to know the parent run ID
+// If the key was created within a task, you need the parent run ID
await idempotencyKeys.reset("my-task", "my-key", {
  scope: "run",
  parentRunId: "run_abc123"
});
```

<Note>
If you triggered the task from backend code, all scopes behave as global (see [Triggering from backend code](#triggering-from-backend-code)). Use `scope: "global"` when resetting.
</Note>
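In that case, a minimal sketch using the same `reset` call with the global scope:

```ts
import { idempotencyKeys } from "@trigger.dev/sdk";

// Keys created from backend code behave as global, so no parentRunId is needed
await idempotencyKeys.reset("my-task", "my-key", { scope: "global" });
```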
### Resetting attempt-scoped keys

Keys created with `"attempt"` scope include both the parent run ID and attempt number. When resetting from outside a task, you must provide both:
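A sketch of what that call plausibly looks like; the option name carrying the attempt number is an assumption here, so check the SDK reference:

```ts
import { idempotencyKeys } from "@trigger.dev/sdk";

await idempotencyKeys.reset("my-task", "my-key", {
  scope: "attempt",
  parentRunId: "run_abc123",
  attemptNumber: 1, // assumed option name for the attempt number
});
```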

docs/limits.mdx

Lines changed: 8 additions & 0 deletions
@@ -55,6 +55,14 @@ If you add them [dynamically using code](/management/schedules/create) make sure

If you're creating schedules for your users, you will definitely need to request more schedules from us.

## Projects

| Pricing tier | Limit               |
| :----------- | :------------------ |
| All tiers    | 10 per organization |

Each project receives its own concurrency allocation. If you need to support multiple tenants with the same codebase but different environment variables, see the [Multi-tenant applications](/deploy-environment-variables#multi-tenant-applications) section for a recommended workaround.

## Preview branches

| Pricing tier | Limit |

docs/self-hosting/docker.mdx

Lines changed: 2 additions & 0 deletions
@@ -354,6 +354,8 @@ TRIGGER_IMAGE_TAG=v4.0.0

docker compose logs -f webapp
```

- **Deploy fails with `ERROR: schema "graphile_worker" does not exist`.** This error occurs when Graphile Worker migrations fail to run during webapp startup. Check the webapp logs for certificate-related errors like `self-signed certificate in certificate chain`. This is often caused by PostgreSQL SSL certificate issues when using an external PostgreSQL instance with SSL enabled. Ensure that both the webapp and supervisor containers have access to the same CA certificate used by your PostgreSQL instance. You can configure this by mounting the certificate file and setting the `NODE_EXTRA_CA_CERTS` environment variable to point to the certificate path, as shown in the sketch below. Once the certificate issue is resolved, the migrations will complete and create the required `graphile_worker` schema.
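A minimal compose sketch, assuming hypothetical service names and a CA certificate at `./certs/postgres-ca.crt`; adapt both to your deployment:

```yaml
services:
  webapp:
    volumes:
      # Mount the CA certificate used by your PostgreSQL instance
      - ./certs/postgres-ca.crt:/certs/postgres-ca.crt:ro
    environment:
      # Tell Node.js to trust this CA when opening TLS connections
      NODE_EXTRA_CA_CERTS: /certs/postgres-ca.crt
  supervisor:
    volumes:
      - ./certs/postgres-ca.crt:/certs/postgres-ca.crt:ro
    environment:
      NODE_EXTRA_CA_CERTS: /certs/postgres-ca.crt
```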
## CLI usage

This section highlights some of the CLI commands and options that are useful when self-hosting. Please check the [CLI reference](/cli-introduction) for more in-depth documentation.

docs/self-hosting/kubernetes.mdx

Lines changed: 1 addition & 0 deletions
@@ -555,6 +555,7 @@ kubectl delete namespace trigger

- **Deploy fails**: Verify registry access and authentication
- **Pods stuck pending**: Describe the pod and check the events
- **Worker token issues**: Check webapp and supervisor logs for errors
- **Deploy fails with `ERROR: schema "graphile_worker" does not exist`**: See the [Docker troubleshooting](/self-hosting/docker#troubleshooting) section for details on resolving PostgreSQL SSL certificate issues that prevent Graphile Worker migrations.

See the [Docker troubleshooting](/self-hosting/docker#troubleshooting) section for more information.

docs/troubleshooting.mdx

Lines changed: 44 additions & 0 deletions
@@ -151,6 +151,10 @@ Your code is deployed separately from the rest of your app(s) so you need to mak

Prisma uses code generation to create the client from your schema file. This means you need to add a bit of config so we can generate this file before your tasks run: [Read the guide](/config/extensions/prismaExtension).
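A sketch of the typical `trigger.config.ts` setup; import paths can vary by SDK version, and the schema path assumes the default Prisma layout:

```ts
import { defineConfig } from "@trigger.dev/sdk";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";

export default defineConfig({
  project: "<project-ref>",
  build: {
    extensions: [
      // Generates the Prisma client from your schema before tasks run
      prismaExtension({ schema: "prisma/schema.prisma" }),
    ],
  },
});
```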
### Database connection requires IPv4

Trigger.dev currently only supports IPv4 database connections. If your database provider only provides an IPv6 connection string, you'll need to use an IPv4 address instead. [Upvote IPv6 support](https://triggerdev.featurebase.app/p/support-ipv6-database-connections).

### `Parallel waits are not supported`

In the current version, you can't perform more than one "wait" in parallel.
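A sketch of the anti-pattern and one supported alternative, assuming a hypothetical `childTask` whose payload is a string:

```ts
// ❌ Parallel waits: more than one wait in flight at once is not supported
await Promise.all([
  childTask.triggerAndWait("a"),
  childTask.triggerAndWait("b"),
]);

// ✅ Batch the waits into a single call instead
await childTask.batchTriggerAndWait([{ payload: "a" }, { payload: "b" }]);
```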
@@ -171,12 +175,52 @@ The most common situation this happens is if you're using `Promise.all` around s
Make sure that you always use `await` when you call `trigger`, `triggerAndWait`, `batchTrigger`, and `batchTriggerAndWait`. If you don't, it's likely the task(s) won't be triggered, because the calling process can be terminated before the network calls are sent.
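For example, a minimal before-and-after sketch with a hypothetical `myTask`:

```ts
// ❌ Not awaited: the process may exit before the network request is sent
myTask.trigger({ foo: "bar" });

// ✅ Awaited: the trigger request completes before execution continues
await myTask.trigger({ foo: "bar" });
```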
### `COULD_NOT_FIND_EXECUTOR`

If you see a `COULD_NOT_FIND_EXECUTOR` error when triggering a task, it may be caused by dynamically importing the child task. When tasks are dynamically imported, the executor may not be properly registered.

Use a top-level import instead:

```ts
import { task } from "@trigger.dev/sdk";
import { myChildTask } from "~/trigger/my-child-task";

export const myTask = task({
  id: "my-task",
  run: async (payload: string) => {
    await myChildTask.trigger("data");
  },
});
```

Alternatively, use `tasks.trigger()` or `batch.triggerAndWait()` without importing the task:

```ts
import { batch, task } from "@trigger.dev/sdk";

export const myTask = task({
  id: "my-task",
  run: async (payload: string) => {
    await batch.triggerAndWait([{ id: "my-child-task", payload: "data" }]);
  },
});
```
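A matching sketch for `tasks.trigger()`, again assuming the hypothetical `my-child-task`:

```ts
import { task, tasks } from "@trigger.dev/sdk";

export const myTask = task({
  id: "my-task",
  run: async (payload: string) => {
    // Trigger by task ID string; no import of the child task needed
    await tasks.trigger("my-child-task", "data");
  },
});
```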
### Rate limit exceeded

<RateLimitHitUseBatchTrigger />

View the [rate limits](/limits) page for more information.

### Runs waiting in queue due to concurrency limits

If runs are staying in the `QUEUED` state for extended periods, check your concurrency usage in the dashboard. Review how many runs are `EXECUTING` or `DEQUEUED` (these count against limits) and check whether any runs are stuck in the `EXECUTING` state, as they may be blocking new runs.

**Solutions:**

- **Increase concurrency limits** - If you're on a paid plan, increase your environment concurrency limit via the dashboard
- **Review queue concurrency limits** - Check if individual queues have restrictive `concurrencyLimit` settings (see the sketch after this list)
- **Check for stuck runs** - See if stalled runs are blocking new executions
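A minimal sketch of a queue-level limit, assuming a hypothetical queue named `my-queue` and the SDK's `queue()` helper:

```ts
import { queue } from "@trigger.dev/sdk";

// Only 10 runs from this queue execute concurrently; raise this if the queue is the bottleneck
export const myQueue = queue({ name: "my-queue", concurrencyLimit: 10 });
```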
### `Crypto is not defined`

This can happen in different situations, for example when using plain strings as idempotency keys. Support for `Crypto` without a special flag was added in Node `v19.0.0`. You will have to upgrade Node - we recommend even-numbered major releases, e.g. `v20` or `v22`. Alternatively, you can switch from plain strings to the `idempotencyKeys.create` SDK function. [Read the guide](/idempotency).
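For example, a minimal sketch with a hypothetical `myTask`:

```ts
import { idempotencyKeys } from "@trigger.dev/sdk";

// Creates a properly derived key instead of a plain string
const idempotencyKey = await idempotencyKeys.create("my-key");
await myTask.trigger({ foo: "bar" }, { idempotencyKey });
```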
