---
title = "App Store Rejection: 'TLS error' in IPv6-only environments"
topics = [ "platform" ]
keywords = []
---

If your App Store submission is rejected with a 'TLS error' when tested in an IPv6-only environment, often citing a lack of AAAA records, it typically indicates application-level issues rather than a Supabase configuration problem.

## Why does this happen?

Supabase projects are designed for compatibility with IPv6-only NAT64/DNS64 environments through automatic IPv4-to-IPv6 translation. This means explicit AAAA records are not required for your `*.supabase.co` domain. The 'TLS error' usually points to how the application handles networking requests, which can interfere with this automatic translation.

## How to resolve this issue

- Use hostnames, not IP addresses: reference `project-ref.supabase.co` everywhere in your code. See [Supporting IPv6 DNS64/NAT64 Networks](https://developer.apple.com/documentation/network/supporting_ipv6_dns64_nat64_networks)
- Use high-level networking APIs like `URLSession` that handle IPv6 automatically. See [`URLSession` Documentation](https://developer.apple.com/documentation/foundation/urlsession)
- Review your App Transport Security settings. See [Preventing Insecure Network Connections](https://developer.apple.com/documentation/security/preventing_insecure_network_connections)
- Test your app in an IPv6-only environment using Apple's Network Link Conditioner. See [Testing for IPv6 DNS64/NAT64 Compatibility](https://developer.apple.com/library/archive/documentation/NetworkingInternetWeb/Conceptual/NetworkingOverview/UnderstandingandPreparingfortheIPv6Transition/UnderstandingandPreparingfortheIPv6Transition.html#//apple_ref/doc/uid/TP40010220-CH213-SW1)
---
title = "Auth Hooks: 'Invalid payload' when anonymous users attempt phone changes"
topics = [ "auth", "cli" ]
keywords = []
[[errors]]
http_status_code = 500
message = "Invalid payload sent to hook"

---

An 'Invalid payload sent to hook' error (500) occurs in Auth hooks when the payload includes `new_phone` for an anonymous user.

## Why does this happen?

This error arises because anonymous users do not have an existing phone number to modify. Client application logic attempting a `phone_change` for such users results in an invalid operation. The `new_phone` field should only be present during a `phone_change` flow initiated by an _authenticated_ user.

## How to avoid this issue

Refine your client application logic to prevent this incorrect payload structure:

- Differentiate phone update and login flows for anonymous users from authenticated users.
- Ensure `new_phone` is only transmitted when an authenticated user initiates a `phone_change` flow.
- Implement distinct handling for anonymous user updates to avoid sending `new_phone` in the payload.
---
title = "Autovacuum Stalled Due to Inactive Replication Slot"
topics = [ "database" ]
keywords = []
---

If `supabase inspect db vacuum-stats` reports "Expect autovacuum? yes" for your tables, but no autovacuum activity has occurred for an extended period and database RAM usage keeps climbing, the autovacuum process is likely stalled. One common cause of a stalled autovacuum is an inactive replication slot, which this guide covers.

## Why does this happen?

Replication slots (logical or physical) tell Postgres "don't remove WAL or older transaction state before this point" because a consumer or replica might still need those WAL records or visibility information. An inactive slot never advances that point, so autovacuum slows down, does more work, or appears stalled: it cannot progress past the old snapshot anchored by the slot. This prevents the cleanup of dead tuples, leading to database bloat and increased resource consumption.

## How to resolve this issue

1. **Identify inactive replication slots:**
Execute the following query in your [SQL editor](/dashboard/project/_/sql/new) to list replication slots that are currently inactive:
```sql
select slot_name, slot_type, active, active_pid from pg_replication_slots where active is false;
```
2. **Drop inactive slot(s):**
For each `slot_name` identified as `active = f` (inactive), execute the following command. Replace `'slot_name'` with the actual name of the inactive slot (e.g., `'example_slot'`):
```sql
select pg_drop_replication_slot('slot_name');
```
3. **Confirm removal:**
Re-run the identification query from step 1 to verify that the inactive slot(s) have been successfully removed. Once removed, autovacuum should resume normal operation.
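
Before dropping a slot, it can help to see how much WAL each one is holding back; slots retaining a large amount are the ones stalling cleanup. A minimal diagnostic sketch using the standard `pg_replication_slots` view:

```sql
-- Show each slot's activity status and the WAL it forces Postgres to retain.
select
  slot_name,
  slot_type,
  active,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as retained_wal
from pg_replication_slots
order by pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) desc nulls last;
```

A large `retained_wal` value on an inactive slot is a strong signal that this slot is the one anchoring the old snapshot.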
---
title = "'Cloudflare Origin Error 1016' on Custom Domain"
topics = [ "platform" ]
keywords = []
[[errors]]
code = "1016"
message = "Cloudflare Origin Error"
---

A 'Cloudflare Origin Error 1016' when accessing a custom domain URL indicates an SSL certificate validation failure. This error typically occurs because the custom domain's SSL certificate has expired, leading Cloudflare to deactivate routing to the origin server.

## How to resolve this issue

1. Navigate to your project's [custom domain settings](/dashboard/project/_/settings/custom-domain).
2. Initiate a DNS record re-verification. This action prompts an attempt to renew the SSL certificate.
3. If the error persists after re-verification, remove the custom domain configuration from your project.
4. Re-add the custom domain configuration. Ensure all DNS records are correctly established as instructed by the dashboard. This process forces a hard reset and triggers a new certificate request.
---
title = "Error: 'invalid byte sequence for encoding \"UTF8\": 0x00' when accessing Triggers or Webhooks"
topics = [ "cli", "database" ]
keywords = []
---

If you encounter the error: `'invalid byte sequence for encoding "UTF8": 0x00'` when attempting to access your project's [Triggers](/dashboard/project/_/database/triggers) or [Webhooks](/dashboard/project/_/database/webhooks) via the dashboard, it indicates that the `standard_conforming_strings` database setting is currently `off`.

## Why does this happen?

This setting, when `off`, can cause issues with how certain character sequences are interpreted by Postgres, leading to errors in dashboard queries that expect UTF8-compliant strings.

## How to resolve this issue

1. Connect to your database instance using the [SQL Editor](/dashboard/project/_/sql/new) in the Dashboard or a client like `psql`.
2. Execute the following SQL command:
```sql
ALTER DATABASE postgres SET standard_conforming_strings = on;
```
3. Allow a few minutes for this setting to take effect, as existing pooled connections might retain the previous configuration. If the error persists after this period, a database restart may be necessary.
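
Once the setting has propagated, you can verify it from a fresh connection:

```sql
-- Should return "on" in a new session after the ALTER DATABASE takes effect
show standard_conforming_strings;
```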
---
title = "Get detailed Storage metrics with the AWS CLI"
topics = [ "cli", "storage", "studio" ]
keywords = []
---

Supabase Studio primarily lists the current objects within your buckets. You can use standard S3 tooling such as the AWS CLI to review your Supabase project's Storage usage, or perform operations on the bucket contents.

## How to get detailed storage metrics

This guide uses the official [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). Install it locally before following the steps below.

1. **Retrieve credentials:** Generate an Access Key pair (Access key ID and Secret access key) in your project's [Storage Configuration](/dashboard/project/_/storage/s3). Note that the Secret access key is only shown when the Access Key is created.
2. **Identify the endpoint and region:** Find your project's Storage endpoint and region under the Connection section of the same [Storage Configuration](/dashboard/project/_/storage/s3) page.
3. **Configure AWS CLI:** You can configure access credentials with environment variables on your local terminal:
```bash
export AWS_ACCESS_KEY_ID='<access-key-id>'
export AWS_SECRET_ACCESS_KEY='<secret-access-key>'
export AWS_DEFAULT_REGION='<storage-region>'
```
4. **List Buckets:** Confirm your setup by listing your project's buckets:
```bash
aws s3api list-buckets --endpoint-url <storage-endpoint-url>
```
5. **Review Bucket contents and size:** Get a detailed view and sum of a specific bucket's contents:
```bash
aws s3 ls s3://<example-bucket>/ --endpoint-url <storage-endpoint-url> --recursive --human-readable --summarize
```

In the commands above, replace `<example-bucket>` and `<storage-endpoint-url>` with your actual bucket name and Storage endpoint.
---
title = "How can I revoke execution of a Postgres function?"
github_url = "https://github.com/orgs/supabase/discussions/17606"
date_created = "2023-09-21T03:04:41+00:00"
topics = [ "database", "functions" ]
---
---
title = "Manually created databases are not visible in the Supabase Dashboard"
topics = [ "auth", "cli", "database", "functions", "platform", "storage" ]
keywords = []
---

If you've manually created an additional database within your Supabase project, such as `example_database`, you might observe that it's accessible via external database tools but is not visible in the Supabase Dashboard. This guide explains the underlying reasons for this behavior and how Supabase is designed to handle databases.

---

## Key concepts

Before diving into the problem, let's define some key terms:

- **What is a Postgres Cluster?**
In Postgres terminology, a "cluster" refers to a collection of databases managed by a single Postgres server instance. This single instance can host multiple independent databases, each with its own set of tables, users, and permissions. Every Supabase project runs on top of a full Postgres cluster.
- **What is a Supabase Project?**
A Supabase project is an integrated platform that includes a dedicated Postgres database, authentication services, storage, real-time capabilities, and more. Each project is configured to interact seamlessly with its primary database.
- **What is the Supabase Dashboard?**
The Supabase Dashboard is a web-based interface that provides a graphical way to manage your project's database schema, data, authentication rules, storage buckets, functions, and other Supabase services.
- **What is PostgREST?**
PostgREST is a standalone web server that automatically turns your Postgres database directly into a RESTful API. It generates API endpoints based on your database schema, allowing you to interact with your data via HTTP requests without writing custom backend code. Supabase leverages PostgREST to provide its powerful API layer.

---

## Understanding the problem: Multiple databases in Supabase

The core of this behavior stems from the distinction between how Postgres manages databases and how Supabase integrates with them.

- **Postgres's Flexibility:** As a standard Postgres cluster, your Supabase backend is inherently capable of hosting multiple databases. You can connect to your project and manually create additional databases, such as `example_database` or `another_database`, alongside the default `postgres` database. These manually created databases are fully functional Postgres databases, accessible via external tools like TablePlus, `psql`, or any other Postgres client, provided you use the correct connection parameters (e.g., `user=postgres.[your_project_slug] host=... port=... dbname=example_database`).

- **Supabase's Integrated Design:** While Postgres itself supports multiple databases per cluster, the Supabase platform, its Dashboard, and most of its integrated features (such as PostgREST for APIs, Supabase Auth, and Supabase Storage) are specifically engineered to operate exclusively with the project's **default `postgres` database**.

- **The Disconnect Explained:**
- **Dashboard Visibility:** The Supabase Dashboard is designed to manage and display information related solely to the `postgres` database within your project. Manually created databases (like `example_database`) will therefore not appear in the Dashboard's interface, as it's not configured to interact with them.
- **Service Integration:** Supabase's integrated services are tightly coupled to the `postgres` database. For example, the API layer powered by PostgREST is designed to expose schemas _from the `postgres` database_, not to manage or expose multiple separate databases. This architectural choice simplifies the platform's design, ensures consistent behavior, and allows Supabase to layer its features effectively on top of the `postgres` database. Supporting multiple databases for all integrated services would introduce significant complexity and fundamentally alter Supabase's current model.

---

## Resolution and best practices

To effectively utilize Supabase and its features, consider the following approaches:

- **For Supabase Feature Integration:** If you intend for your data to be managed via the Supabase Dashboard, accessed through auto-generated APIs (PostgREST), or integrated with Supabase's Authentication or Storage services, all your data and schemas **must reside within the project's default `postgres` database**. This is the designated database for full Supabase ecosystem compatibility.

- **When a Truly Separate, Integrated Database is Needed:** If your application architecture requires a logically separate database that also benefits from full Supabase feature integration (Dashboard visibility, APIs, Auth, etc.), the recommended approach is to **create a new Supabase project**. Each new project you create will come with its own dedicated Postgres cluster and its own default `postgres` database, fully integrated with all Supabase services. This ensures that each "database" managed by Supabase has its own isolated environment and complete feature support.

- **Using Manually Created Databases (with caveats):** Creating additional databases within a single Supabase project's Postgres cluster (e.g., `example_database`) is technically possible and accessible via external tools. However, this approach is generally suitable only if:
- You do not need these databases to be visible or managed through the Supabase Dashboard.
- You do not intend to use Supabase's integrated services (like PostgREST, Auth, or Storage) with these specific databases.
- You are comfortable managing these databases entirely through direct Postgres client connections, essentially treating them as standard Postgres databases within the Supabase-provided cluster, but operating outside the Supabase platform's feature set.
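
As a quick sanity check over a direct connection (e.g. `psql`), you can list every database in the cluster; manually created databases appear here even though the Dashboard only surfaces `postgres`:

```sql
-- Lists all non-template databases in the cluster, including manually created ones.
select datname from pg_database where datistemplate = false order by datname;
```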
---
title = "pg_cron launcher crashes with 'duplicate key value violates unique constraint'"
topics = [ "platform" ]
keywords = []
---

The `pg_cron` launcher process crashes approximately every minute, displaying the error `'duplicate key value violates unique constraint "job_run_details_pkey"'`.

## Why does this happen?

This issue occurs when the `cron.runid_seq` sequence is out of sync with the `cron.job_run_details` table. The sequence attempts to generate `runid` values that already exist in the table, leading to a unique key violation. This is typically due to the sequence's last value not being correctly aligned with the maximum `runid` already present in the table.

## How to resolve this

To resolve this, you need to reset the `cron.job_run_details` table. If data preservation is required, ensure you back up its contents before proceeding.

Execute the following SQL command via the [SQL editor](/dashboard/project/_/sql/new):
`TRUNCATE cron.job_run_details;`
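
If you need to preserve the job history, an alternative sketch (not part of the fix above, so treat it as an assumption and back up first) is to realign the sequence with the highest existing `runid` instead of truncating:

```sql
-- Set cron.runid_seq so the next generated runid is one past the current maximum.
-- The third argument 'false' means the next nextval() call returns this exact value.
select setval('cron.runid_seq', coalesce(max(runid), 0) + 1, false)
from cron.job_run_details;
```

Either way, the launcher should stop crashing once newly generated `runid` values no longer collide with existing rows.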