Description
Hey guys 👋 — first off, loving the product!
I had a question that evolved into a potential feature request:
Is it currently possible to provide the S3 endpoint for pgBackRest WAL backups via a Kubernetes Secret (e.g., as a secretRef) rather than specifying it directly in the PostgresCluster manifest?
After some helpful discussion with @andrewlecuyer, it became clear that:
- Removing the s3 section from the PostgresCluster spec and instead defining the S3 settings (repo1-s3-endpoint, etc.) in a Secret is partially supported (see the example Secret below).
- However, once the s3 section is removed, the configMap that would normally be generated for pgBackRest is never created, so the related pods hang waiting for a missing volume mount.
So creating the configMap manually isn't a workable option either.
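For reference, here is a minimal sketch of the kind of Secret involved. The key name s3.conf and all values below are illustrative placeholders, not our actual configuration; the idea is that the file is projected into the pgBackRest configuration directory alongside the generated config:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: crunchy
stringData:
  # Key name and values are illustrative placeholders.
  s3.conf: |
    [global]
    repo1-s3-endpoint=<account-id>.r2.cloudflarestorage.com
    repo1-s3-key=<access-key-id>
    repo1-s3-key-secret=<secret-access-key>
```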
It would be great if we could keep the S3 endpoint private, especially when it contains sensitive identifiers like a Cloudflare account ID. While it’s not a critical security leak, it’s still information we’d prefer not to expose in plaintext manifests.
Ideally, patching the schema to allow a secretRef to provide those values would be the solution.
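As a rough illustration of what I mean (the endpointSecretRef field below is hypothetical and does not exist in the current CRD):

```yaml
backups:
  pgbackrest:
    repos:
      - name: repo1
        s3:
          bucket: my-bucket
          region: auto
          # Hypothetical field: resolve the endpoint from a Secret
          # instead of embedding it in the manifest.
          endpointSecretRef:
            name: crunchy
            key: repo1-s3-endpoint
```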
Would it be possible to enhance PostgresCluster to support this workflow officially?
Thanks again 🙏
Environment
Please provide the following details:
- Platform: Kubernetes
- Platform Version: v1.32.2
- PGO Image Tag: ubi8-15.10-2-v0.3.0
- Postgres Version: 15
- Storage: rook-ceph
Steps to Reproduce
Create a minimal PostgresCluster without the s3 section and provide the S3 settings via a Secret referenced under configuration instead:
```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: ${APP}
spec:
  postgresVersion: 15
  instances:
    [...]
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: "crunchy"
      global:
        repo1-block: "y"
        [...]
      repos:
        - name: repo1
```
EXPECTED
- The operator consumes the Secret and still generates the pgBackRest configMap, so the pods can start.
ACTUAL
- The pods hang because the configMap is never created.