Description
Please, answer some short questions which should help us to understand your problem / question better?

- Which image of the operator are you using? image: registry.opensource.zalan.do/acid/postgres-operator:v1.5.0
- Where do you run it - cloud or metal? Kubernetes or OpenShift? AWS, OpenShift 4.5
- Are you running Postgres Operator in production? no
- Type of issue? question
Hello team, I can't figure out how to enable WAL archiving to S3 and haven't found straightforward documentation for it.
Here are my steps:
I added the following lines to my operatorconfigurations.acid.zalan.do resource:
configuration:
  aws_or_gcp:
    aws_endpoint: s3.eu-west-1.amazonaws.com
    aws_region: eu-west-1
    wal_s3_bucket: zalando-bucketname
Here is the full output of my OperatorConfiguration:
apiVersion: acid.zalan.do/v1
configuration:
  aws_or_gcp:
    aws_endpoint: s3.eu-west-1.amazonaws.com
    aws_region: eu-west-1
    wal_s3_bucket: zalando-bucketname
  connection_pooler:
    connection_pooler_default_cpu_limit: "1"
    connection_pooler_default_cpu_request: 500m
    connection_pooler_default_memory_limit: 100Mi
    connection_pooler_default_memory_request: 100Mi
    connection_pooler_image: registry.opensource.zalan.do/acid/pgbouncer:master-7
    connection_pooler_mode: transaction
    connection_pooler_number_of_instances: 2
  debug:
    debug_logging: true
    enable_database_access: true
  docker_image: registry.opensource.zalan.do/acid/spilo-12:1.6-p3
  etcd_host: ""
  kubernetes:
    cluster_domain: cluster.local
    cluster_labels:
      application: spilo
    cluster_name_label: cluster-name
    enable_init_containers: true
    enable_pod_antiaffinity: false
    enable_pod_disruption_budget: true
    enable_sidecars: true
    master_pod_move_timeout: 20m
    oauth_token_secret_name: postgresql-operator
    pdb_name_format: postgres-{cluster}-pdb
    pod_antiaffinity_topology_key: kubernetes.io/hostname
    pod_environment_configmap: zalando-operator/env
    pod_management_policy: ordered_ready
    pod_role_label: spilo-role
    pod_service_account_name: postgres-pod
    pod_terminate_grace_period: 1m
    secret_name_template: '{username}.{cluster}.credentials.{tprkind}.{tprgroup}'
    spilo_privileged: false
  kubernetes_use_configmaps: true
  load_balancer:
    enable_master_load_balancer: false
    enable_replica_load_balancer: false
    master_dns_name_format: '{cluster}.{team}.{hostedzone}'
    replica_dns_name_format: '{cluster}-repl.{team}.{hostedzone}'
  logging_rest_api:
    api_port: 8080
    cluster_history_entries: 1000
    ring_log_lines: 100
  logical_backup:
    logical_backup_docker_image: registry.opensource.zalan.do/acid/logical-backup:master-58
    logical_backup_s3_access_key_id: ******************
    logical_backup_s3_bucket: zalando-backup-bucketname
    logical_backup_s3_secret_access_key: *********************************
    logical_backup_s3_sse: AES256
    logical_backup_schedule: '*/2 * * * *'
  max_instances: -1
  min_instances: -1
  postgres_pod_resources:
    default_cpu_limit: "1"
    default_cpu_request: 100m
    default_memory_limit: 500Mi
    default_memory_request: 100Mi
  repair_period: 5m
  resync_period: 30m
  teams_api:
    enable_team_superuser: false
    enable_teams_api: false
    pam_role_name: zalandos
    protected_role_names:
    - admin
    team_admin_role: admin
    team_api_role_configuration:
      log_statement: all
  timeouts:
    pod_deletion_wait_timeout: 10m
    pod_label_wait_timeout: 10m
    ready_wait_interval: 4s
    ready_wait_timeout: 30s
    resource_check_interval: 3s
    resource_check_timeout: 10m
  users:
    replication_username: standby
    super_username: postgres
  workers: 4
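My understanding (please correct me if this is wrong) is that wal_s3_bucket should be turned by the operator into WAL-related environment variables on the database (Spilo) pods, so I assume a check like the following should show them; namespace and pod name below are just placeholders for my cluster:

# placeholders: adjust namespace and pod name to your Postgres cluster
kubectl -n my-namespace exec acid-minimal-cluster-0 -- env | grep -i wal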
I created the ConfigMap referenced by pod_environment_configmap above (env in the zalando-operator namespace) with the following content:
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: **********************
  AWS_ENDPOINT: s3.eu-west-1.amazonaws.com
  AWS_INSTANCE_PROFILE: "true"
  AWS_REGION: eu-west-1
  AWS_SECRET_ACCESS_KEY: ********************************
  BACKUP_SCHEDULE: '*/2 * * * *'
  USE_WALG_BACKUP: "true"
kind: ConfigMap
metadata:
  creationTimestamp: "2020-08-26T21:02:41Z"
  managedFields:
  - apiVersion: v1
    operation: Update
    time: "2020-09-06T21:04:31Z"
  name: env
  namespace: zalando-operator
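To rule out a naming mismatch, I verify that the ConfigMap exists exactly where pod_environment_configmap points (namespace and name taken from the configuration above):

kubectl -n zalando-operator get configmap env -o yaml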
All of these variables have been applied and show up inside the database pod:
# env|grep AWS
AWS_INSTANCE_PROFILE=true
AWS_ACCESS_KEY_ID=*******************
AWS_REGION=eu-west-1
AWS_ENDPOINT=s3.eu-west-1.amazonaws.com
AWS_SECRET_ACCESS_KEY=***************
# env|grep BACKUP
USE_WALG_BACKUP=true
BACKUP_SCHEDULE=*/2 * * * *
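Is checking WAL-G directly from inside the database pod the right way to verify archiving? I'm assuming the standard Spilo layout here, where the WAL credentials are kept under /run/etc/wal-e.d/env:

# run inside the database pod; assumes the usual Spilo envdir location
su postgres
envdir /run/etc/wal-e.d/env wal-g backup-list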
But... nothing happens: the cluster is up and running, yet I don't see any WAL archives in the S3 bucket.
I probably missed something or did something incorrectly. If there is documentation covering this, please share it with me; I haven't been able to find any.
Many thanks,
Yaroslav