Allow pod environment variables to also be sourced from a secret #946


Merged (8 commits) on Jul 30, 2020
2 changes: 2 additions & 0 deletions charts/postgres-operator/crds/operatorconfigurations.yaml
@@ -149,6 +149,8 @@ spec:
type: string
pod_environment_configmap:
type: string
pod_environment_secret:
type: string
pod_management_policy:
type: string
enum:
2 changes: 2 additions & 0 deletions charts/postgres-operator/values-crd.yaml
@@ -104,6 +104,8 @@ configKubernetes:
pod_antiaffinity_topology_key: "kubernetes.io/hostname"
# namespaced name of the ConfigMap with environment variables to populate on every pod
# pod_environment_configmap: "default/my-custom-config"
# name of the Secret (in cluster namespace) with environment variables to populate on every pod
# pod_environment_secret: "my-custom-secret"

# specify the pod management policy of stateful sets of Postgres clusters
pod_management_policy: "ordered_ready"
2 changes: 2 additions & 0 deletions charts/postgres-operator/values.yaml
@@ -95,6 +95,8 @@ configKubernetes:
pod_antiaffinity_topology_key: "kubernetes.io/hostname"
# namespaced name of the ConfigMap with environment variables to populate on every pod
# pod_environment_configmap: "default/my-custom-config"
# name of the Secret (in cluster namespace) with environment variables to populate on every pod
# pod_environment_secret: "my-custom-secret"

# specify the pod management policy of stateful sets of Postgres clusters
pod_management_policy: "ordered_ready"
64 changes: 59 additions & 5 deletions docs/administrator.md
@@ -319,11 +319,18 @@ spec:


## Custom Pod Environment Variables

It is possible to configure a ConfigMap as well as a Secret which are used by
the Postgres pods as an additional provider for environment variables. One use
case is to customize the Spilo image and configure it with environment
variables. Another use case is to provide custom cloud provider or backup
settings.

In general, the operator gives preference to the globally configured variables
so that custom ones cannot interfere with core functionality. Variables with a
`WAL_` or `LOG_` prefix can be overwritten, though, to allow backup and log
shipping to be configured differently.
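The "first definition wins" behavior behind this rule can be sketched with a minimal, hypothetical deduplication function (simplified local types, not the operator's actual `deduplicateEnvVars`): custom variables are placed ahead of the `WAL_`/`LOG_` globals in the merged list, so their values survive deduplication.

```go
package main

import (
	"fmt"
	"strings"
)

// envVar is a simplified stand-in for Kubernetes' v1.EnvVar.
type envVar struct {
	Name  string
	Value string
}

// dedupe keeps the first occurrence of each variable name; later
// duplicates are dropped, which is how "first definition wins" works.
func dedupe(input []envVar) []envVar {
	seen := map[string]bool{}
	result := make([]envVar, 0, len(input))
	for _, v := range input {
		if seen[v.Name] {
			// dropping a later WAL_/LOG_ duplicate means a custom
			// value already overrode the operator's global default
			if strings.HasPrefix(v.Name, "WAL_") || strings.HasPrefix(v.Name, "LOG_") {
				fmt.Printf("global %q overridden by custom value\n", v.Name)
			}
			continue
		}
		seen[v.Name] = true
		result = append(result, v)
	}
	return result
}

func main() {
	// custom variables come first, the WAL_/LOG_ globals afterwards
	merged := dedupe([]envVar{
		{"WAL_S3_BUCKET", "my-custom-bucket"}, // from pod_environment_secret
		{"WAL_S3_BUCKET", "operator-default"}, // global operator config
	})
	fmt.Println(merged[0].Value)
}
```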


### Via ConfigMap

The ConfigMap with the additional settings is referenced in the operator's main
configuration. A namespace can be specified along with the name. If left out,
the configured default namespace of your K8s client will be used; if the
ConfigMap is not found there, the Postgres cluster's namespace is taken when it
differs:
@@ -365,7 +372,54 @@ data:
MY_CUSTOM_VAR: value
```

The key-value pairs of the ConfigMap are then added as environment variables to
the Postgres StatefulSet/pods.
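The mapping is direct: each key in the ConfigMap's `data` section becomes one plain-value environment variable. A minimal sketch of that conversion, using simplified local types rather than the operator's exact code:

```go
package main

import (
	"fmt"
	"sort"
)

// envVar is a simplified stand-in for Kubernetes' v1.EnvVar.
type envVar struct {
	Name  string
	Value string
}

// configMapToEnvVars turns the data section of a ConfigMap into a
// sorted list of plain-value environment variables.
func configMapToEnvVars(data map[string]string) []envVar {
	vars := make([]envVar, 0, len(data))
	for k, v := range data {
		vars = append(vars, envVar{Name: k, Value: v})
	}
	// map iteration order is random, so sort for a stable pod spec
	sort.Slice(vars, func(i, j int) bool { return vars[i].Name < vars[j].Name })
	return vars
}

func main() {
	vars := configMapToEnvVars(map[string]string{"MY_CUSTOM_VAR": "value"})
	fmt.Println(vars[0].Name, vars[0].Value)
}
```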


### Via Secret

The Secret with the additional variables is referenced in the operator's main
configuration. To protect the values from being exposed in the pod spec, each
key is referenced as a `secretKeyRef`. Note that this requires the Secret to
live in the same namespace as the Postgres pods.

**postgres-operator ConfigMap**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-operator
data:
# referencing secret with custom environment variables
pod_environment_secret: postgres-pod-secrets
```

**OperatorConfiguration**

```yaml
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
name: postgresql-operator-configuration
configuration:
kubernetes:
# referencing secret with custom environment variables
pod_environment_secret: postgres-pod-secrets
```

**referenced Secret `postgres-pod-secrets`**

```yaml
apiVersion: v1
kind: Secret
metadata:
name: postgres-pod-secrets
namespace: default
data:
MY_CUSTOM_VAR: dmFsdWU=
```

All key-value pairs of the Secret are then accessible as environment variables
in the Postgres StatefulSet/pods.
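Unlike the ConfigMap case, the variables generated from a Secret carry only a reference: each env var points at its key via `secretKeyRef`, so the decoded value never appears in the StatefulSet manifest. A rough sketch with simplified local types (the real operator builds `v1.EnvVarSource`/`v1.SecretKeySelector` values):

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified stand-ins for the Kubernetes core/v1 selector types.
type secretKeySelector struct {
	SecretName string
	Key        string
}

type envVar struct {
	Name      string
	SecretRef *secretKeySelector // set instead of a literal value
}

// secretKeysToEnvVars builds one secretKeyRef-style env var per key of
// the named Secret; the values themselves are never copied into the spec.
func secretKeysToEnvVars(secretName string, keys []string) []envVar {
	sort.Strings(keys) // stable order for the generated spec
	vars := make([]envVar, 0, len(keys))
	for _, k := range keys {
		vars = append(vars, envVar{
			Name:      k,
			SecretRef: &secretKeySelector{SecretName: secretName, Key: k},
		})
	}
	return vars
}

func main() {
	vars := secretKeysToEnvVars("postgres-pod-secrets", []string{"MY_CUSTOM_VAR"})
	fmt.Printf("%s -> secretKeyRef %s/%s\n",
		vars[0].Name, vars[0].SecretRef.SecretName, vars[0].SecretRef.Key)
}
```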

## Limiting the number of min and max instances in clusters
1 change: 1 addition & 0 deletions manifests/configmap.yaml
@@ -74,6 +74,7 @@ data:
# pod_antiaffinity_topology_key: "kubernetes.io/hostname"
pod_deletion_wait_timeout: 10m
# pod_environment_configmap: "default/my-custom-config"
# pod_environment_secret: "my-custom-secret"
pod_label_wait_timeout: 10m
pod_management_policy: "ordered_ready"
pod_role_label: spilo-role
2 changes: 2 additions & 0 deletions manifests/operatorconfiguration.crd.yaml
@@ -145,6 +145,8 @@ spec:
type: string
pod_environment_configmap:
type: string
pod_environment_secret:
type: string
pod_management_policy:
type: string
enum:
1 change: 1 addition & 0 deletions manifests/postgresql-operator-default-configuration.yaml
@@ -49,6 +49,7 @@ configuration:
pdb_name_format: "postgres-{cluster}-pdb"
pod_antiaffinity_topology_key: "kubernetes.io/hostname"
# pod_environment_configmap: "default/my-custom-config"
# pod_environment_secret: "my-custom-secret"
pod_management_policy: "ordered_ready"
# pod_priority_class_name: ""
pod_role_label: spilo-role
3 changes: 3 additions & 0 deletions pkg/apis/acid.zalan.do/v1/crds.go
@@ -942,6 +942,9 @@ var OperatorConfigCRDResourceValidation = apiextv1beta1.CustomResourceValidation
"pod_environment_configmap": {
Type: "string",
},
"pod_environment_secret": {
Type: "string",
},
"pod_management_policy": {
Type: "string",
Enum: []apiextv1beta1.JSON{
1 change: 1 addition & 0 deletions pkg/apis/acid.zalan.do/v1/operator_configuration_type.go
@@ -70,6 +70,7 @@ type KubernetesMetaConfiguration struct {
// TODO: use a proper toleration structure?
PodToleration map[string]string `json:"toleration,omitempty"`
PodEnvironmentConfigMap spec.NamespacedName `json:"pod_environment_configmap,omitempty"`
PodEnvironmentSecret string `json:"pod_environment_secret,omitempty"`
PodPriorityClassName string `json:"pod_priority_class_name,omitempty"`
MasterPodMoveTimeout Duration `json:"master_pod_move_timeout,omitempty"`
EnablePodAntiAffinity bool `json:"enable_pod_antiaffinity,omitempty"`
159 changes: 110 additions & 49 deletions pkg/cluster/k8sres.go
@@ -7,6 +7,7 @@ import (
"path"
"sort"
"strconv"
"strings"

"github.com/sirupsen/logrus"

@@ -20,7 +21,6 @@ import (

acidv1 "github.com/zalando/postgres-operator/pkg/apis/acid.zalan.do/v1"
"github.com/zalando/postgres-operator/pkg/spec"
pkgspec "github.com/zalando/postgres-operator/pkg/spec"
"github.com/zalando/postgres-operator/pkg/util"
"github.com/zalando/postgres-operator/pkg/util/config"
"github.com/zalando/postgres-operator/pkg/util/constants"
@@ -715,28 +715,6 @@ func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration stri
envVars = append(envVars, v1.EnvVar{Name: "SPILO_CONFIGURATION", Value: spiloConfiguration})
}

if c.OpConfig.WALES3Bucket != "" {
envVars = append(envVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.WALGSBucket != "" {
envVars = append(envVars, v1.EnvVar{Name: "WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.GCPCredentials != "" {
envVars = append(envVars, v1.EnvVar{Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
}

if c.OpConfig.LogS3Bucket != "" {
envVars = append(envVars, v1.EnvVar{Name: "LOG_S3_BUCKET", Value: c.OpConfig.LogS3Bucket})
envVars = append(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = append(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.patroniUsesKubernetes() {
envVars = append(envVars, v1.EnvVar{Name: "DCS_ENABLE_KUBERNETES_API", Value: "true"})
} else {
@@ -755,10 +733,34 @@
envVars = append(envVars, c.generateStandbyEnvironment(standbyDescription)...)
}

// add vars taken from pod_environment_configmap and pod_environment_secret
// ahead of the WAL_/LOG_ globals below so that the custom values win during deduplication
if len(customPodEnvVarsList) > 0 {
envVars = append(envVars, customPodEnvVarsList...)
}

if c.OpConfig.WALES3Bucket != "" {
envVars = append(envVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.WALGSBucket != "" {
envVars = append(envVars, v1.EnvVar{Name: "WAL_GS_BUCKET", Value: c.OpConfig.WALGSBucket})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
}

if c.OpConfig.GCPCredentials != "" {
envVars = append(envVars, v1.EnvVar{Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: c.OpConfig.GCPCredentials})
}

if c.OpConfig.LogS3Bucket != "" {
envVars = append(envVars, v1.EnvVar{Name: "LOG_S3_BUCKET", Value: c.OpConfig.LogS3Bucket})
envVars = append(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
envVars = append(envVars, v1.EnvVar{Name: "LOG_BUCKET_SCOPE_PREFIX", Value: ""})
}

return envVars
}

@@ -777,13 +779,81 @@ func deduplicateEnvVars(input []v1.EnvVar, containerName string, logger *logrus.
result = append(result, input[i])
} else if names[va.Name] == 1 {
names[va.Name]++
logger.Warningf("variable %q is defined in %q more than once, the subsequent definitions are ignored",
va.Name, containerName)

// Variables configuring WAL archiving and log shipping may be overridden; only log this as info
if strings.HasPrefix(va.Name, "WAL_") || strings.HasPrefix(va.Name, "LOG_") {
logger.Infof("global variable %q has been overwritten by configmap/secret for container %q",
va.Name, containerName)
} else {
logger.Warningf("variable %q is defined in %q more than once, the subsequent definitions are ignored",
va.Name, containerName)
}
}
}
return result
}

// getPodEnvironmentConfigMapVariables returns the list of environment
// variables the pod receives from the configured ConfigMap.
func (c *Cluster) getPodEnvironmentConfigMapVariables() ([]v1.EnvVar, error) {
configMapPodEnvVarsList := make([]v1.EnvVar, 0)

if c.OpConfig.PodEnvironmentConfigMap.Name == "" {
return configMapPodEnvVarsList, nil
}

cm, err := c.KubeClient.ConfigMaps(c.OpConfig.PodEnvironmentConfigMap.Namespace).Get(
context.TODO(),
c.OpConfig.PodEnvironmentConfigMap.Name,
metav1.GetOptions{})
if err != nil {
// if not found, try again using the cluster's namespace if it's different (old behavior)
if k8sutil.ResourceNotFound(err) && c.Namespace != c.OpConfig.PodEnvironmentConfigMap.Namespace {
cm, err = c.KubeClient.ConfigMaps(c.Namespace).Get(
context.TODO(),
c.OpConfig.PodEnvironmentConfigMap.Name,
metav1.GetOptions{})
}
if err != nil {
return nil, fmt.Errorf("could not read PodEnvironmentConfigMap: %v", err)
}
}
for k, v := range cm.Data {
configMapPodEnvVarsList = append(configMapPodEnvVarsList, v1.EnvVar{Name: k, Value: v})
}
return configMapPodEnvVarsList, nil
}

// getPodEnvironmentSecretVariables returns the list of environment
// variables the pod receives from the configured Secret.
func (c *Cluster) getPodEnvironmentSecretVariables() ([]v1.EnvVar, error) {
secretPodEnvVarsList := make([]v1.EnvVar, 0)

if c.OpConfig.PodEnvironmentSecret == "" {
return secretPodEnvVarsList, nil
}

// the Secret is expected in the cluster's namespace (see docs)
secret, err := c.KubeClient.Secrets(c.Namespace).Get(
context.TODO(),
c.OpConfig.PodEnvironmentSecret,
metav1.GetOptions{})
if err != nil {
return nil, fmt.Errorf("could not read Secret %q: %v", c.OpConfig.PodEnvironmentSecret, err)
}

for k := range secret.Data {
secretPodEnvVarsList = append(secretPodEnvVarsList,
v1.EnvVar{Name: k, ValueFrom: &v1.EnvVarSource{
SecretKeyRef: &v1.SecretKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: c.OpConfig.PodEnvironmentSecret,
},
Key: k,
},
}})
}

return secretPodEnvVarsList, nil
}

func getSidecarContainer(sidecar acidv1.Sidecar, index int, resources *v1.ResourceRequirements) *v1.Container {
name := sidecar.Name
if name == "" {
@@ -943,32 +1013,23 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*appsv1.Statef
initContainers = spec.InitContainers
}

customPodEnvVarsList := make([]v1.EnvVar, 0)
// fetch env vars from custom ConfigMap
configMapEnvVarsList, err := c.getPodEnvironmentConfigMapVariables()
if err != nil {
return nil, err
}

if c.OpConfig.PodEnvironmentConfigMap != (pkgspec.NamespacedName{}) {
var cm *v1.ConfigMap
cm, err = c.KubeClient.ConfigMaps(c.OpConfig.PodEnvironmentConfigMap.Namespace).Get(
context.TODO(),
c.OpConfig.PodEnvironmentConfigMap.Name,
metav1.GetOptions{})
if err != nil {
// if not found, try again using the cluster's namespace if it's different (old behavior)
if k8sutil.ResourceNotFound(err) && c.Namespace != c.OpConfig.PodEnvironmentConfigMap.Namespace {
cm, err = c.KubeClient.ConfigMaps(c.Namespace).Get(
context.TODO(),
c.OpConfig.PodEnvironmentConfigMap.Name,
metav1.GetOptions{})
}
if err != nil {
return nil, fmt.Errorf("could not read PodEnvironmentConfigMap: %v", err)
}
}
for k, v := range cm.Data {
customPodEnvVarsList = append(customPodEnvVarsList, v1.EnvVar{Name: k, Value: v})
}
sort.Slice(customPodEnvVarsList,
func(i, j int) bool { return customPodEnvVarsList[i].Name < customPodEnvVarsList[j].Name })
// fetch env vars from custom Secret
secretEnvVarsList, err := c.getPodEnvironmentSecretVariables()
if err != nil {
return nil, err
}

// concat all custom pod env vars and sort them
customPodEnvVarsList := append(configMapEnvVarsList, secretEnvVarsList...)
sort.Slice(customPodEnvVarsList,
func(i, j int) bool { return customPodEnvVarsList[i].Name < customPodEnvVarsList[j].Name })

if spec.StandbyCluster != nil && spec.StandbyCluster.S3WalPath == "" {
return nil, fmt.Errorf("s3_wal_path is empty for standby cluster")
}