This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/keycloak] Update to 4.5.0.Final #8192

Merged
2 commits merged into helm:master on Oct 10, 2018

Conversation

unguiculus
Member

  • The Docker image has added support for DNS_PING, which is now used
    instead of JDBC_PING (see the sketch below)
  • The StatefulSet is updated to apps/v1
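
For reference, DNS_PING discovery with the 4.5.0 image roughly comes down to pointing the image's JGROUPS_DISCOVERY_* environment variables at a headless service. This is only a sketch, not the chart's actual template output; the protocol/property strings follow the JGroups DNS_PING protocol, and the service FQDN is a placeholder:

keycloak:
  extraEnv: |
    - name: JGROUPS_DISCOVERY_PROTOCOL
      value: "dns.DNS_PING"
    - name: JGROUPS_DISCOVERY_PROPERTIES
      # placeholder FQDN; the chart creates a headless service per release
      value: "dns_query=keycloak-headless.default.svc.cluster.local"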

Signed-off-by: Reinhard Nägele unguiculus@gmail.com

Checklist

  • DCO signed
  • Chart Version bumped
  • Variables are documented in the README.md

@ey-bot ey-bot added the Contribution Allowed If the contributor has signed the DCO or the CNCF CLA (prior to the move to a DCO). label Oct 5, 2018
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 5, 2018
@unguiculus
Member Author

@edclement @axdotl @stormmore @monotek Please review and test.

@axdotl
Contributor

axdotl commented Oct 5, 2018

Unfortunately, 4.5.0 doesn't work for me with MySQL. It seems to be an issue with the Liquibase migration.
I filed an issue: https://issues.jboss.org/browse/KEYCLOAK-8501

@axdotl
Contributor

axdotl commented Oct 5, 2018

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 5, 2018
@unguiculus
Member Author

@axdotl Have you tried a fresh installation? Anyway, if this is a bug in Keycloak itself, I don't think it should block the chart upgrade; I don't see what we could do about it here. I'll try to do some more tests with Postgres.

@axdotl
Contributor

axdotl commented Oct 5, 2018

Yes, I did a fresh install. However, after a couple of failed restarts, a helm del (keeping the database) followed by a helm install brought Keycloak up successfully. But when increasing the replicas to 2, the same error ('Database error during release lock') occurred on keycloak-1.

Not sure whether this should block the chart upgrade, but since it doesn't seem usable in all cases, I have some doubts.

@stormmore

I just got 4.5 working with the 3.4 chart and sent my values to a thread on keycloak-users. I had to modify them quite a bit, but I got KUBE_PING working.

@stormmore

It is worth noting that I had to grant the default namespace serviceaccount get and list permissions on pods. It would be better if the chart created its own serviceaccount, role/clusterrole, and rolebinding/clusterrolebinding to allow the use of KUBE_PING.
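
A minimal sketch of what that could look like (resource names here are placeholders; KUBE_PING only needs read access to pods in the release namespace, and the StatefulSet would then set serviceAccountName to the dedicated serviceaccount instead of relying on default):

# Dedicated serviceaccount for the Keycloak pods
apiVersion: v1
kind: ServiceAccount
metadata:
  name: keycloak
---
# Read-only access to pods, which is all KUBE_PING needs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: keycloak-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: keycloak-pod-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: keycloak-pod-reader
subjects:
- kind: ServiceAccount
  name: keycloak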

@unguiculus
Member Author

@stormmore Any reason why you want to use KUBE_PING over DNS_PING? Support for DNS_PING was added in 4.5.0.Final; the PR for KUBE_PING was closed.

keycloak/keycloak-containers#151
keycloak/keycloak-containers#96

Can you share a link to your KUBE_PING setup?

@unguiculus
Member Author

unguiculus commented Oct 5, 2018

I tested again with Postgres. An update from 4.2.1 to 4.5.0 worked without any issues.

@stormmore

@unguiculus I made that choice since DNS_PING wasn't "supported" by the 3.4 chart OOTB. I fully intend to compare KUBE_PING to DNS_PING.

Here is my values file:

  init:
    image:
      repository: alpine
      tag: 3.7
      pullPolicy: IfNotPresent

  keycloak:
    replicas: 3

    image:
      repository: jboss/keycloak
      tag: 4.5.0.Final
      pullPolicy: IfNotPresent

      ## Optionally specify an array of imagePullSecrets.
      ## Secrets must be manually created in the namespace.
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ##
      pullSecrets: []
      #  - myRegistryKeySecretName

    securityContext:
      runAsUser: 1000
      fsGroup: 1000
      runAsNonRoot: true

    ## The path keycloak will be served from. To serve keycloak from the root path, use two quotes (e.g. "").
    basepath: "auth"

    ## Additional init containers, e. g. for providing custom themes
    extraInitContainers: |-
      - name: pg-isready
        image: "{{ .Values.global.db.image }}:{{ .Values.global.db.tag }}"
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-pg-auth
              key: POSTGRES_PASSWORD
        command:
        - sh
        - -c
        - |
          sleep 10; until pg_isready -h {{ .Release.Name }}-pg -U postgres -d postgres; do
            sleep 1;
          done;
          echo 'PostgreSQL OK ✓'
    ## Additional sidecar containers, e. g. for a database proxy, such as Google's cloudsql-proxy
    extraContainers: |

    ## Custom script that is run before Keycloak is started.
    preStartScript: |
      ln /opt/jboss/tools/docker-entrypoint.sh /opt/jboss/docker-entrypoint.sh
      exec /opt/jboss/docker-entrypoint.sh -b 0.0.0.0
      exit "$?"

    ## Additional arguments to start command e.g. -Dkeycloak.import= to load a realm
    extraArgs: ""

    ## Username for the initial Keycloak admin user
    username: graham.burgess

    ## Password for the initial Keycloak admin user
    ## If not set, a random 10-character password will be used
    password: ""

    ## Allows the specification of additional environment variables for Keycloak
    extraEnv: |
      - name: KEYCLOAK_LOGLEVEL
        value: DEBUG
      # - name: WILDFLY_LOGLEVEL
      #   value: DEBUG
      # - name: CACHE_OWNERS
      #   value: "2"
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: JAVA_OPTS
        value: "-server -Xms128m -Xmx1024m -XX:MetaspaceSize=192M -XX:MaxMetaspaceSize=512m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true"
      # - name: JGROUPS_DISCOVERY_PROTOCOL
      #   value: "kubernetes.KUBE_PING"
      # - name: JGROUPS_DISCOVERY_PROPERTIES
      #   value: "namespace=rc"

    affinity: |
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app:  {{ template "keycloak.name" . }}
                release: "{{ .Release.Name }}"
              matchExpressions:
                - key: role
                  operator: NotIn
                  values:
                    - test
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app:  {{ template "keycloak.name" . }}
                  release: "{{ .Release.Name }}"
                matchExpressions:
                  - key: role
                    operator: NotIn
                    values:
                      - test
              topologyKey: failure-domain.beta.kubernetes.io/zone

    nodeSelector: {}
    tolerations: []

    livenessProbe:
      initialDelaySeconds: 240
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 60
      timeoutSeconds: 1

    resources:
      limits:
        cpu: "1"
        memory: "4096Mi"
      requests:
        cpu: "500m"
        memory: "1024Mi"

    ## WildFly CLI configurations. They all end up in the file 'keycloak.cli' configured in the configmap which is
    ## executed on server startup.
    cli:
      ## Sets the node identifier to the node name (= pod name). Node identifiers have to be unique. They can have a
      ## maximum length of 23 characters. Thus, the chart's fullname template truncates its length accordingly.
      nodeIdentifier: |
        # Makes node identifier unique getting rid of a warning in the logs
        /subsystem=transactions:write-attribute(name=node-identifier, value=${jboss.node.name})

      logging: |
        # Allow log level to be configured via environment variable
        /subsystem=logging/console-handler=CONSOLE:write-attribute(name=level, value=${env.WILDFLY_LOGLEVEL:INFO})
        /subsystem=logging/root-logger=ROOT:write-attribute(name=level, value=${env.WILDFLY_LOGLEVEL:INFO})

        # Log only to console
        /subsystem=logging/root-logger=ROOT:write-attribute(name=handlers, value=[CONSOLE])

      reverseProxy: |
        /socket-binding-group=standard-sockets/socket-binding=proxy-https:add(port=443)
        /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=redirect-socket, value=proxy-https)
        /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding, value=true)

      # discovery: ""
      discovery: |
        /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
        /subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
        /subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
        /subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})

        /subsystem=jgroups/stack=tcp:remove()
        /subsystem=jgroups/stack=tcp:add()
        /subsystem=jgroups/stack=tcp/transport=TCP:add(socket-binding="jgroups-tcp")
        /subsystem=jgroups/stack=tcp/protocol=kubernetes.KUBE_PING: add()
        /subsystem=jgroups/stack=tcp/protocol=kubernetes.KUBE_PING/property=namespace: add(value=${env.POD_NAMESPACE:default})
        /subsystem=jgroups/stack=tcp/protocol=MERGE3:add()
        /subsystem=jgroups/stack=tcp/protocol=FD_SOCK:add()
        /subsystem=jgroups/stack=tcp/protocol=FD_ALL:add()
        /subsystem=jgroups/stack=tcp/protocol=VERIFY_SUSPECT:add()
        /subsystem=jgroups/stack=tcp/protocol=pbcast.NAKACK2:add()
        /subsystem=jgroups/stack=tcp/protocol=UNICAST3:add()
        /subsystem=jgroups/stack=tcp/protocol=pbcast.STABLE:add()
        /subsystem=jgroups/stack=tcp/protocol=pbcast.GMS:add()
        /subsystem=jgroups/stack=tcp/protocol=MFC:add()
        /subsystem=jgroups/stack=tcp/protocol=FRAG2:add()


        /subsystem=jgroups/channel=ee:write-attribute(name=stack, value=tcp)
        /subsystem=jgroups/stack=udp:remove()
        /socket-binding-group=standard-sockets/socket-binding=jgroups-mping:remove()

        /interface=private:write-attribute(name=nic, value=eth0)
        /interface=private:undefine-attribute(name=inet-address)

      postgresql: ""
      # postgresql: |
      #   # Statements must be adapted for PostgreSQL. Additionally, we add a 'creation_timestamp' column.
      #   /subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=initialize_sql:add(value="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, creation_timestamp timestamp NOT NULL, cluster_name varchar(200) NOT NULL, ping_data bytea, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))")
      #   /subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=insert_single_sql:add(value="INSERT INTO JGROUPSPING (own_addr, creation_timestamp, cluster_name, ping_data) values (?, NOW(), ?, ?)")

      # Custom CLI script
      custom: ""


    ## Add additional volumes and mounts, e. g. for custom themes
    extraVolumes: |
    extraVolumeMounts: |

    podDisruptionBudget: {}
      # maxUnavailable: 1
      # minAvailable: 1

    service:
      annotations: {}
      # service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"

      labels: {}
      # key: value

      ## ServiceType
      ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
      type: ClusterIP

      ## Optional static port assignment for service type NodePort.
      # nodePort: 30000

      port: 80

    ## Ingress configuration.
    ## ref: https://kubernetes.io/docs/user-guide/ingress/
    ingress:
      enabled: true
      path: /auth

      annotations: ""

        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
        # ingress.kubernetes.io/affinity: cookie

      ## List of hosts for the ingress
      hosts:
        - rc.domain.com

      ## TLS configuration
      tls: []
      # - hosts:
      #     - keycloak.example.com
      #   secretName: tls-keycloak

    ## Persistence configuration
    persistence:
      # If true, the Postgres chart is deployed
      deployPostgres: false

      # The database vendor. Can be either "postgres", "mysql", "mariadb", or "h2"
      dbVendor: postgres

      ## The following values only apply if "deployPostgres" is set to "false"

      # Specifies an existing secret to be used for the database password
      existingSecret: "auth-pg-auth"

      # The key in the existing secret that stores the password
      existingSecretKey: POSTGRES_PASSWORD

      dbHost: auth-pg
      dbPort: 5432
      dbName: postgres
      dbUser: postgres

      # Only used if no existing secret is specified. In this case a new secret is created
      dbPassword: ""

  test:
    image:
      repository: unguiculus/docker-python3-phantomjs-selenium
      tag: v1
      pullPolicy: IfNotPresent

Here is my RBAC config:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
  name: pod-reader
  namespace: rc
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
  name: pod-reader-binding
  namespace: rc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: rc

In reality, I will probably move to DNS_PING, as I was having problems trying to use environment variables to configure KUBE_PING, but it would probably be good if the chart could support either, even if upstream chose to default to DNS_PING.

@unguiculus
Member Author

In order to use KUBE_PING, you need to install jgroups-kubernetes-common.jar as a module. This is not part of the official Docker image; see the PR I mentioned above that was closed. Your config suggests you are still using the official image, so this probably can't work.

* The Docker image has added support for DNS_PING which is now used
  instead of JDBC_PING
* The StatefulSet is updated to `apps/v1`

Signed-off-by: Reinhard Nägele <unguiculus@gmail.com>
@stormmore

stormmore commented Oct 6, 2018

It is definitely in the image. I am getting a couple of deprecation warnings, and without the RBAC config Keycloak complains about being unable to get pods. So I can only assume the module is there, because it starts up and finds its members.

(Edit: just for clarity, the deprecation warnings are about using KUBERNETES_ instead of OPENSHIFT_ vars)

@unguiculus
Member Author

unguiculus commented Oct 6, 2018

I tested with MariaDB, a fresh installation and an upgrade from 4.2.1. Both worked without any problems.

@axdotl
Contributor

axdotl commented Oct 8, 2018

I've tried it again, but the issue occurs every time.

@axdotl
Contributor

axdotl commented Oct 8, 2018

I did some further testing. It seems not to be a bug in Keycloak itself.

When deploying Keycloak as a Deployment with kubectl, startup works fine (see the manifest below).

When I ran helm install --dry-run and applied the generated StatefulSet via kubectl, startup failed again with 'Database error during release lock'.

Could this have something to do with the changed StatefulSet apiVersion?

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: jboss/keycloak:4.5.0.Final
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: "mysqlproxy.infra"
        - name: DB_PORT
          value: "3306"
        - name: DB_DATABASE
          value: "kc_test_db"
        - name: DB_USER
          value: "kc-test"
        - name: DB_PASSWORD
          value: "somePwd"
        - name: DB_VENDOR
          value: "mysql"
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080

statefulSet.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kc-test-keycloak
  labels:
    app: keycloak
    chart: keycloak-4.0.0
    release: "kc-test"
    heritage: "Tiller"
spec:
  selector:
    matchLabels:
      app: keycloak
      release: "kc-test"
  replicas: 1
  serviceName: kc-test-keycloak-headless
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: keycloak
        release: "kc-test"
    spec:
      securityContext:
        fsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000

      containers:
        - name: keycloak
          image: "jboss/keycloak:4.5.0.Final"
          imagePullPolicy: IfNotPresent
          env:
            - name: KEYCLOAK_USER
              value: admin
            - name: KEYCLOAK_PASSWORD
              value: "admin"

            - name: DB_VENDOR
              value: "mysql"
            - name: DB_ADDR
              value: "mysqlproxy.infra"
            - name: DB_PORT
              value: "3306"
            - name: DB_DATABASE
              value: "kc_test_db"
            - name: DB_USER
              value: "kc-test"
            - name: DB_PASSWORD
              value: "somePwd"

          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /auth/
              port: http
            initialDelaySeconds: 120
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /auth/
              port: http
            initialDelaySeconds: 30
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 500m
              memory: 1024Mi
            requests:
              cpu: 300m
              memory: 512Mi

      terminationGracePeriodSeconds: 60

@unguiculus
Member Author

I just tested with MySQL and two replicas. Keycloak started successfully and helm test passed.
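
For reference, the test amounts to values along these lines (host and credentials are placeholders; the keys match the chart's persistence section shown above):

keycloak:
  replicas: 2
  persistence:
    deployPostgres: false
    dbVendor: mysql
    dbHost: mysql.example.com
    dbPort: 3306
    dbName: keycloak
    dbUser: keycloak
    # only used because no existing secret is specified
    dbPassword: changeme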

@swistaczek
Contributor

swistaczek commented Oct 10, 2018

Looks good to me! @axdotl, please unblock the process :).

@unguiculus
Member Author

@swistaczek Did you test it? What database? How many replicas?

@swistaczek
Contributor

@unguiculus Yes, I tested this code with Istio 1.0.2 and Google Cloud SQL (cloud-sql-proxy).

@unguiculus
Member Author

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 10, 2018
@unguiculus
Member Author

/retest

@scottrigby
Member

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Oct 10, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: scottrigby, unguiculus

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 2df1110 into helm:master Oct 10, 2018
@vsomasvr

@unguiculus That's all fantastic work! I have a question related to deploying to Google Kubernetes Engine.

I have tried to deploy using an external Postgres DB (using the values.yaml file specified at the bottom).
I observe the following behavior:

  1. Logs from the first two pods created after deployment indicate that the cluster was not formed

pod#1 Log

Received new cluster view for channel ejb: [keycloak-0|0] (1) [keycloak-0]

pod#2 Log

Received new cluster view for channel ejb: [keycloak-1|0] (1) [keycloak-1]

At this point the application does not work properly (the authentication itself fails).

  2. When scaling down to 1 and back up to 2, this time I see the cluster trying to form, but it eventually fails, causing pod#2 to crash (and it keeps crashing after every restart)

pod#1 log

Received new cluster view for channel ejb: [keycloak-0|1] (2) [keycloak-0, keycloak-1]

pod#2 log

Received new cluster view for channel ejb: [keycloak-0|1] (2) [keycloak-0, keycloak-1]

  3. When there is only one replica, the app works as expected, but that's of no use since there is no Keycloak cluster

Any insight into figuring this issue out would be a great help.

keycloak:
  replicas: 2
  username: admin
  password: password
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
  service:
    type: NodePort
    port: 8080
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.global-static-ip-name: "keycloak-static-ip"
      kubernetes.io/ingress.allow-http: "false"
    path: /*
    hosts:
      - auth.example.com
    tls:
      - secretName: keycloak-tls
  persistence:

    # Disable deployment of the PostgreSQL chart
    deployPostgres: false

    # The database vendor. Can be either "postgres", "mysql", "mariadb", or "h2"
    dbVendor: postgres

    ## The following values only apply if "deployPostgres" is set to "false"

    # Optionally specify an existing secret
    # existingSecret: "my-database-password-secret"
    # existingSecretKey: "password-key in-my-database-secret"

    dbName: keycloakdb
    dbHost: ip-of-db-host
    dbPort: 5432 # 5432 is PostgreSQL's default port. For MySQL it would be 3306
    dbUser: keycloakuser

    # Only used if no existing secret is specified. In this case a new secret is created
    dbPassword: password

@sassko

sassko commented Oct 17, 2018

Testing stormmore's KUBE_PING version directly works, but more than one replica with the default settings results in the following issue:

14:18:41,243 DEBUG [org.infinispan.util.ModuleProperties] (MSC service thread 1-1) No module command extensions to load
14:18:41,270 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000078: Starting JGroups channel ejb
14:18:41,275 DEBUG [org.jboss.as.clustering.jgroups] (MSC service thread 1-1) Creating fork channel web from channel ejb
14:18:41,276 INFO [org.infinispan.CLUSTER] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel ejb: [keycloak-0|1] (2) [keycloak-0, keycloak-1]
14:18:41,276 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) Joined: [keycloak-0, keycloak-1], Left: []
[...]
14:19:45,000 DEBUG [org.jgroups.protocols.FD_ALL] (Timer runner-1,null,null) haven't received a heartbeat from keycloak-0 for 64785 ms, adding it to suspect list
14:19:45,001 DEBUG [org.jgroups.protocols.FD_ALL] (Timer runner-1,null,null) keycloak-1: suspecting [keycloak-0]
14:19:45,006 DEBUG [org.jgroups.protocols.FD_SOCK] (thread-12,ejb,keycloak-1) keycloak-1: broadcasting unsuspect(keycloak-0)
14:19:45,007 DEBUG [org.jgroups.protocols.FD_SOCK] (thread-12,ejb,keycloak-1) keycloak-1: broadcasting unsuspect(keycloak-0)
14:20:26,716 INFO [org.jboss.as.server] (Thread-2) WFLYSRV0220: Server shutdown has been requested via an OS signal

Does anybody have an idea what might be the reason?

thx

darioblanco pushed a commit to minddocdev/charts that referenced this pull request Oct 22, 2018
* The Docker image has added support for DNS_PING which is now used
  instead of JDBC_PING
* The StatefulSet is updated to `apps/v1`

Signed-off-by: Reinhard Nägele <unguiculus@gmail.com>
emas80 pushed a commit to faceit/charts that referenced this pull request Oct 24, 2018
* The Docker image has added support for DNS_PING which is now used
  instead of JDBC_PING
* The StatefulSet is updated to `apps/v1`

Signed-off-by: Reinhard Nägele <unguiculus@gmail.com>
Jnig pushed a commit to Jnig/charts that referenced this pull request Nov 13, 2018
* The Docker image has added support for DNS_PING which is now used
  instead of JDBC_PING
* The StatefulSet is updated to `apps/v1`

Signed-off-by: Reinhard Nägele <unguiculus@gmail.com>
Signed-off-by: Jakob Niggel <info@jakobniggel.de>
wgiddens pushed a commit to wgiddens/charts that referenced this pull request Jan 18, 2019
* The Docker image has added support for DNS_PING which is now used
  instead of JDBC_PING
* The StatefulSet is updated to `apps/v1`

Signed-off-by: Reinhard Nägele <unguiculus@gmail.com>