
Operator can't update Minio and Console images #241

Closed
immanuelfodor opened this issue Aug 7, 2020 · 17 comments · Fixed by #247

@immanuelfodor

immanuelfodor commented Aug 7, 2020

Expected Behavior

Upon changing the Minio and Console image tags in the Tenant specification, the operator updates these instances automatically.

Current Behavior

The operator pod log says updating the image is unsuccessful and changing this field is forbidden.

Possible Solution

  • Extend the list of fields that are allowed to change so that Minio and Console image updates go through, with the operator implementing the image update logic.
  • Until the operator allows Minio and Console image updates, provide a manual update guide that doesn't conflict with the operator deployment when new images of the two managed services are released (a rough sketch follows below).
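
As an interim manual route, something along these lines might work. This is only a sketch, and the operator may well revert it on its next reconcile; the StatefulSet name minio-zone-0 comes from the logs below, while the console Deployment name is a guess (check with kubectl -n minio get deploy,sts first):

# Stop-gap until the operator handles image updates itself.
# Assumptions: tenant namespace "minio", StatefulSet "minio-zone-0",
# console Deployment name guessed as "minio-console".
kubectl -n minio set image statefulset/minio-zone-0 '*=minio/minio:RELEASE.2020-08-07T01-23-07Z'
kubectl -n minio set image deployment/minio-console '*=minio/console:v0.3.8'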

Steps to Reproduce (for bugs)

  1. Deploy the operator, then create a Tenant spec with Console
  2. Wait a couple of days until new Minio and Console versions are released
  3. Update the image tags and apply the new Tenant spec
  4. The operator reports that the spec contains forbidden changes, see the logs below (the commands after this list show one way to tail them)
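
Tailing the operator pod log is enough to see these errors. A sketch, assuming the operator runs in its own namespace with a recognizable pod name; adjust both to your install:

kubectl -n minio-operator get pods
kubectl -n minio-operator logs -f <operator-pod-name>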

Context

I'd like to make the following change to the Tenant spec, then apply it to the cluster:

diff --git a/k8s/operators/minio/minio-tenant-console.yml b/k8s/operators/minio/minio-tenant-console.yml
index 3435c52..f7f3826 100644
--- a/k8s/operators/minio/minio-tenant-console.yml
+++ b/k8s/operators/minio/minio-tenant-console.yml
@@ -104,7 +104,7 @@ spec:
       prometheus.io/port: "9000"
       prometheus.io/scrape: "true"
   ## Registry location and Tag to download MinIO Server image
-  image: minio/minio:RELEASE.2020-07-31T03-39-05Z
+  image: minio/minio:RELEASE.2020-08-07T01-23-07Z
   ## Secret with credentials to be used by MinIO instance.
   credsSecret:
     name: minio-creds-secret
@@ -137,7 +137,7 @@ spec:
 
   ## Define configuration for Console (Graphical user interface for MinIO)
   console:
-    image: minio/console:v0.3.4
+    image: minio/console:v0.3.8
     replicas: 1
     consoleSecret:
       name: console-secret

The operator says:

...
I0807 05:02:23.524437       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 05:02:53.504581       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 05:02:53.517455       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 05:03:23.507932       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 05:03:23.520986       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 05:03:27.891891       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 05:04:19.182108       1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
E0807 05:04:19.187459       1 main-controller.go:332] error syncing 'minio/minio': StatefulSet.apps "minio-zone-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I0807 05:04:19.197445       1 main-controller.go:327] Successfully synced 'minio/minio'
E0807 05:04:19.242258       1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
E0807 05:04:23.503458       1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
E0807 05:04:53.511843       1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
I0807 05:05:23.522628       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 05:05:35.512001       1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
E0807 05:05:35.515879       1 main-controller.go:332] error syncing 'minio/minio': StatefulSet.apps "minio-zone-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I0807 05:05:35.526066       1 main-controller.go:327] Successfully synced 'minio/minio'
E0807 05:05:35.535889       1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
...

Regression

Operator version v3.0.10, but the problem existed before this release as well. While no data was stored in the cluster yet, I would simply delete the Tenant deployment and recreate it from scratch (sketched below). However, this method assigns new storage to the Minio pods because I use a dynamically provisioned storage class (NFS client provisioner). Additionally, tearing down Minio and recreating the cluster causes downtime; it would be great if the operator could handle the update smoothly.
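
For reference, the delete-and-recreate workaround was roughly the following; the file path matches the diff above, and in my setup this loses the association with the previously provisioned NFS volumes:

# "tenant" is the resource name defined by the Tenant CRD
kubectl -n minio delete tenant minio
kubectl apply -f k8s/operators/minio/minio-tenant-console.yml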

Your Environment

  • Version used (minio-operator): v3.0.10
  • Environment name and version (e.g. kubernetes v1.17.2): kubernetes v1.18.6 (RKE)
  • Server type and version: Proxmox
  • Operating System and version (uname -a): Linux rke-node1 4.18.0-147.8.1.el8_1.x86_64 #1 SMP Thu Apr 9 13:49:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux (CentOS 8)
  • Link to your deployment file: the original Tenant spec with line comments removed
apiVersion: minio.min.io/v1
kind: Tenant
metadata:
  name: minio
  namespace: minio
spec:
  metadata:
    labels:
      app: minio
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"
  image: minio/minio:RELEASE.2020-07-31T03-39-05Z
  credsSecret:
    name: minio-creds-secret
  zones:
    - volumesPerServer: 1
      servers: 4
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Ti
          storageClassName: nfs-client
  mountPath: /export
  console:
    image: minio/console:v0.3.4
    replicas: 1
    consoleSecret:
      name: console-secret
    metadata:
      labels:
        app: console
    externalCertSecret:
      name: minio-tls-cert
      type: cert-manager.io/v1alpha2
  externalCertSecret:
    name: minio-tls-cert
    type: cert-manager.io/v1alpha2
  requestAutoCert: false
  podManagementPolicy: Parallel
  liveness:
    initialDelaySeconds: 10
    periodSeconds: 1
    timeoutSeconds: 1
@nitisht self-assigned this Aug 7, 2020
@nitisht added the bug label Aug 7, 2020
@nitisht
Contributor

nitisht commented Aug 7, 2020

Thanks for reporting @immanuelfodor we'll take a look

@harshavardhana
Member

  zones:
    - volumesPerServer: 1
      servers: 4
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Ti
          storageClassName: nfs-client

FWIW @immanuelfodor, using NFS as the backend storage for MinIO is neither recommended nor supported; you should use regular drives. NFS doesn't provide the consistency guarantees that local disk filesystems do, so you are prone to unexpected bugs, all the more so in distributed mode.

@harshavardhana
Member

I0807 05:03:23.520986 1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 05:03:27.891891 1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 05:04:19.182108 1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
E0807 05:04:19.187459 1 main-controller.go:332] error syncing 'minio/minio': StatefulSet.apps "minio-zone-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I0807 05:04:19.197445 1 main-controller.go:327] Successfully synced 'minio/minio'
E0807 05:04:19.242258 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
E0807 05:04:23.503458 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
E0807 05:04:53.511843 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
I0807 05:05:23.522628 1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 05:05:35.512001 1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
E0807 05:05:35.515879 1 main-controller.go:332] error syncing 'minio/minio': StatefulSet.apps "minio-zone-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I0807 05:05:35.526066 1 main-controller.go:327] Successfully synced 'minio/minio'
E0807 05:05:35.535889 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
...

I tried reproducing but couldn't

I0807 09:36:08.538566       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:08.949721       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:08.972481       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:19.702443       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:19.735666       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:21.373689       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:21.400228       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:24.948531       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:24.964613       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:27.551141       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:27.608324       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:28.696493       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:28.767293       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:51.438803       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:36:51.536337       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:37:21.354781       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:37:21.377449       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:37:50.460165       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 09:38:09.958929       1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
I0807 09:38:10.071693       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:10.174322       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:10.275427       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:21.362896       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:21.388559       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:43.613785       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:43.634781       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:51.354186       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:51.377393       1 main-controller.go:327] Successfully synced 'default/minio'
I0807 09:38:51.581752       1 main-controller.go:327] Successfully synced 'default/minio'

NOTE: I am using our own direct-csi here with proper local drives

diff --git a/examples/tenant.yaml b/examples/tenant.yaml
index 4f3ad14..fd3dc80 100644
--- a/examples/tenant.yaml
+++ b/examples/tenant.yaml
@@ -27,7 +27,7 @@ spec:
       prometheus.io/scrape: "true"
 
   ## Registry location and Tag to download MinIO Server image
-  image: minio/minio:RELEASE.2020-08-05T21-34-13Z
+  image: minio/minio:RELEASE.2020-08-07T01-23-07Z
   zones:
     - servers: 4
       volumesPerServer: 4
@@ -39,9 +39,9 @@ spec:
             - ReadWriteOnce
           resources:
             requests:
-              storage: 1Ti
+              storage: 10Gi
         # if you have direct-csi installed https://github.com/minio/direct-csi
-        # storageClassName: direct.csi.min.io
+          storageClassName: direct.csi.min.io
   ## Mount path where PV will be mounted inside container(s). Defaults to "/export".
   mountPath: /export
   ## Sub path inside Mount path where MinIO starts. Defaults to "".

@harshavardhana removed the bug label Aug 7, 2020
@immanuelfodor
Author

Hmm, so you think the problem lies with using NFS in the first place, interesting. So if I replace it with direct-csi, could it apply the change instantly? I saw the direct-csi remark in the readme, but since the NFS provisioning was already in place, I went with that instead of setting up something unfamiliar. This is a 3-node k8s cluster running on 3 identical VMs on NVMe; could I use the VMs' filesystem with direct-csi, or is a raw block device a must?

@harshavardhana
Member

Hmm, so you think the problem lies with using NFS in the first place, interesting. So if I replace it with direct-csi, could it apply the change instantly? I saw the direct-csi remark in the readme, but since the NFS provisioning was already in place, I went with that instead of setting up something unfamiliar. This is a 3-node k8s cluster running on 3 identical VMs on NVMe; could I use the VMs' filesystem with direct-csi, or is a raw block device a must?

I don't think it is related to CSI; that is a general recommendation on why NFS should be avoided. MinIO over NAS is not a good idea unless you are using minio gateway nas.
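
That gateway mode runs against a single mounted path, roughly as follows; this is only a sketch, with placeholder credentials and /export standing in for the NFS mount:

export MINIO_ACCESS_KEY=<access-key>   # placeholder
export MINIO_SECRET_KEY=<secret-key>   # placeholder
minio gateway nas /export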

I0807 05:03:27.891891 1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 05:04:19.182108 1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
E0807 05:04:19.187459 1 main-controller.go:332] error syncing 'minio/minio': StatefulSet.apps "minio-zone-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I0807 05:04:19.197445 1 main-controller.go:327] Successfully synced 'minio/minio'
E0807 05:04:19.242258 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
E0807 05:04:23.503458 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
E0807 05:04:53.511843 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update
I0807 05:05:23.522628 1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
I0807 05:05:35.512001 1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully
E0807 05:05:35.515879 1 main-controller.go:332] error syncing 'minio/minio': StatefulSet.apps "minio-zone-0" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I0807 05:05:35.526066 1 main-controller.go:327] Successfully synced 'minio/minio'
E0807 05:05:35.535889 1 main-controller.go:332] error syncing 'minio/minio': MinIO doesn't seem to have enough quorum to proceed with binary update

Are you saying that updates were never really applied? @immanuelfodor

@immanuelfodor
Author

immanuelfodor commented Aug 7, 2020

Oh, thanks, I was not familiar with that command. However, I don't think I could start the pods that way (gateway nas /export): there is no pod command key in the CRD. It would be great to keep the NFS storage this way instead of filling up the limited node filesystem.

Yes, the update has never been applied; the pods are untouched. If I delete the operator pod and it's recreated by k8s, the operator log hangs at the first Attempting... line and never produces another line.

I0807 14:26:43.042383       1 main.go:66] Starting MinIO Operator
I0807 14:26:43.043236       1 main-controller.go:201] Setting up event handlers
I0807 14:26:43.043269       1 main-controller.go:254] Starting Tenant controller
I0807 14:26:43.043273       1 main-controller.go:257] Waiting for informer caches to sync
I0807 14:26:43.143530       1 main-controller.go:262] Starting workers
I0807 14:26:43.157222       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 14:26:43.176647       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
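
(For completeness, "deleting the operator pod" above just means removing it and letting its Deployment recreate it; the namespace and pod name here are placeholders:)

kubectl -n <operator-namespace> delete pod <minio-operator-pod-name>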

@immanuelfodor
Author

Well, it has just produced some new error lines, then hung again at the next attempt:

I0807 14:26:43.042383       1 main.go:66] Starting MinIO Operator
I0807 14:26:43.043236       1 main-controller.go:201] Setting up event handlers
I0807 14:26:43.043269       1 main-controller.go:254] Starting Tenant controller
I0807 14:26:43.043273       1 main-controller.go:257] Waiting for informer caches to sync
I0807 14:26:43.143530       1 main-controller.go:262] Starting workers
I0807 14:26:43.157222       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 14:26:43.176647       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z
E0807 14:45:08.383568       1 main-controller.go:332] error syncing 'minio/minio': MinIO Server binary update failed with We encountered an internal error, please try again. (Server update failed, please do not restart the servers yet: failed with rename /usr/bin/.minio.old /usr/bin/..minio.old.old: no such file or directory)
I0807 14:45:08.396921       1 main-controller.go:327] Successfully synced 'minio/minio'
I0807 14:45:08.504628       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z

@harshavardhana
Member

harshavardhana commented Aug 7, 2020

E0807 14:45:08.383568 1 main-controller.go:332] error syncing 'minio/minio': MinIO Server binary update failed with We encountered an internal error, please try again. (Server update failed, please do not restart the servers yet: failed with rename /usr/bin/.minio.old /usr/bin/..minio.old.old: no such file or directory)

There are fundamental problems happening on this system, most probably because of your usage of NFS @immanuelfodor; it looks like the container's local filesystem cannot handle even the simple concurrency requirements of the update.

@immanuelfodor
Author

Now I'm fully convinced to switch to direct-csi; I'll look into it over the weekend. I'd like to keep this ticket open until then and report back on what happened, if that's okay with you. Maybe this will be worth a huge ⚠️ warning in the readme if it works fine with direct-csi 😃

@immanuelfodor
Author

Trying out direct-csi, I've bumped into minio/directpv#11, as the https://github.com/minio/operator/blob/master/docs/using-direct-csi.md page doesn't mention the new KUBELET_DIR_PATH variable that was added in minio/directpv#14. With RKE, it has to be KUBELET_DIR_PATH=/var/lib/kubelet to avoid the same pod mount error the original reporter had. I think it would be desirable to include this variable in the docs here.
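
In my case the fix was simply exporting the variable before running the direct-csi install steps from the doc above. A sketch only, assuming your install path actually reads this variable from the environment (that depends on the direct-csi version):

# RKE keeps the kubelet root under /var/lib/kubelet, so the CSI driver's
# hostPath mounts must be generated with that prefix.
export KUBELET_DIR_PATH=/var/lib/kubelet
# ...then proceed with the installation described in docs/using-direct-csi.md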

@harshavardhana
Member

Trying out direct-csi, I've bumped into minio/direct-csi#11, as the https://github.com/minio/operator/blob/master/docs/using-direct-csi.md page doesn't mention the new KUBELET_DIR_PATH variable that was added in minio/direct-csi#14. With RKE, it has to be KUBELET_DIR_PATH=/var/lib/kubelet to avoid the same pod mount error the original reporter had. I think it would be desirable to include this variable in the docs here.

Understood; it was an issue with the master branch. Moved the offending commit to the dev branch.

@immanuelfodor
Author

Yes, the direct-csi apply is fixed (thank you for that). My comment above is about adding KUBELET_DIR_PATH to the CSI setup docs here in this repo: if the variable is not set, it produces invalid hostPath volumes that prevent the pods from starting, as in minio/directpv#11. Or did moving the commit fix that as well?

@immanuelfodor
Author

In the meantime, somebody has created the PR for it: #244

@immanuelfodor
Author

immanuelfodor commented Aug 8, 2020

I could finally try out whether the operator can update the images with the direct-csi driver, and the answer is half-yes: the Minio image was updated fine:

I0808 17:31:19.570268       1 main-controller.go:598] Attempting Tenant minio MinIO server version minio/minio:RELEASE.2020-07-31T03-39-05Z, to: minio/minio:RELEASE.2020-08-07T01-23-07Z                                                    
I0808 17:33:17.530879       1 main-controller.go:652] Applied MinIO server binary update to the tenant minio from: 2020-07-31T03:39:05Z, to: 2020-08-07T01-23-07Z successfully                                                               
I0808 17:33:17.558978       1 main-controller.go:327] Successfully synced 'minio/minio'                                                                       

So it was indeed an NFS-related problem. A huge warning in the readme would be great, to discourage using NFS as the storage backend.

However, the console pod is still untouched after 10 minutes. I was trying to update both images as in:

diff --git a/k8s/operators/minio/minio-tenant-console.yml b/k8s/operators/minio/minio-tenant-console.yml
index 3435c52..f7f3826 100644
--- a/k8s/operators/minio/minio-tenant-console.yml
+++ b/k8s/operators/minio/minio-tenant-console.yml
@@ -104,7 +104,7 @@ spec:
       prometheus.io/port: "9000"
       prometheus.io/scrape: "true"
   ## Registry location and Tag to download MinIO Server image
-  image: minio/minio:RELEASE.2020-07-31T03-39-05Z
+  image: minio/minio:RELEASE.2020-08-07T01-23-07Z
   ## Secret with credentials to be used by MinIO instance.
   credsSecret:
     name: minio-creds-secret
@@ -137,7 +137,7 @@ spec:
 
   ## Define configuration for Console (Graphical user interface for MinIO)
   console:
-    image: minio/console:v0.3.4
+    image: minio/console:v0.3.9
     replicas: 1
     consoleSecret:
       name: console-secret
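
To double-check which images the tenant pods are actually running, a plain kubectl one-liner is enough (nothing operator-specific assumed here):

kubectl -n minio get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'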

@harshavardhana
Member

However, the console pod is still untouched after 10 minutes. I was trying to update both images as in:

Not sure why that is not updated, it may be a bug // cc @dvaldivia @nitisht

@nitisht
Contributor

nitisht commented Aug 10, 2020

However, the console pod is still untouched after 10 minutes. I was trying to update both images as in:

Not sure why that is not updated, it may be a bug // cc @dvaldivia @nitisht

Yes, this is a bug; fixing it.

@immanuelfodor
Author

Console image update now works with the latest release, thank you for the fix!
