Operator can't update Minio and Console images #241
Comments
Thanks for reporting @immanuelfodor, we'll take a look.
FWIW @immanuelfodor, using NFS as the backend drives for MinIO is neither recommended nor supported. You should use regular drives. NFS doesn't provide the consistency guarantees that disk filesystems do, so you are prone to unexpected bugs when using NFS at all, let alone in distributed mode.
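For illustration only (not part of the thread): in the Tenant spec, the storage backend is chosen through the volume claim template's `storageClassName`, so moving off NFS is mostly a matter of pointing at a disk-backed class. A minimal sketch, where `local-nvme` is a placeholder for whatever disk-backed StorageClass your cluster provides:

```yaml
# Sketch of a Tenant zone using a disk-backed StorageClass instead of an
# NFS client provisioner; "local-nvme" is a placeholder class name.
zones:
  - servers: 4
    volumesPerServer: 4
    volumeClaimTemplate:
      metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local-nvme
```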
I tried reproducing but couldn't
NOTE: I am using our own

```diff
diff --git a/examples/tenant.yaml b/examples/tenant.yaml
index 4f3ad14..fd3dc80 100644
--- a/examples/tenant.yaml
+++ b/examples/tenant.yaml
@@ -27,7 +27,7 @@ spec:
     prometheus.io/scrape: "true"
   ## Registry location and Tag to download MinIO Server image
-  image: minio/minio:RELEASE.2020-08-05T21-34-13Z
+  image: minio/minio:RELEASE.2020-08-07T01-23-07Z
   zones:
     - servers: 4
       volumesPerServer: 4
@@ -39,9 +39,9 @@ spec:
           - ReadWriteOnce
         resources:
           requests:
-            storage: 1Ti
+            storage: 10Gi
         # if you have direct-csi installed https://github.com/minio/direct-csi
-        # storageClassName: direct.csi.min.io
+        storageClassName: direct.csi.min.io
   ## Mount path where PV will be mounted inside container(s). Defaults to "/export".
   mountPath: /export
   ## Sub path inside Mount path where MinIO starts. Defaults to "".
```
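As a hedged sketch (the namespace and file path are assumptions based on the example manifests), a change like the diff above would be applied and observed with standard kubectl commands:

```shell
# Apply the edited Tenant spec; the operator should then roll out the new image
kubectl apply -f examples/tenant.yaml

# Watch the MinIO pods being recreated with the new image
# (the tenant's namespace is an assumption here)
kubectl get pods -n default -w
```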
Hmm, so you think the problem lies in using NFS in the first place, interesting. So if I replace it with direct-csi, could it apply the change instantly? I saw the direct-csi remark in the readme, but as the NFS provisioning was already in place, I went for using that instead of setting up something yet unknown. This is a 3-node k8s cluster running on 3 identical VMs on NVMe; could I use the VMs' filesystem with direct-csi, or is it a must to use a raw block device?
I don't think it is related to CSI - that is a general recommendation on why NFS should be avoided, MinIO over NAS is not a good idea unless you are using
Are you saying that updates were never really applied? @immanuelfodor
Oh, thanks, I was not familiar with that command. However, I feel I couldn't start the pods this way ( Yes, it has never applied it; the pods are untouched. If I delete the operator pod and it's recreated by k8s, the pod output hangs at the first "Attempting..." line and never produces another log line.
Well, it has just produced some new error lines, then hung again at the next attempt:
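For readers following along, a sketch of how the operator's behaviour can be inspected; the deployment, StatefulSet, and namespace names below are assumptions based on a default install, not taken from the thread:

```shell
# Follow the operator log to see each update attempt as it happens
kubectl logs -f deployment/minio-operator -n minio-operator

# Look for update errors in the events on the tenant's StatefulSets
# (tenant namespace assumed to be "default")
kubectl describe statefulsets -n default
```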
There are fundamental problems happening on the system - most probably, I think, because of your usage of NFS @immanuelfodor - it looks like the local FS for the container is not able to handle even simple update concurrency requirements.
Now I'm fully convinced to switch to
Trying out direct-csi, I've bumped into minio/directpv#11, as the https://github.com/minio/operator/blob/master/docs/using-direct-csi.md page doesn't mention the new
Understood, it was an issue with the master branch. Moved the offending commit to the dev branch.
Yes, the direct-csi apply is fixed (thank you for that); my comment above is about adding the
In the meantime, somebody has created the PR for it: #244
I could finally try out the operator with the
So it was indeed an NFS-related problem. Maybe a huge warning in the readme would be great, to discourage using NFS as a storage driver. However, the console pod is still untouched after 10 minutes. I was trying to update both images as in:

```diff
diff --git a/k8s/operators/minio/minio-tenant-console.yml b/k8s/operators/minio/minio-tenant-console.yml
index 3435c52..f7f3826 100644
--- a/k8s/operators/minio/minio-tenant-console.yml
+++ b/k8s/operators/minio/minio-tenant-console.yml
@@ -104,7 +104,7 @@ spec:
     prometheus.io/port: "9000"
     prometheus.io/scrape: "true"
   ## Registry location and Tag to download MinIO Server image
-  image: minio/minio:RELEASE.2020-07-31T03-39-05Z
+  image: minio/minio:RELEASE.2020-08-07T01-23-07Z
   ## Secret with credentials to be used by MinIO instance.
   credsSecret:
     name: minio-creds-secret
@@ -137,7 +137,7 @@ spec:
   ## Define configuration for Console (Graphical user interface for MinIO)
   console:
-    image: minio/console:v0.3.4
+    image: minio/console:v0.3.9
     replicas: 1
     consoleSecret:
      name: console-secret
```
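A quick way to check which images the pods actually picked up after such an edit (a sketch; the tenant namespace is an assumption):

```shell
# Print each pod's name and first container image in the tenant namespace,
# to compare against the tags set in the Tenant spec
kubectl get pods -n default \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```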
Not sure why that is not updated, it may be a bug // cc @dvaldivia @nitisht
Yes, this is a bug, fixing
Console image update now works with the latest release, thank you for the fix!
Expected Behavior
Upon changing the Minio and Console image tags in the Tenant specification, the operator updates these instances automatically.
Current Behavior
The operator pod log says updating the image is unsuccessful and changing this field is forbidden.
Possible Solution
Steps to Reproduce (for bugs)
Context
I'd like to make the following change to the Tenant spec, then apply it to the cluster:
The operator says:
Regression
Operator version v3.0.10, but the problem existed before this release. While no data was stored in the cluster, I'd delete the Tenant deployment, then recreate it from scratch. However, this method assigns new storage to the Minio pods, as I use a dynamically provisioned storage class (NFS client provisioner). Additionally, tearing down Minio and then recreating the cluster causes downtime; it would be great if the operator could handle updates smoothly.
Your Environment
Operator version (`minio-operator`): v3.0.10
OS and kernel (`uname -a`): Linux rke-node1 4.18.0-147.8.1.el8_1.x86_64 #1 SMP Thu Apr 9 13:49:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux (CentOS 8)