
Error: First path segment in URL cannot contain colon #2281

Open
stephan2012 opened this issue Aug 19, 2024 · 7 comments

@stephan2012

Expected Behavior

After upgrading the Operator from v5.x to v6.0.2, the Tenant yields an error on startup:

$ kubectl logs -n minio-tenant minio-tenant-pool-0-0 
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
{"level":"FATAL","time":"2024-08-19T19:49:23.918222436Z","message":"parse \"'https://s3-console.lab2.company.com/'\": first path segment in URL cannot contain colon","error":{"message":"parse \"'https://s3-console.lab2.company.com/'\": first path segment in URL cannot contain colon","source":["cmd/server-main.go:788:cmd.serverMain()"]}}

Changing MINIO_BROWSER_REDIRECT_URL from https://s3-console.lab2.company.com/ to s3-console.lab2.company.com/ does not help:

$ kubectl logs -n minio-tenant minio-tenant-pool-0-0 
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
{"level":"FATAL","time":"2024-08-19T19:51:30.623246278Z","message":"unexpected scheme found ","error":{"message":"unexpected scheme found ","source":["cmd/server-main.go:788:cmd.serverMain()"]}}

Current Behavior

See above.

Possible Solution

Fix URL parsing.

Steps to Reproduce (for bugs)

Not sure, but it may be enough to set MINIO_BROWSER_REDIRECT_URL to a full URL.

Context

Upgrading from v5.x to v6.0.2.

Regression

Not sure.

Your Environment

  • Version used (minio-operator): v6.0.2
  • Environment name and version (e.g. kubernetes v1.17.2): v1.29.7
  • Server type and version: Debian GNU/Linux 12 (Bookworm)
  • Operating System and version (uname -a): Linux n0251 6.9.7+bpo-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.9.7-1~bpo12+1 (2024-07-03) x86_64 GNU/Linux
  • Link to your deployment file: N/A
@ramondeklein
Contributor

ramondeklein commented Aug 20, 2024

Can you post your full Tenant resource? You can redact secrets, of course. I'd also like to know the MinIO version that you are using. It looks like MINIO_BROWSER_REDIRECT_URL is parsed in cmd/common-main.go:707:

if redirectURL := env.Get(config.EnvBrowserRedirectURL, ""); redirectURL != "" {
    u, err := xnet.ParseHTTPURL(redirectURL)
    if err != nil {
        logger.Fatal(err, "Invalid MINIO_BROWSER_REDIRECT_URL value in environment variable")
    }
    // ...
}

However, that should return a different error message.

@stephan2012
Author

Here's the output of kubectl get tenants.minio.min.io -n minio-tenant minio-tenant -o yaml:

apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  annotations:
    meta.helm.sh/release-name: minio-tenant
    meta.helm.sh/release-namespace: minio-tenant
    prometheus.io/path: /minio/v2/metrics/cluster
    prometheus.io/port: "9000"
    prometheus.io/scheme: http
    prometheus.io/scrape: "true"
  creationTimestamp: "2024-04-24T12:54:58Z"
  generation: 7
  labels:
    app: minio
    app.kubernetes.io/managed-by: Helm
  name: minio-tenant
  namespace: minio-tenant
  resourceVersion: "307585345"
  uid: 4f454e99-8a8e-4297-9d31-3ad00004ca02
spec:
  buckets:
  - name: lab2-backup
  configuration:
    name: minio-tenant-config
  features:
    bucketDNS: true
    enableSFTP: false
  image: repo.lab2.int.company.com:50443/quay.io/minio/minio:RELEASE.2024-08-17T01-24-54Z
  imagePullPolicy: IfNotPresent
  logging:
    anonymous: false
    json: true
    quiet: true
  mountPath: /export
  podManagementPolicy: Parallel
  pools:
  - containerSecurityContext:
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    name: pool-0
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 1
    volumeClaimTemplate:
      metadata:
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
        storageClassName: rook-cephfs
    volumesPerServer: 1
  prometheusOperator: false
  requestAutoCert: false
  subPath: /data
status:
  availableReplicas: 0
  certificates:
    autoCertEnabled: false
    customCertificates: {}
  currentState: Provisioning default buckets
  drivesOnline: 1
  healthStatus: green
  pools:
  - legacySecurityContext: false
    ssName: minio-tenant-pool-0
    state: PoolInitialized
  provisionedBuckets: true
  revision: 0
  syncVersion: v6.0.0
  usage:
    capacity: 531862913024
    rawCapacity: 536870912000
    rawUsage: 5007998976
    usage: 5007998976
  writeQuorum: 1

And here's the content of the config secret (kubectl get secret -n minio-tenant minio-tenant-config -o jsonpath='{.data.config\.env}' | base64 -d):

export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export MINIO_BROWSER_REDIRECT_URL='https://s3-console.lab2.int.company.com/'
export MINIO_DOMAIN='s3.lab2.int.company.com'
export TZ='Europe/Berlin'

MinIO version is RELEASE.2024-08-17T01-24-54Z.

Please let me know if you need more details.

@ramondeklein
Contributor

ramondeklein commented Aug 20, 2024

Thanks, I can reproduce it by moving MINIO_BROWSER_REDIRECT_URL and MINIO_DOMAIN to the configuration secret. I had these values in .tenant.spec.env, where the problem doesn't occur. When I SSH into the pod, I see this:

$ cat /tmp/minio/config.env
export MINIO_ARGS="https://my-minio-pool-0-{0...3}.my-minio-hl.my-minio.svc.cluster.local/export{0...3}/data"
export MINIO_BROWSER_REDIRECT_URL="'https://s3-console.lab2.int.company.com/'"
export MINIO_DOMAIN="'s3.lab2.int.company.com'"
export MINIO_PROMETHEUS_JOB_ID="minio-job"
export MINIO_ROOT_PASSWORD="minio123"
export MINIO_ROOT_USER="minio"
export MINIO_SERVER_URL="https://minio.my-minio.svc.cluster.local:443"
export MINIO_UPDATE="on"
export MINIO_UPDATE_MINISIGN_PUBKEY="RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav"
export TZ="'Europe/Berlin'"

It looks like the values carry the extra quotes that you put into the secret. I guess we should be able to deal with that. For now, you could remove the quotes from the three variables. Moving the variables to .tenant.spec.env of the Tenant resource may be an even better option.
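Concretely, the config secret shown earlier in the thread would become (quotes dropped from the values; none of them contain spaces or special characters, so no quoting is needed):

```shell
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export MINIO_BROWSER_REDIRECT_URL=https://s3-console.lab2.int.company.com/
export MINIO_DOMAIN=s3.lab2.int.company.com
export TZ=Europe/Berlin
```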

@ramondeklein
Contributor

We should fix parsing of the configuration secret so it can deal with quotes properly. Users expect this file to be parsed the way sh would parse it, so it should handle basic quoting. I don't think we need to fully replicate sh parsing, though.
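A minimal sketch of that kind of handling (hypothetical helper names, not the actual Operator code): strip one layer of matching single or double quotes from each value when reading a config.env line.

```go
package main

import (
	"fmt"
	"strings"
)

// unquote removes one layer of matching single or double quotes, roughly
// the way a shell treats a simply quoted value. It deliberately does not
// handle escapes or concatenated quoting the way sh would.
func unquote(v string) string {
	if len(v) >= 2 && (v[0] == '\'' || v[0] == '"') && v[len(v)-1] == v[0] {
		return v[1 : len(v)-1]
	}
	return v
}

func main() {
	line := `export MINIO_BROWSER_REDIRECT_URL='https://s3-console.lab2.int.company.com/'`
	kv := strings.TrimPrefix(line, "export ")
	key, val, _ := strings.Cut(kv, "=")
	fmt.Printf("%s=%s\n", key, unquote(val))
	// prints: MINIO_BROWSER_REDIRECT_URL=https://s3-console.lab2.int.company.com/
}
```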

@stephan2012
Author

The comments in the Helm values file explicitly mention the use of export statements, suggesting the file is a shell snippet:

  # The values should be a series of export statements to set environment variables for the Tenant.

and a few lines below:

  # For example:
  #
  # .. code-block:: shell
  #
  #    stringData:
  #       config.env: |-
  #         export MINIO_ROOT_USER=ROOTUSERNAME
  #         export MINIO_ROOT_PASSWORD=ROOTUSERPASSWORD

However, why did it work with v5?

@stephan2012
Author

For now, you could remove the quotes from the three variables.

Works!

Thanks for your support here.

@ramondeklein
Contributor

ramondeklein commented Aug 20, 2024

However, why did it work with v5?

The Operator and sidecar have changed significantly in v6. The change aims to cause fewer pod restarts (which introduce downtime). Previously, most environment variables were bound to the container. This is how it looked in v5:
(screenshot: environment variables set directly on the container in the v5 pod spec)

Now the variables are written to /tmp/minio/config.env, so when something changes, only this file needs updating instead of restarting all the pods. Unfortunately, that change also comes with some new bugs.

Previously, Kubernetes did all the hard work of parsing. Now we need to do it ourselves. That also explains why #2279 worked in v5 but fails in v6.
