
Katacoda is using a very old minikube v1.18 #15097

Closed
medyagh opened this issue Oct 10, 2022 · 12 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

medyagh (Member) commented Oct 10, 2022

What Happened?

The Katacoda environment on the Kubernetes website is using a very old minikube (v1.18) and needs to be updated.
If anyone can give a hand in figuring out how to update it, this task is available to be picked up.

https://kubernetes.io/docs/tutorials/hello-minikube/

medyagh added the priority/important-soon label Oct 10, 2022
nikitar commented Dec 7, 2022

Would this be covered by #37817? I presume any k8s upgrade would also include a minikube upgrade.

afbjorklund (Collaborator) commented Dec 7, 2022

> I presume any k8s upgrade would also include a minikube upgrade.

They are released independently; Kubernetes doesn't even test with minikube (or with Docker) anymore.

nikitar commented Dec 7, 2022

What I meant is: since most k8s live tutorials use minikube as well, it'd make sense to upgrade them together in the Katacoda image (assuming there's anyone at Katacoda/O'Reilly who could still do the upgrade).

afbjorklund (Collaborator) commented

Sure, it is the same task.

I still think it would be a good idea to have a solution that could run both in a cloud provider and on your own laptop. There is also room for alternative scenarios such as multi-node, as a step two, especially now that two node containers can share one VM.

afbjorklund (Collaborator) commented Dec 12, 2022

Except for some quirks, this is an approximation of an updated web UI:

[screenshot: browser-minikube]

Compared with the one that is currently in the release notes/tutorials:

[screenshot: hello-minikube]

afbjorklund (Collaborator) commented Dec 12, 2022

As mentioned before, it would be nice to get rid of all the misleading spam from the output:

❗  Using the 'containerd' runtime with the 'none' driver is an untested configuration!
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative

config.WantNoneDriverWarning


❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
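
That particular warning can at least be silenced through minikube's own settings; a sketch, assuming the config.WantNoneDriverWarning key named above is still settable via minikube config:

$ minikube config set WantNoneDriverWarning false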

redundant commands (HOME=/home/anders.linux)

❗  kubectl and minikube configuration will be stored in /home/anders.linux
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /home/anders.linux/.kube /home/anders.linux/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
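
For reference, a minimal sketch of that suggestion (the driver and runtime are taken from this scenario):

$ export CHANGE_MINIKUBE_NONE_USER=true
$ minikube start --driver=none --container-runtime=containerd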

One improvement over the previous "none" version is that it no longer needs to run as root.

The new version of minikube uses sudo where needed, instead of running everything as root.

Running with the "docker" driver is a more advanced scenario, but good for multi-node?

Assuming that it is OK to overprovision; otherwise one might as well start up two node VMs.
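
A minimal sketch of that multi-node scenario (the node count of two is just an example):

$ minikube start --driver=docker --nodes=2
$ kubectl get nodes   # two nodes, minikube and minikube-m02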

afbjorklund (Collaborator) commented Dec 12, 2022

Here are the issues that you run into when you try to run the "none" driver (in lima); a consolidated sketch of the fixes follows the list:

  • minikube_1.28.0-0_amd64.deb
❌  Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.25.3 requires conntrack to be installed in root's path

Fix: sudo apt install conntrack

🤦  StartHost failed, but will try again: creating host: create: precreate: exec: "docker": executable file not found in $PATH

Workaround: sudo ln -s nerdctl /usr/local/bin/docker

❌  Exiting due to RUNTIME_ENABLE: update sandbox_image: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml": exit status 2
stdout:

stderr:
sed: can't read /etc/containerd/config.toml: No such file or directory

Fix: sudo mkdir -p /etc/containerd && sudo containerd config dump | sudo tee /etc/containerd/config.toml

❌  Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found

Workaround: https://github.com/kubernetes-sigs/cri-tools/releases/tag/v1.25.0

  • crictl-v1.25.0-linux-amd64.tar.gz
💢  initialization failed, will try again: apply cni: cni apply: copy: chmod /etc/cni/net.d/1-k8s.conflist: permission denied

Fix: sudo chmod 755 /etc/cni /etc/cni/net.d

"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec81ef612493011775ba2ec285fc375f79fa2e6671e8d6f2b6fcb47c5652130\": plugin type=\"loopback\" failed (add): failed to find plugin \"loopback\" in path [/opt/cni/bin]"

(there are some plugins for nerdctl, in /usr/local/libexec/cni)

Workaround: https://github.com/containernetworking/plugins/releases/tag/v1.1.1

  • cni-plugins-linux-amd64-v1.1.1.tgz
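
Taken together, the fixes and workarounds could be applied up front; a sketch for an Ubuntu guest, with download URLs assembled from the release tags and file names above:

# kubeadm preflight wants conntrack in root's path
sudo apt-get install -y conntrack

# minikube expects a "docker" binary even with containerd; point it at nerdctl
sudo ln -s nerdctl /usr/local/bin/docker

# generate a default containerd config, so that minikube can patch sandbox_image
sudo mkdir -p /etc/containerd
sudo containerd config dump | sudo tee /etc/containerd/config.toml

# crictl for the container runtime checks
curl -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz
sudo tar -C /usr/local/bin -xzf crictl-v1.25.0-linux-amd64.tar.gz

# make the CNI config directory traversable before minikube writes to it
sudo mkdir -p /etc/cni/net.d
sudo chmod 755 /etc/cni /etc/cni/net.d

# standard CNI plugins (loopback etc.) in the default /opt/cni/bin search path
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v1.1.1.tgz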

afbjorklund (Collaborator) commented Dec 12, 2022

Happy face, with all issues above fixed:

$ limactl start
😄  minikube v1.28.0 on Ubuntu 22.04 (kvm/amd64)
✨  Using the none driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=4, Memory=3919MB, Disk=99053MB) ...
ℹ️  OS release is Ubuntu 22.04.1 LTS
📦  Preparing Kubernetes v1.25.3 on containerd 1.6.8 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🤹  Configuring local host environment ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Still need --preload=false, if not using the ISO.

This would also be a nice improvement:

sudo ln -s minikube /usr/bin/kubectl
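
Put together, the start command behind the log above would be something like this sketch (--preload=false per the note; minikube runs its bundled kubectl when invoked by that name):

$ minikube start --driver=none --container-runtime=containerd --preload=false
$ sudo ln -s minikube /usr/bin/kubectl
$ kubectl get pods -A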


Using the docker runtime would look similar.

$ limactl start --name default template://docker-rootful
😄  minikube v1.28.0 on Ubuntu 22.04 (kvm/amd64)
✨  Using the none driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=4, Memory=3919MB, Disk=99053MB) ...
ℹ️  OS release is Ubuntu 22.04.1 LTS
🐳  Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🤹  Configuring local host environment ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
🐳  Exiting due to NOT_FOUND_CRI_DOCKERD: 

Workaround: https://github.com/Mirantis/cri-dockerd/releases/tag/v0.2.6

  • cri-dockerd_0.2.6.3-0.ubuntu-jammy_amd64.deb
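
A sketch of installing that workaround, with the URL assembled from the release tag and package name above:

$ curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd_0.2.6.3-0.ubuntu-jammy_amd64.deb
$ sudo apt-get install -y ./cri-dockerd_0.2.6.3-0.ubuntu-jammy_amd64.deb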

afbjorklund added the help wanted label Dec 12, 2022
afbjorklund (Collaborator) commented Dec 30, 2022

I think that SIG Docs is still trying to find alternative hosting, before Katacoda shuts down (tomorrow).

It is not clear if there will be a new deployment of the old version (1.18) or of the new version (1.28)?

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

There is support for Kubernetes in Killercoda, but I don't think that it is using minikube? (just kubeadm)

It does upgrade the OS version from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS though; always something.

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:49:09Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

afbjorklund (Collaborator) commented Jan 2, 2023

Made separate issues for the output spam:

See #15097 (comment)

It will still complain about "none", and about containerd.

But those can be worked around or fixed, and are not bugs.

❗ The 'none' driver is designed for experts who need to integrate with an existing VM

❗ Using the 'containerd' runtime with the 'none' driver is an untested configuration!

Especially ironic, since containerd was the only runtime for 1.26.0.

The support for Docker and for CRI-O was a bit late, and didn't arrive until some weeks later.

But it is true that it is still buggy in minikube, due to it requiring a docker binary since forever.


afbjorklund removed the priority/important-soon label Aug 10, 2024
afbjorklund closed this as not planned Aug 10, 2024