Support the "generic" driver, for user provided VMs #4733

Closed
afbjorklund opened this issue Jul 10, 2019 · 15 comments · Fixed by #10099
Labels: co/generic-driver, help wanted, kind/feature, lifecycle/frozen, priority/backlog

@afbjorklund
Collaborator

afbjorklund commented Jul 10, 2019

Some users don't want to use minikube to set up their virtual machine,
but prefer to provide their own pre-installed VM (with a standard distro).

We should support this scenario, and offer to install Kubernetes on it...
That way they can still get started locally, and the same approach would also work with a physical server.

These parameters are required (defaults in parentheses):

  • IP Address
  • SSH User (root)
  • SSH Key
  • SSH Port (22)

The provided user is expected to have sudo rights, and the host is expected to meet all OS requirements.

The user does not have to install docker or kubeadm; this will be done by "start".
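
For example, using docker-machine's existing generic driver flag names (a sketch only, the exact minikube flags are still to be decided):

$ minikube start --vm-driver generic --generic-ip-address <ip> \
                 --generic-ssh-user root --generic-ssh-key ~/.ssh/id_rsa \
                 --generic-ssh-port 22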

The distribution should be one of the ones supported by machine/kubeadm:

  • Ubuntu 16.04+
  • Debian 9
  • CentOS 7+
  • RHEL 7

The server also needs to meet the minimum HW requirements (2 CPUs, 2000MB memory).

See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/


Update: For now I think that Ubuntu 16.04 and CentOS 7 will be plenty, one deb and one rpm ?

They are also supported by all tools, and are the same platforms that kubectl/kubeadm targets:

"xenial" = Ubuntu 16.04

deb https://apt.kubernetes.io/ kubernetes-xenial main

"el7" = RHEL/CentOS 7

https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

Other Linux distributions could work, but that is not something that I want to support downstream.

Eventually we might support Ubuntu 18.04 and CentOS 8, but probably not until later (2020?)

Note that Fedora 28 could be used for testing, until CentOS 8 has been released. "Close enough".
(See https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Relationship_with_Fedora for details)

  • Ubuntu 16.04 LTS (xenial)
  • Ubuntu 18.04 LTS (bionic)
  • CentOS 7 / Fedora 18
  • CentOS 8 / Fedora 28
@afbjorklund
Collaborator Author

afbjorklund commented Jul 11, 2019

The main use case is when the user's development machine is unable to run virtual machines...
Either because it is itself virtualized (without nested capability), or because it is out of resources.

As described in #4730, one alternative is to run the "none" driver manually on the remote server.
But that leaves the user to set up the ssh keys and the kubectl configuration by hand.

In order to get a kubectl on some other computer (e.g. laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your control-plane node to your workstation like this:

  scp root@<master ip>:/etc/kubernetes/admin.conf .
  kubectl --kubeconfig ./admin.conf get nodes

If we ran the installation over ssh (just like we do with the supported VM drivers) instead of locally,
then we could also do the normal configuration of the local environment so that it "just works".

Something like:

$ minikube start --vm-driver generic --generic-ip-address 192.168.99.100 \
                 --generic-ssh-user root --generic-ssh-key $HOME/.ssh/id_rsa

Other than that, it would be pretty much the same as "none" - i.e. just using the provided host.
So the linux distribution and hardware specifications are what they are, nothing from minikube.

What I don't want to do is interface with the dozens of cloud providers that machine does.
If your proper cloud provider does not support Kubernetes today, maybe it is time to switch ?

@afbjorklund
Collaborator Author

I'm using Vagrant for testing the concept, but will do an example using cloud-init as well.

vagrant ssh-config (normally this will generate a key, later on)

Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/anders/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
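
Plugged into the proposed flags, that ssh-config would translate to something like this (a sketch, and note the caveat below about 127.0.0.1 not working):

$ minikube start --vm-driver generic --generic-ip-address 127.0.0.1 \
                 --generic-ssh-port 2222 --generic-ssh-user vagrant \
                 --generic-ssh-key ~/.vagrant.d/insecure_private_key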

https://cloudinit.readthedocs.io/en/latest/topics/availability.html

#cloud-config
password: passw0rd
chpasswd: {expire: False}
ssh_pwauth: False
ssh_authorized_keys:
  - ssh-rsa AAA...SDvZ user1@domain.com
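
As a sketch of how such a cloud-config could be used with a stock cloud image (assuming cloud-localds from cloud-image-utils, and QEMU as just one example):

cloud-localds seed.img cloud-config.yaml   # build a NoCloud seed from the file above
qemu-system-x86_64 -m 2048 -smp 2 \
  -drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio \
  -drive file=seed.img,if=virtio,format=raw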

@afbjorklund afbjorklund self-assigned this Jul 13, 2019
@afbjorklund
Collaborator Author

Note to self: when using vagrant to test, you need to create a private network - 127.0.0.1 does not work.

  config.vm.network "private_network", type: "dhcp"

Then give the other IP to minikube, i.e. not the first one, which ends up at 10.0.2.15 (and is used for SSH):

enp0s3    Link encap:Ethernet  HWaddr 02:26:20:ce:de:47  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::26:20ff:fece:de47/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:187617 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54817 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:269903531 (269.9 MB)  TX bytes:3604630 (3.6 MB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:7a:3c:ea  
          inet addr:172.28.128.4  Bcast:172.28.128.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe7a:3cea/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:214187 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11228 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:317792858 (317.7 MB)  TX bytes:1647745 (1.6 MB)
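
The address to hand over can be picked out from inside the guest, e.g.:

$ vagrant ssh -c "hostname -I"
10.0.2.15 172.28.128.4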

@afbjorklund
Collaborator Author

Seems to be working OK for Ubuntu now. Only supports Docker though, should warn about that.

@tstromberg tstromberg added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jul 16, 2019
@afbjorklund
Collaborator Author

Need to test on CentOS as well (should work), and warn when using 127.0.0.1 as the IP address.

@afbjorklund afbjorklund added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 17, 2019
@afbjorklund
Collaborator Author

Docker

As mentioned briefly above, the Docker installation is done using a single script:

const DefaultEngineInstallURL = "https://get.docker.com"

func installDockerGeneric(p Provisioner, baseURL string) error {
	// install docker - until cloudinit we use ubuntu everywhere so we
	// just install it using the docker repos
	if output, err := p.SSHCommand(fmt.Sprintf("if ! type docker; then curl -sSL %s | sh -; fi", baseURL)); err != nil {
		return fmt.Errorf("error installing docker: %s", output)
	}

	return nil
}

That is: https://github.com/docker/docker-install/blob/master/install.sh

This will try to install using deb/rpm packages from https://download.docker.com

It is similar to the manual installation process, as detailed in the docs:

Packages:

  • docker-ce
  • docker-ce-cli
  • containerd.io
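
On Ubuntu, the script boils down to roughly the same steps as the documented manual install (a condensed sketch):

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io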

Kubernetes

Currently we don't have a matching script for kubeadm, only the manual steps:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Ubuntu, Debian:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

CentOS, Fedora:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Packages:

  • kubeadm
  • kubelet
  • kubectl
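
After the repository setup above, the installation itself is the usual one (from the same kubeadm docs):

apt-get update && apt-get install -y kubelet kubeadm kubectl   # Ubuntu, Debian
yum install -y kubelet kubeadm kubectl                         # CentOS, Fedora
systemctl enable --now kubelet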

Kubernetes also does some special configuration of Docker (not included in the above):

https://kubernetes.io/docs/setup/production-environment/container-runtimes/

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

The current workaround is to always use the static binaries (not packages).

// GetKubernetesReleaseURL gets the location of a kubernetes client
func GetKubernetesReleaseURL(binaryName, version, osName, archName string) string {
	return fmt.Sprintf("https://storage.googleapis.com/kubernetes-release/release/%s/bin/%s/%s/%s", version, osName, archName, binaryName)
}
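
For example, for kubeadm on linux/amd64 (version number just illustrative) this yields:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubeadm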

And not to do any configuration of Docker, but to run with the "cgroupfs" driver.
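
Which cgroup driver docker actually ended up with can be checked like this:

$ docker info --format '{{.CgroupDriver}}'
cgroupfs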


Eventually it could be nice to use the official packages for the supported distros ?
But right now it is not important, since they have the same binary inside them anyway...

And minikube wants to control the version of kubernetes that it installs on each VM.
So we let libmachine handle docker provisioning, and bootstrapper handle kubernetes.

Handling CRI-O etc. is going to be "interesting" (PPA etc), so that feature will come later.
Basically it's the same overall concept as above, but not included in (docker) machine.

The generic driver is otherwise very similar to the none driver, except that it provides ssh.
While the "none" driver only has a docker host, the "generic" driver does have a real host.

@tstromberg
Contributor

Based on the description & requirements, this should probably have "ssh" or "remote" in the driver name. Both because it requires ssh, and to disambiguate it from "none" (which is terribly named).

@afbjorklund
Collaborator Author

I don't have any plans to rename the drivers, as long as we are using docker machine (they named them a long time ago). I think calling it "remote" is fine, since native ssh is included in the package ?

@afbjorklund
Collaborator Author

Note that none is named that way because it doesn't have a docker machine connected to it (at all), only a docker host (a URL). It is minikube that has chosen to interpret this machine driver as "local".

@afbjorklund
Collaborator Author

Trying to come up with a matching script for CRI-O (and CRI) is proving to be troublesome.
The current packages all have known issues, so you are usually left building from source...

Most likely there should also be an option to use the generic static binaries from the tarball.
This is how we are currently installing docker; the others we are currently building from source.

  1. https://download.docker.com/linux/static/stable/x86_64/

  2. https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso/package/
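
Unpacking the static docker binaries from such a tarball would be something like (version number just illustrative):

curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-18.09.7.tgz -o docker.tgz
tar xzf docker.tgz --strip-components=1 -C /usr/local/bin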

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 1, 2020
@afbjorklund
Collaborator Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 3, 2020
@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 2, 2020
@tstromberg tstromberg added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jun 24, 2020
@tstromberg
Contributor

This is still something I would be interested in seeing in the long-term.
