Support the "generic" driver, for user provided VMs #4733
The main use case is when the user's development machine is unable to run the virtual machines... As described in #4730, one alternative is to run the "none" driver manually on the remote server.
If we ran the installation over SSH (just like we do with the supported VM drivers) instead of locally, it would be pretty much the same as "none" - i.e. just using the provided host. What I don't want to do is interface with the dozens of cloud providers.
I'm using Vagrant for testing the concept, but will do an example using cloud-init:
https://cloudinit.readthedocs.io/en/latest/topics/availability.html

```yaml
#cloud-config
password: passw0rd
chpasswd: {expire: False}
ssh_pwauth: False
ssh_authorized_keys:
  - ssh-rsa AAA...SDvZ user1@domain.com
```
Note to self: when using Vagrant to test, need to create a private network - 127.0.0.1 does not work:

```ruby
config.vm.network "private_network", type: "dhcp"
```

Then give the other IP to minikube, i.e. not the first one ending up at 10.0.2.15 (which is used for SSH).
Seems to be working OK for Ubuntu now. It only supports Docker though, so we should warn about that.
Need to test on CentOS as well (it should work), and warn when using 127.0.0.1 as the IP address.
Docker

As mentioned briefly above, the Docker installation is done using a single script:

```go
const DefaultEngineInstallURL = "https://get.docker.com"

func installDockerGeneric(p Provisioner, baseURL string) error {
	// install docker - until cloudinit we use ubuntu everywhere so we
	// just install it using the docker repos
	if output, err := p.SSHCommand(fmt.Sprintf("if ! type docker; then curl -sSL %s | sh -; fi", baseURL)); err != nil {
		return fmt.Errorf("error installing docker: %s", output)
	}
	return nil
}
```

That is: https://github.com/docker/docker-install/blob/master/install.sh

This will try to install using deb/rpm packages from https://download.docker.com. It is similar to the manual installation process, as detailed in the docs:
Packages:
Kubernetes

Currently we don't have a matching script for kubeadm, only manual:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Ubuntu, Debian:

```shell
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```

CentOS, Fedora:

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```

Packages:
Kubernetes also does some special configuration of Docker (not in the above):
https://kubernetes.io/docs/setup/production-environment/container-runtimes/

```shell
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
```

The current workaround is to always use the static binaries (not packages):

```go
// GetKubernetesReleaseURL gets the location of a kubernetes client
func GetKubernetesReleaseURL(binaryName, version, osName, archName string) string {
	return fmt.Sprintf("https://storage.googleapis.com/kubernetes-release/release/%s/bin/%s/%s/%s", version, osName, archName, binaryName)
}
```

And to not do any configuration of Docker, but run with the "cgroupfs" driver. Eventually it could be nice to use the official packages for the supported distros? And minikube wants to control the version of Kubernetes that it installs on each VM. Handling CRI-O etc. is going to be "interesting" (PPA etc.), so that feature will come later.
Based on the description & requirements, this should probably have "ssh" or "remote" in the driver name. Both because it requires ssh, and to disambiguate it from "none" (which is terribly named).
I don't have any plans to rename the drivers, as long as we are using docker-machine (they named them a long time ago). I think calling it "remote" is fine, since native ssh is included in the package?
Trying to come up with a matching script for CRI-O (and CRI) is proving to be troublesome. Most likely there should also be an option to use the generic static binaries from the tarball.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This is still something I would be interested in seeing in the long-term.
Some users don't want to use minikube to set up their virtual machine,
but prefer to provide their own pre-installed VM (with a standard distro).
We should support this scenario, and offer to install Kubernetes on it...
This way they can still get started locally, and also with a physical server.
These parameters are required: (defaults)
The provided user is supposed to have sudo rights, and to meet all OS requirements.
The user does not have to install docker or kubeadm; this will be done by "start":
https://github.com/docker/docker-install
https://github.com/kubernetes/kubernetes
The distribution should be one of the ones supported by machine/kubeadm:
The server also needs to meet the minimum HW requirements (2 CPUs, 2000M Memory)
See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Update: For now I think that Ubuntu 16.04 and CentOS 7 will be plenty, one deb and one rpm ?
They are also supported by all tools, and are the same platforms that kubectl/kubeadm targets:
"xenial" = Ubuntu 16.04
"el7" = RHEL/CentOS 7
Other Linux distributions could work, but that is not something that I want to support downstream.
Eventually we might support Ubuntu 18.04 and CentOS 8, but probably not until later (2020?)
Note that Fedora 28 could be used for testing, until CentOS 8 has been released. "Close enough". (See https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Relationship_with_Fedora for details)