Project Octopus

Project details

The vSphere cloud provider in Kubernetes is designed to work only if all nodes of the cluster are in a single datacenter folder. This hard restriction prevents the cluster from spanning multiple folders, datacenters, or vCenter servers (VCs), which you may want to do for high availability.

Please see https://github.com/vmware/kubernetes/issues/255 for more details.

Deployment details

Deploy the Kubernetes cluster using the hyperkube image hosted on the cnastorage Docker Hub repository.

Docker Hub image: "docker.io/cnastorage/hyperkube-amd64:v1.9.0-alpha"
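
As a quick sanity check, the image can be pulled directly on a node (this assumes Docker is the container runtime):

    docker pull docker.io/cnastorage/hyperkube-amd64:v1.9.0-alpha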

The CNA storage team is working on fixing kubeadm, which will simplify Kubernetes deployment with a custom hyperkube image and VCP enabled from the start. For now, the manual steps below are needed.

Steps to enable VCP (Project Octopus) on an already deployed Kubernetes cluster

Assumptions: the user has deployed a Kubernetes cluster with the above hyperkube image, using any deployment method (preferably kubeadm).

Step-1 Create VCP user and vSphere entities

https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html#create-roles-add-privileges-to-roles-and-assign-them-to-the-vsphere-cloud-provider-user-and-vsphere-entities

In addition to, or in substitution of, those privileges, the VCP user needs the following (see the govc sketch after this list):

  • Read access on the parent entities of the node VMs, such as the folder, host, and datacenter.
  • VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete on the working directory, used to create and delete dummy VMs.
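
The grants can be made through the vSphere Web Client or scripted with govc. The sketch below assumes govc is configured via GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD; the role name, principal, and inventory paths are placeholders matching the sample configuration in Step-2 and should be adjusted to your environment.

    # Custom role carrying only the VM create/delete privileges (placeholder name).
    govc role.create k8s-vcp-vm-inventory \
        VirtualMachine.Inventory.Create VirtualMachine.Inventory.Delete
    # Grant it on the working directory (the VM folder from the [Workspace] section).
    govc permissions.set -principal 'k8s-vcp@vsphere.local' \
        -role k8s-vcp-vm-inventory /vcqaDC/vm/kubernetes
    # Read-only, non-propagating access on parent entities of the node VMs.
    govc permissions.set -principal 'k8s-vcp@vsphere.local' \
        -role ReadOnly -propagate=false /vcqaDC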

Step-2 Create vSphere.conf file on master node

Create the file /etc/kubernetes/vsphere.conf on the master node. Below is a sample configuration file:

[Global]
        user = "administrator@vsphere.local"
        password = "Admin!23"
        port = "443"
        insecure-flag = "1"
        datacenters = "vcqaDC"

[VirtualCenter "10.161.83.92"]
[VirtualCenter "10.161.86.47"]

[Workspace]
        server = "10.161.83.92"
        datacenter = "vcqaDC"
        folder = "kubernetes"
        default-datastore = "sharedVmfs-2"
        resourcepool-path = "cls"

[Disk]
        scsicontrollertype = pvscsi

[Network]
        public-network = "VM Network"

Step-3 Update master node component configs

Add the following flags to the kubelet running on the master node and to the controller-manager and API server pod manifest files:

--cloud-provider=vsphere --cloud-config=/etc/kubernetes/vsphere.conf

Reload the kubelet systemd unit files using **systemctl daemon-reload**
Stop the kubelet service using **systemctl stop kubelet.service**
Kill all the Kubernetes system containers
Start the kubelet service using **systemctl start kubelet.service**
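
On a Docker-based master node, the restart sequence might look like the sketch below; removing the k8s_-prefixed containers assumes the default kubelet container naming and forces the static pods to be recreated from the updated manifests.

    systemctl daemon-reload
    systemctl stop kubelet.service
    # Remove the Kubernetes system containers (default k8s_ name prefix assumed)
    # so they come back with the new cloud-provider flags.
    docker rm -f $(docker ps -aq --filter "name=k8s_")
    systemctl start kubelet.service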

Step-4 Update kubelet service config file on worker nodes

Add the following flag to the kubelet running on the worker nodes:

--cloud-provider=vsphere

Reload the kubelet systemd unit files using **systemctl daemon-reload**
Restart the kubelet service using **systemctl restart kubelet.service**
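
On a kubeadm-provisioned worker node, one hedged way to add the flag is a systemd drop-in; the file name 20-cloud-provider.conf is arbitrary, and KUBELET_EXTRA_ARGS is assumed to be referenced by the kubelet unit (as it is in kubeadm's default drop-in).

    mkdir -p /etc/systemd/system/kubelet.service.d
    # Hypothetical drop-in file name; adjust if your deployment already manages
    # kubelet flags elsewhere.
    printf '%s\n' '[Service]' 'Environment="KUBELET_EXTRA_ARGS=--cloud-provider=vsphere"' \
        > /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf
    systemctl daemon-reload
    systemctl restart kubelet.service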

Note: the cloud provider config file is not needed on the worker nodes, but each worker node needs to be aware of the cloud provider. That is why only the --cloud-provider flag is added and not the --cloud-config flag.

VCP with the Project Octopus changes should now be running.
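
One quick way to sanity-check the result (assuming kubectl access to the cluster): nodes registered through a cloud provider carry a providerID, which for VCP should be a vsphere:// URI.

    # List each node name together with its providerID.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'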