A project about building a simple Kubernetes cluster on ARM nodes.
- Specifications
- Installing Operating Systems
- Deploying K3S with Ansible
- How to enable Kubernetes Dashboard
- Video blog about this project
- Links
Communication between nodes and controller:

Controller:
- 1x Ethernet cable (with four copper pairs)
- 1x USB Type-C cable
- 1x power supply adapter 5V/2A for USB devices
- 1x Raspberry Pi 4B (CPU: quad-core, RAM: 8 GB)
- 1x MicroSD card 32 GB (Samsung, EVO series)

Nodes:
- 4x Ethernet cable (with four copper pairs)
- 4x USB Type-C cable
- 4x power supply adapter 5V/2A for USB devices
- 4x NanoPi NEO3 (CPU: quad-core, RAM: 2 GB)
- 4x MicroSD card 32 GB (class 10)

TuringPi cluster:
- 2x Ethernet cable (with four copper pairs)
- 1x power supply adapter 12V/5A for the TuringPi board
- 1x TuringPi V1 cluster board
- 7x Raspberry Pi CM3 Lite (CPU: quad-core, RAM: 1 GB)
- 7x MicroSD card 32 GB (class 10)
- The operating system on all devices is Ubuntu 20.04 LTS (or newer) for ARM64 CPUs;
- The controller should run `k3s server`;
- All nodes should run `k3s agent`;
- All agents and the controller should run inside Docker containers, for example via docker-compose;
- Deployment of `docker-compose.yml` and other related files should be done via Ansible.
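For the controller, a docker-compose file could look roughly like the sketch below, modeled on the upstream k3s docker-compose example. The image tag, token, and output path are illustrative assumptions, not the project's actual file:

```yaml
# Hedged sketch of a docker-compose.yml for the k3s server (controller).
# Image tag, token, and paths are illustrative assumptions.
services:
  k3s-server:
    image: rancher/k3s:v1.22.11-k3s2
    command: server
    privileged: true                    # k3s needs extended privileges inside Docker
    environment:
      - K3S_TOKEN=change-me             # shared secret the agents join with
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - ./output:/output                # kubeconfig appears here on the host
    ports:
      - "6443:6443"                     # Kubernetes API
```

An agent container would use the same image with `command: agent`, plus `K3S_URL` and `K3S_TOKEN` pointing at the controller.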
Ubuntu 20.04 LTS Focal Fossa (or newer) will be used on the MicroSD cards.
First, download the Armbian_21.08.1_Nanopineo3_focal_current_5.10.60.img.xz archive from:
https://armbian.hosthatch.com/archive/nanopineo3/archive/
Then extract the archive:
xz -d Armbian_21.08.1_Nanopineo3_focal_current_5.10.60.img.xz
Then connect the MicroSD card and check the device name:
~$ sudo fdisk -l | grep 'model: Micro' -A 10 -B 2
Disk /dev/sdX: 29,81 GiB, 32010928128 bytes, 62521344 sectors
Disk model: Micro SD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000000
`/dev/sdX` is the device name of the connected MicroSD card; yours will probably be different.
Flash the image to this card (writing to the raw device requires root):
sudo dd if=Armbian_21.08.1_Nanopineo3_focal_current_5.10.60.img of=/dev/sdX bs=1024k status=progress
Then wait some time.
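How long "some time" is can be roughly estimated from the image size and the card's sustained write speed; both numbers below are rough assumptions, not measurements:

```shell
# Rough flash-time estimate; both inputs are illustrative assumptions.
image_mb=3200        # size of the extracted .img in MB (assumption)
speed_mb_s=15        # sustained write speed of a class 10 card, MB/s (assumption)
eta_s=$(( image_mb / speed_mb_s ))
echo "estimated flash time: ~${eta_s}s"
```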
By default, the created partition is only about 800 MB; use `gparted` to expand it to the card's full 32 GB.
After flashing, connect the card to the NanoPi NEO3 and turn the device on.
Within a few seconds the device will receive an IP address if you have a DHCP server in your network; then try to connect via SSH.
- Username: `root`
- Password: `1234`
After the first login, the NanoPi will prompt you for a new password and a few other details.
The NanoPi is ready for use, congrats :)
Ubuntu 20.04 LTS Focal Fossa (or newer) will be used on the MicroSD cards.
Install the rpi-imager tool:
https://github.com/raspberrypi/rpi-imager
sudo apt-get install rpi-imager
Then run rpi-imager, choose Ubuntu Server from the Other general-purpose OS section, and select the USB flash drive.
You may also preconfigure hostnames, the default login/password, and an SSH key in the rpi-imager settings; just click the cog icon.
When you are ready, click the Write button and wait some time.
This part describes installing additional tools on the controller and nodes, plus deploying the docker-compose configs to the machines.
First, install the Ansible tool via the package manager:
sudo apt-get install ansible
All commands below are executed in the `ansible` subfolder:
cd ansible
All hosts and groups are described in the `inventory` file; copy it from the example, then change it to match your hosts:
cp inventory.dist inventory
mcedit inventory
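The resulting inventory might look like the sketch below; group names, hostnames, and IP addresses here are assumptions for illustration only, so take the real layout from `inventory.dist`:

```ini
; Illustrative sketch; group names and addresses are assumptions,
; copy the actual structure from inventory.dist.
[controller]
k8s-controller ansible_host=192.168.1.200

[nodes]
k8s-node1 ansible_host=192.168.1.201
k8s-node2 ansible_host=192.168.1.202
```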
You also need to set a username with sudo permissions:
cp vars/user.dist.yml vars/user.yml
mcedit vars/user.yml
Don't forget to change the path to your `id_rsa.pub` public key file.
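The resulting `vars/user.yml` might look like the sketch below; the key names are assumptions, so mirror whatever `vars/user.dist.yml` actually defines:

```yaml
# Illustrative sketch; key names are assumptions -- follow vars/user.dist.yml.
username: ubuntu                  # user with sudo permissions on the machines
public_key: ~/.ssh/id_rsa.pub     # path to your SSH public key
```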
playbook-default.yml
This is the default playbook; the steps described in this file are executed on all machines of the cluster and require sudo privileges.
ansible-playbook -i inventory playbook-default.yml --ask-become-pass
playbook-controller.yml
The K3S server is the core of our project; it executes the management operations.
The second service here is the Rancher web interface.
ansible-playbook -i inventory playbook-controller.yml
After executing this command, the Kubernetes cluster API will be available at https://192.168.1.200:6443
playbook-node.yml
Now we just need to add the nodes; they will join the cluster automatically:
ansible-playbook -i inventory playbook-node.yml
The installation instructions for your OS are here.
Change to the `~/.kube` directory and download the kubeconfig from k8s-controller:
[ -d ~/.kube ] || mkdir ~/.kube
cd ~/.kube
cp config config.bak
scp k8s-controller:/home/pasha/k3s-controller/output/kubeconfig.yaml config
Test installation:
kubectl get nodes
The result will look something like this:
NAME             STATUS   ROLES                  AGE   VERSION
k8s-node4        Ready    <none>                 31m   v1.22.11+k3s2
k8s-node2        Ready    <none>                 31m   v1.22.11+k3s2
k8s-node3        Ready    <none>                 31m   v1.22.11+k3s2
k8s-node1        Ready    <none>                 31m   v1.22.11+k3s2
k8s-node6        Ready    <none>                 31m   v1.22.11+k3s2
k8s-node7        Ready    <none>                 30m   v1.22.11+k3s2
k8s-node5        Ready    <none>                 31m   v1.22.11+k3s2
k8s-controller   Ready    control-plane,master   32m   v1.22.11+k3s2
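A quick way to confirm that every machine joined is to count the nodes in the Ready state. The helper below is a small sketch that filters the output on the STATUS column:

```shell
# Sketch: count nodes whose STATUS column is exactly "Ready".
count_ready() {
  awk '$2 == "Ready"' | wc -l
}

# Usage (requires a working kubeconfig):
# kubectl get nodes --no-headers | count_ready
```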
Go to the folder with the dashboard installation scripts:
cd k8s-dashboard
Run `install.sh` and wait a couple of minutes; after the script is done, run the following command:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 9090:80
Then open http://localhost:9090 in browser.
All videos are in Russian.
- Introduction and technical description
- Installing operating systems
- Deploying K3S with Ansible
- Moving controller to Raspberry Pi 4B
- Kubernetes Dashboard and GoCD server with agents in Kubernetes
- TBA...