Our servers' IPs and names, just for reference:
192.168.0.10 control01 control01.local
192.168.0.11 cube01 cube01.local
192.168.0.12 cube02 cube02.local
192.168.0.13 cube03 cube03.local
192.168.0.14 cube04 cube04.local
192.168.0.15 cube05 cube05.local
192.168.0.16 cube06 cube06.local
192.168.0.17 cube07 cube07.local
Master / Control
In our case: control01
This is our primary node.
We are going to install K3s, a Kubernetes distribution that is lightweight enough for our single-board computers to handle. Use the following command to download and initialize the K3s master node.
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --disable servicelb --token some_random_password --node-taint CriticalAddonsOnly=true:NoExecute --bind-address 192.168.0.10 --disable-cloud-controller --disable local-storage
- --write-kubeconfig-mode 644 - The file mode to use for the kubeconfig file. It's optional, but needed if you want to connect to Rancher manager later on.
- --disable servicelb - Disables the built-in service load balancer. (We will use MetalLB instead.)
- --token - The token that nodes use to join the K3s master. Choose a random password, but remember it.
- --node-taint - Adds a taint to the K3s master node. I'll explain taints later on, but this one marks the node so that it will not run any containers except critical ones.
- --bind-address - Binds the K3s master node to a specific IP address.
- --disable-cloud-controller - Disables the K3s cloud controller, which we don't need on bare metal.
- --disable local-storage - Disables the K3s local storage provisioner. I'm going to set up the Longhorn storage provider instead.
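The token passed via --token must match on the master and on every worker that joins later. Rather than inventing a password, you can generate one; a minimal sketch, assuming openssl is installed (the /tmp path is just for illustration):

```shell
# Generate a random 32-character hex token to use in place of
# "some_random_password" (assumes openssl is available).
TOKEN=$(openssl rand -hex 16)

# Keep a copy -- the workers must join with exactly the same token.
echo "$TOKEN" > /tmp/k3s_token
echo "Generated a ${#TOKEN}-character token"
```

Substitute the generated value for `some_random_password` in the install command above and in the worker join command below.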
We can look at Kubernetes nodes by using the following command:
root@control01:~# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
control01   Ready    control-plane,master   23s   v1.23.6+k3s1
We need to join some workers now; in our case, cube01 to cube07. We are going to execute the join command on each node with Ansible.
We are not moving away from the master node; we do everything from there.
ansible workers -b -m shell -a "curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.10:6443 K3S_TOKEN=some_random_password sh -"
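The command above assumes an Ansible inventory with a `workers` group (and a `cube` group covering every node, which is used again further down). A minimal sketch of such an inventory, written to a temp file here for illustration; the group names and layout are this article's convention, not something Ansible requires:

```shell
# Sketch of the Ansible inventory assumed in this article.
# The "workers" and "cube" group names match the ansible commands used here.
cat > /tmp/hosts.ini <<'EOF'
[control]
control01 ansible_host=192.168.0.10

[workers]
cube01 ansible_host=192.168.0.11
cube02 ansible_host=192.168.0.12
cube03 ansible_host=192.168.0.13
cube04 ansible_host=192.168.0.14
cube05 ansible_host=192.168.0.15
cube06 ansible_host=192.168.0.16
cube07 ansible_host=192.168.0.17

[cube:children]
control
workers
EOF
```

With this layout, `ansible workers …` targets only the worker nodes, while `ansible cube …` targets every node in the cluster.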
Now give it a few moments to join the cluster. You can watch the progress by using the following command:
watch kubectl get nodes # to quit watch use Ctrl+C
In the end it should look like this:
root@control01:~# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
cube03      Ready    <none>                 71s     v1.23.6+k3s1
cube04      Ready    <none>                 72s     v1.23.6+k3s1
cube02      Ready    <none>                 61s     v1.23.6+k3s1
cube01      Ready    <none>                 59s     v1.23.6+k3s1
cube05      Ready    <none>                 56s     v1.23.6+k3s1
control01   Ready    control-plane,master   3m45s   v1.23.6+k3s1
cube07      Ready    <none>                 38s     v1.23.6+k3s1
cube06      Ready    <none>                 31s     v1.23.6+k3s1
The displayed order does not matter.
Tag our cluster nodes
K3s by default allows pods to run on the control plane, which can be OK, but in production it would not be. In our case, we already tainted the master node when we installed K3s. I still want a bit more control over where workloads are deployed, and it's also good to know how it's done. So, we will be using labels to tell pods/deployments where to run.
Let's add the label kubernetes.io/role=worker to the worker nodes. This is mostly cosmetic, to get nice output from kubectl get nodes.
kubectl label nodes cube01 kubernetes.io/role=worker
kubectl label nodes cube02 kubernetes.io/role=worker
kubectl label nodes cube03 kubernetes.io/role=worker
kubectl label nodes cube04 kubernetes.io/role=worker
kubectl label nodes cube05 kubernetes.io/role=worker
kubectl label nodes cube06 kubernetes.io/role=worker
kubectl label nodes cube07 kubernetes.io/role=worker
Another label/tag. I will use this one to tell deployments to prefer worker nodes. The node-type key is our chosen name; you can call it whatever you like.
kubectl label nodes cube01 node-type=worker
kubectl label nodes cube02 node-type=worker
kubectl label nodes cube03 node-type=worker
kubectl label nodes cube04 node-type=worker
kubectl label nodes cube05 node-type=worker
kubectl label nodes cube06 node-type=worker
kubectl label nodes cube07 node-type=worker
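With the node-type label in place, a deployment can be pinned to the worker nodes using a nodeSelector. A minimal sketch follows; the deployment name and image are placeholders, and the manifest is only written to a temp file here rather than applied:

```shell
# Write a sketch deployment that schedules only onto nodes labelled
# node-type=worker (name and image are placeholders).
cat > /tmp/nginx-on-workers.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-on-workers
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-on-workers
  template:
    metadata:
      labels:
        app: nginx-on-workers
    spec:
      nodeSelector:
        node-type: worker      # matches the label we just added
      containers:
      - name: nginx
        image: nginx:alpine
EOF
```

To actually deploy it, run `kubectl apply -f /tmp/nginx-on-workers.yaml`; the scheduler will then place the pods only on nodes carrying that label.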
The whole Kubernetes cluster now looks like this:
root@control01:~# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
cube02      Ready    worker                 15d   v1.23.6+k3s1
cube01      Ready    worker                 15d   v1.23.6+k3s1
cube03      Ready    worker                 15d   v1.23.6+k3s1
cube07      Ready    worker                 15d   v1.23.6+k3s1
cube04      Ready    worker                 15d   v1.23.6+k3s1
cube06      Ready    worker                 15d   v1.23.6+k3s1
cube05      Ready    worker                 15d   v1.23.6+k3s1
control01   Ready    control-plane,master   15d   v1.23.6+k3s1
You can also use kubectl get nodes --show-labels to show all labels for the nodes.
And for taints, we can use the following to show all taints per node:
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers
root@control01:~# kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers
cube03      <none>
cube04      <none>
cube07      <none>
cube02      <none>
cube01      <none>
cube06      <none>
cube05      <none>
control01   [map[effect:NoExecute key:CriticalAddonsOnly value:true]]
Lastly, add the following to /etc/environment (this is so Helm and other programs know where the Kubernetes config is).
On every node:
echo "KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> /etc/environment
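Note that `>>` appends a duplicate line every time you re-run it. A slightly more careful sketch only appends when the line is missing; it is shown here against a temp file for illustration, on the nodes the path would be /etc/environment:

```shell
# Append KUBECONFIG only if it is not already present (idempotent).
# A temp file is used here for illustration; use /etc/environment on the nodes.
ENV_FILE=/tmp/environment
LINE='KUBECONFIG=/etc/rancher/k3s/k3s.yaml'
touch "$ENV_FILE"
grep -qxF "$LINE" "$ENV_FILE" || echo "$LINE" >> "$ENV_FILE"
grep -qxF "$LINE" "$ENV_FILE" || echo "$LINE" >> "$ENV_FILE"   # second run adds nothing
```

This mirrors what the Ansible lineinfile module below does for you automatically.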
Or use Ansible:
ansible cube -b -m lineinfile -a "path='/etc/environment' line='KUBECONFIG=/etc/rancher/k3s/k3s.yaml'"
There are other options to deploy K3s. For example, Ansible can deploy everything (though the result might not be exactly the same as mine); for inspiration, check out this git repo: https://github.com/k3s-io/k3s-ansible
Another solution could be GitOps, managing the infrastructure as code using Flux 2 (or an alternative). I might do a separate article on how to set this up, but in the meantime you can have a look at https://github.com/k8s-at-home/awesome-home-kubernetes for some more inspiration 🙂