
Control plane

Our servers, just for reference:

ubuntu@control01:~$ cat /etc/hosts
127.0.0.1 localhost

192.168.0.101 control01 control01.local
192.168.0.109 control02 control02.local
192.168.0.108 control03 control03.local

192.168.0.102 cube01 cube01.local
192.168.0.103 cube02 cube02.local
192.168.0.104 cube03 cube03.local
192.168.0.105 cube04 cube04.local
192.168.0.106 cube05 cube05.local
192.168.0.107 cube06 cube06.local

Master 1

In our case: control01

This is our primary node, one of the three control nodes.

We are going to install K3s this time, after the K8s disaster 🙂 Use the following command to download and initialize the K3s master node. We pass server and --cluster-init to the install script to tell it that this node will bootstrap the embedded etcd cluster, because we will be adding more master nodes; --disable servicelb turns off the built-in service load balancer.

curl -sfL https://get.k3s.io | K3S_TOKEN="some_random_password" sh -s - server --cluster-init --disable servicelb
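Before joining the other masters, it is worth checking that the first server came up healthy. A quick check (the install script sets K3s up as a systemd service named k3s, and a bundled kubectl is available through the k3s binary):

# The install script sets K3s up as a systemd service
sudo systemctl status k3s

# The single server should already report as Ready
sudo k3s kubectl get nodes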

Master 2 and 3

Again for us: control02 and control03

Using the following command on both nodes, we will add them to the cluster as master nodes, each holding a copy of the etcd database.

curl -sfL https://get.k3s.io | K3S_TOKEN="some_random_password" sh -s - server --server https://192.168.0.101:6443 --disable servicelb

You can do that with Ansible as well:

ansible control02,control03 -b -m shell -a "curl -sfL https://get.k3s.io | K3S_TOKEN='some_random_password' sh -s - server --server https://192.168.0.101:6443 --disable servicelb"

For the --server parameter we are using the IP of our primary master node. This will create the control plane for our cluster.
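If one of the nodes does not join, the K3s service logs are the first place to look; for example, on the node that is having trouble:

# Follow the K3s server logs live
sudo journalctl -u k3s -f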

The control plane should be done:

root@control01:/home/ubuntu# kubectl get nodes
NAME        STATUS   ROLES         AGE    VERSION
control01   Ready    etcd,master   5d3h   v1.19.4+k3s1
control02   Ready    etcd,master   5d3h   v1.19.4+k3s1
control03   Ready    etcd,master   5d3h   v1.19.4+k3s1

Workers

We need to join some workers now, in our case cube01 through cube06.

On every worker node do:

curl -sfL https://get.k3s.io | K3S_URL="https://192.168.0.101:6443" K3S_TOKEN="some_random_password" sh -

You can do that with Ansible as well:

ansible workers -b -m shell -a "curl -sfL https://get.k3s.io | K3S_URL='https://192.168.0.101:6443' K3S_TOKEN='some_random_password' sh -"
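On the workers, K3s runs as an agent under a differently named service, so a quick health check there looks like this:

# On worker nodes the service is k3s-agent, not k3s
sudo systemctl status k3s-agent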

Setting role/labels

We can tag our cluster nodes to give them labels.

Important

K3s by default allows pods to run on control plane nodes, which can be OK in a home lab, but would not be in production. However, in our case I want to use the disks on the control nodes for storage, and Longhorn requires its pods to run on them. So I'll be using labels to tell pods/deployments where to run.

This label is there to get a nice name in the ROLES column when running kubectl get nodes.

kubectl label nodes cube01 kubernetes.io/role=worker
kubectl label nodes cube02 kubernetes.io/role=worker
kubectl label nodes cube03 kubernetes.io/role=worker
kubectl label nodes cube04 kubernetes.io/role=worker
kubectl label nodes cube05 kubernetes.io/role=worker
kubectl label nodes cube06 kubernetes.io/role=worker
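If you prefer, the same thing can be done in one line with bash brace expansion (assuming the node names really follow the cube01-cube06 pattern):

# Label all six workers in one go
for node in cube0{1..6}; do
  kubectl label nodes "$node" kubernetes.io/role=worker
done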

Another label/tag. This one I will use to tell deployments to prefer nodes with node-type=worker (see the sketch after the commands below). node-type is our chosen name for the label key; you can call it whatever you like.

kubectl label nodes cube01 node-type=worker
kubectl label nodes cube02 node-type=worker
kubectl label nodes cube03 node-type=worker
kubectl label nodes cube04 node-type=worker
kubectl label nodes cube05 node-type=worker
kubectl label nodes cube06 node-type=worker
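To actually act on that preference, a deployment can declare a soft node affinity. Here is a minimal sketch; the Deployment name, app label, and nginx image are hypothetical placeholders, and only the affinity block is the point:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx        # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-nginx
  template:
    metadata:
      labels:
        app: example-nginx
    spec:
      affinity:
        nodeAffinity:
          # "preferred" is a soft rule: schedule on node-type=worker when
          # possible, but fall back to other nodes if no worker fits
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: node-type
                operator: In
                values:
                - worker
      containers:
      - name: nginx
        image: nginx

Because preferredDuringSchedulingIgnoredDuringExecution is a soft rule, the scheduler can still place pods on the control nodes when no worker fits, which keeps them usable for workloads like Longhorn as mentioned above.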

Whole Kubernetes cluster:

root@control01:/home/ubuntu# kubectl get nodes
NAME        STATUS   ROLES         AGE    VERSION
control01   Ready    etcd,master   5d3h   v1.19.4+k3s1
control02   Ready    etcd,master   5d3h   v1.19.4+k3s1
control03   Ready    etcd,master   5d3h   v1.19.4+k3s1
cube01      Ready    worker        5d3h   v1.19.4+k3s1
cube02      Ready    worker        5d3h   v1.19.4+k3s1
cube03      Ready    worker        5d3h   v1.19.4+k3s1
cube04      Ready    worker        5d3h   v1.19.4+k3s1
cube05      Ready    worker        5d3h   v1.19.4+k3s1
cube06      Ready    worker        5d3h   v1.19.4+k3s1

You can also use kubectl get nodes --show-labels to show all labels for the nodes.
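Labels also work as filters, so to list only the worker nodes:

kubectl get nodes -l node-type=worker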

Lastly, add the following to /etc/environment (this is so Helm and other programs know where the Kubernetes config is).

On every node:

echo "KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> /etc/environment
Or use Ansible:

ansible cube -b -m lineinfile -a "path='/etc/environment' line='KUBECONFIG=/etc/rancher/k3s/k3s.yaml'"
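A quick sanity check on one of the control nodes (note that /etc/environment is only read at login, so log out and back in first):

echo $KUBECONFIG
# should print /etc/rancher/k3s/k3s.yaml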

Note

There are other options to deploy K3s. For example, Ansible can deploy everything (though it might not end up the same as mine); for inspiration, check out this git repo: https://github.com/k3s-io/k3s-ansible

Another solution is to do GitOps and manage the infrastructure as code using Flux 2 (or an alternative). I might write a separate article on how to set that up, but in the meantime you can have a look at https://github.com/k8s-at-home/awesome-home-kubernetes for some more inspiration 🙂

