Network settings

K3s comes with pretty much everything pre-configured out of the box, including Traefik.

More info about Traefik is available in its official documentation, or in my small Traefik guide.

However, I would like to have a LoadBalancer: in essence, the ability to give services (pods) an external IP from the same range as my Kubernetes nodes, not from the internal Kubernetes ranges. Normally this is an external component that your cloud provider somehow magically gives you, but since we are our own cloud provider, and we are trying to keep everything in one cluster... in short, MetalLB is the answer.

What is MetalLB

MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. It hands out IP addresses from pools you define, using standard protocols (layer 2/ARP, or BGP).
This is a two-step process: first we deploy the MetalLB load balancer itself, and then we push a configuration to it, telling it what range of IPs to use.

Apply the following: the first command creates a namespace called metallb-system, and the second deploys MetalLB into it.


Check the MetalLB installation page for the most up-to-date manifest links.

kubectl apply -f
kubectl apply -f


I prefer to store configuration files in folders named after the components deployed to the cluster, so that I can easily delete a service later without hunting for links on the Internet. So, you can just create a folder MetalLB, and download the YAML files into it for later use.
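The two apply steps above can be sketched as follows. The version v0.12.1 is an assumption on my part (roughly current when this guide was written), and the manifest URLs follow MetalLB's GitHub layout for that era; check the MetalLB installation docs for the current version and exact links before running this:

```shell
# Assumed version -- verify against the MetalLB release page first.
METALLB_VER="v0.12.1"
BASE="https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/manifests"

# Keep the manifests in a local folder so the service can later be
# removed with `kubectl delete -f` without hunting for links again.
mkdir -p ~/metallb
cd ~/metallb
wget -q "${BASE}/namespace.yaml" "${BASE}/metallb.yaml"

kubectl apply -f namespace.yaml   # creates the metallb-system namespace
kubectl apply -f metallb.yaml     # deploys the controller and speakers
```

Keeping the downloaded files around also means the exact version you deployed is recorded on disk, which makes a later upgrade or removal reproducible.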

We need to create a secret key for the speakers (the MetalLB pods) to encrypt speaker communications:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

This key is just for internal use, and you don't need to worry about it.
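The key is nothing special: just 128 random bytes, base64-encoded. If you are curious, this sketch generates the same kind of key material locally and checks its raw length (using openssl, base64, and wc, all standard on a Linux node):

```shell
# The memberlist secret is 128 random bytes, base64-encoded.
key="$(openssl rand -base64 128)"

# Decode it again and count the raw bytes.
raw_len="$(printf '%s' "$key" | base64 -d | wc -c)"
echo "$raw_len"   # 128 -- the number of random bytes before encoding
```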


Next, create config.yaml in your MetalLB folder; here we are going to tell MetalLB what IPs to use:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.200-192.168.0.249   # example range, adjust to your network

As you can see, in the addresses section I specified a range spanning 50 addresses. That gives me 50 "external" IPs to work with for now.
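If you want to double-check how many addresses a pool covers, for a range inside a single /24 it is simply the last host octet minus the first, plus one. A quick sketch with an illustrative range (192.168.0.200-192.168.0.249 here is an assumption, substitute your own octets):

```shell
# Pool size for a range within one /24: last octet - first octet + 1.
first=200   # e.g. 192.168.0.200 -- illustrative
last=249    # e.g. 192.168.0.249
pool_size=$(( last - first + 1 ))
echo "$pool_size"   # 50 addresses available to MetalLB
```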

Apply the config:

kubectl apply -f config.yaml

You can always go back and change the IP range if you want, and apply the config again.


Check if everything deployed OK

root@control01:~/metallb# kubectl get pods -n metallb-system
NAME                         READY   STATUS    RESTARTS   AGE
controller-57fd9c5bb-rdl7v   1/1     Running   0          5m42s
speaker-h7chj                1/1     Running   0          5m42s
speaker-pg7kp                1/1     Running   0          5m42s
speaker-78pdz                1/1     Running   0          5m42s
speaker-ghpxz                1/1     Running   0          5m42s
speaker-8cf7k                1/1     Running   0          5m42s
speaker-2t6jp                1/1     Running   0          5m42s
speaker-cjcpn                1/1     Running   0          5m41s
speaker-mv7v4                1/1     Running   0          5m42s

You should have as many speaker-xxxx as you have nodes in the cluster, since they run one per node.

Now services that use LoadBalancer should have an external IP assigned to them.

For example:

root@control01:~/metallb# kubectl get svc --all-namespaces
NAMESPACE     NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
default       kubernetes       ClusterIP      10.43.0.1       <none>          443/TCP                      12m
kube-system   kube-dns         ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP       12m
kube-system   metrics-server   ClusterIP      10.43.96.138    <none>          443/TCP                      12m
kube-system   traefik          LoadBalancer   10.43.183.90    192.168.0.200   80:31225/TCP,443:32132/TCP   10m

Look how Traefik automatically got an IP from the external range. In the end, this is what you want: not pointing DNS at a single node's IP, which would stop working the moment that node died. This way, the external IP is node-independent; you can point DNS at it and be sure traffic will be routed correctly.
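To get your own pod the same treatment, it is enough to set type: LoadBalancer on its Service; MetalLB then assigns a free address from the pool. A minimal sketch (the app name and ports are illustrative, not from this guide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # illustrative name
spec:
  type: LoadBalancer    # MetalLB picks an external IP from its pool
  selector:
    app: my-app         # must match your pod labels
  ports:
    - port: 80          # port exposed on the external IP
      targetPort: 8080  # port the container listens on
```

After applying it, `kubectl get svc` should show the assigned address in the EXTERNAL-IP column, just like Traefik above.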


This is how I prefer my network setup, and it makes the most sense to me when creating external services. I'm sure there are a hundred different methods in production, using external load balancers, an Nginx ingress (basically a reverse proxy), and who knows what else; but hey, there is no official "one standardized setup" for Kubernetes (which can be such a pain sometimes), so who's to say this is not OK? 🙂

Last update: July 4, 2022