Install Service Monitors
Longhorn ServiceMonitor
Our storage provisioner, Longhorn, which we deployed near the start of this whole K3s Kubernetes cluster setup, also natively exposes data for Prometheus.
Create a new folder, monitoring, where we will keep most of our configs, and create the file longhorn-servicemonitor.yaml in it.
Edit longhorn-servicemonitor.yaml:
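It should look something like this; this is a sketch based on the example in the Longhorn documentation, and the metadata name and label are my own picks:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: longhorn-prometheus-servicemonitor
  namespace: monitoring
  labels:
    name: longhorn-prometheus-servicemonitor
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  namespaceSelector:
    matchNames:
    - longhorn-system
  endpoints:
  - port: manager
```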
As you can see, we are not talking to the Kubernetes API directly (well, we are, but only as a middleman); with apiVersion: monitoring.coreos.com/v1 we are telling the Prometheus Operator to create something for us. In this case, that something is kind: ServiceMonitor.
Makes sense, right? Next comes metadata, and here I'm not 100% sure about metadata: -> name: versus the name: under labels:. I do know that we refer to at least one of them later, when we tell Prometheus which Service Monitors to collect data from.
This should be clear: metadata: -> namespace: monitoring tells it to deploy into our monitoring namespace.
The rest, under spec:, basically tells the Service Monitor which app to "bind to". It looks for app: longhorn-manager in the namespace longhorn-system, on the port manager. The port could be given as a number, but it can also be referenced by name; in this case it's named manager.
This is the longhorn-manager we are targeting.
And if you try to describe it (assuming the default Longhorn setup, where the manager runs as a DaemonSet) with:
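```bash
# the DaemonSet name is the Longhorn default; adjust if yours differs
kubectl -n longhorn-system describe daemonset longhorn-manager
```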
You will see that the port it uses is 9500/TCP, but where is it set that manager == 9500? As far as I can tell, it comes from the longhorn-backend Service (which carries the app: longhorn-manager label): it defines a port named manager pointing at 9500, and a ServiceMonitor matches Service ports by name. If you know better, please comment below.
Node-exporter
This is the DaemonSet we will deploy to collect metrics from the individual cluster nodes, the underlying hardware, and so on.
In the monitoring folder, create a new folder called node-exporter and create the following files in it (a sketch of the service monitor follows the list):
cluster-role-binding.yaml
cluster-role.yaml
service-account.yaml
service.yaml
daemonset.yaml
service-monitor.yaml
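To give you an idea of the shape, here is a sketch of what service-monitor.yaml could look like; the app label and the port name are assumptions, so make sure they match what you put in service.yaml:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  endpoints:
  - port: metrics
    interval: 30s
```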
You can deploy all the YAML files at once by going one folder up and running apply -f on the folder:
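```bash
# run from inside the monitoring folder
kubectl apply -f node-exporter/
```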
This will create all the permissions and deploy the Node Exporter pods, which read metrics from the underlying Linux system.
After doing so, you should see node-exporter-xxxx pods in the monitoring namespace; I have 8 nodes, so it's there 8 times.
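A quick way to check:

```bash
kubectl get pods -n monitoring | grep node-exporter
```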
Kube State Metrics
This is a simple service that listens to the Kubernetes API and generates metrics about the state of the objects.
Link to official GitHub: kube-state-metrics.
Again, as before, create a new folder in our monitoring folder called kube-state-metrics, and create the following files in it:
kube-state-metrics-clusterRole.yaml
kube-state-metrics-clusterRoleBinding.yaml
kube-state-metrics-serviceAccount.yaml
kube-state-metrics-service.yaml
kube-state-metrics-deployment.yaml
kube-state-metrics-serviceMonitor.yaml
And again, jump one folder up and apply everything:
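```bash
# run from inside the monitoring folder
kubectl apply -f kube-state-metrics/
```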
Check the pods in the monitoring namespace to see if you have kube-state-metrics-xxx up and running:
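```bash
kubectl get pods -n monitoring | grep kube-state-metrics
```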
We have two more Service Monitors to go. 🙂
Kubelet
Kubelet, in case you did not know, is the essential Kubernetes agent that runs on every node, and it also exposes Prometheus metrics by default, on port 10250 (the old read-only port 10255 is deprecated). So, it makes sense to create a Service Monitor for it as well.
'But', you surely ask me, 'I just deployed kube-state-metrics, why the fuck do I need another Kubelet thingy monitor?' Well, kube-state-metrics collects a lot of data, and some of it overlaps with the metrics Kubelet provides, but not all of it; some information can only be collected from Kubelet.
We only need to create one file, kubelet-servicemonitor.yaml:
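Here is a sketch modeled on the kube-prometheus kubelet ServiceMonitor. It scrapes the secure port and assumes a kubelet Service labeled k8s-app: kubelet exists in kube-system (the Prometheus Operator can maintain one for you via its --kubelet-service flag):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubelet
  namespace: monitoring
  labels:
    k8s-app: kubelet
spec:
  jobLabel: k8s-app
  selector:
    matchLabels:
      k8s-app: kubelet
  namespaceSelector:
    matchNames:
    - kube-system
  endpoints:
  # kubelet's own metrics
  - port: https-metrics
    scheme: https
    interval: 30s
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      insecureSkipVerify: true
  # container metrics from cAdvisor, which is embedded in kubelet
  - port: https-metrics
    scheme: https
    path: /metrics/cadvisor
    interval: 30s
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      insecureSkipVerify: true
```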
Apply, and it's done:
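```bash
kubectl apply -f kubelet-servicemonitor.yaml
```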
Traefik
I do not use Traefik much in my setup, but it is there, and it also exposes Prometheus-ready data, so why not...
Create the file traefik-servicemonitor.yaml:
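Another sketch; the label selector and the port name follow the defaults of the Traefik Helm chart that K3s ships, so verify them against your own Traefik service (you may also need to expose the metrics port on it first):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: traefik
  namespace: monitoring
  labels:
    name: traefik
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
  namespaceSelector:
    matchNames:
    - kube-system
  endpoints:
  - port: metrics
```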
And apply:
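```bash
kubectl apply -f traefik-servicemonitor.yaml
```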
OpenFaaS
Don't worry about this one: if you deployed OpenFaaS as described in my guide/notes, you already have a Prometheus instance set up and collecting data.
We will point Grafana to suck data from this instance later as well.
Done for now
I know, so much text! I could probably print the whole K3s Kubernetes cluster setup as a book, drop it on somebody, and it would flatten them 🙂.
We should now have the following Service Monitors up and ready to be scraped by Prometheus.
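You can double-check with:

```bash
kubectl get servicemonitors -n monitoring
```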
Phew! We have the hardest and longest part behind us. You deserve a drink, and if you found this useful, help me get one too. I would appreciate that a lot.
Move on to Prometheus