
K3s Kubernetes

Traefik Ingress

What is it?

From Official website:

Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.

It's basically a somewhat more advanced proxy server for your Kubernetes cluster: it takes domain names and routes requests to the correct containers.

Inside my home cluster, I prefer MetalLB for assigning external IPs to my services, mainly because I don't have to deal with DNS. Traefik, on the other hand, requires DNS to be configured. If you installed k3s the same way I did, you should already have Traefik configured with an assigned IP.

Personal note here: I do not like Traefik much; it seems needlessly complicated, and there are too many ways to achieve the same thing. Too much choice can be a detriment to the user experience. My personal preference is the nginx ingress controller; I've had a better experience with it. But in the end, I will most likely replace Traefik with proper mesh networking in my home k3s cluster.

You can check that with:

root@control01:~# kubectl get svc -n kube-system | grep traefik
traefik          LoadBalancer   80:31225/TCP,443:32132/TCP     36d

There is a very high chance you do not run your own manageable DNS server in your network, although you might. If you do, set it up to route the domain cube.local to the IP of the Traefik service. I'm not going over how to set up a custom DNS server at home, but if you really, really need one, maybe look at Technitium. I have it in a container and use it from time to time.

In this example, I'm going to use the hosts file instead:

  • Windows: C:\Windows\System32\drivers\etc\hosts
  • Linux: /etc/hosts

They work the same on both systems: it's the first place the OS looks for name resolution. The OS usually checks this file first and only then queries a DNS server to resolve a domain name to an IP.

I'm going to add a line at the end of this file: the Traefik service IP, followed by cube.local.

Now, when you type http://cube.local in your browser, it will resolve to our Traefik ingress. We are basically simulating a DNS entry.

Now we can create Ingress resources in Traefik, like cube.local/ui or cube.local/grafana, and route them to their appropriate containers.

Since we do not have a DNS server handling the translation of domains to IPs, we can't use wildcard DNS. If you wanted to use something like ui.cube.local or grafana.cube.local, you'd need to add them to the same hosts line after the Traefik IP, for example: cube.local ui.cube.local grafana.cube.local. With a normal DNS server you'd just throw a * into that A record and be done...
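If you do end up running a small DNS server such as dnsmasq, the wildcard the hosts file can't give you is a single line. A sketch using dnsmasq's address directive (192.168.1.100 is a placeholder for whatever IP your Traefik service was assigned):

```ini
# /etc/dnsmasq.conf — resolve cube.local and every *.cube.local
# to the Traefik LoadBalancer IP (192.168.1.100 is a placeholder)
address=/cube.local/192.168.1.100
```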
An Ingress resource usually targets a Kubernetes ClusterIP service, by service name and port. Importantly, the Ingress resource has to be created in the same namespace as the service we want to expose, or Traefik will complain that it can't find the service.

How to use Traefik

Let's look at what I already have among my K3s services. No need to reinvent the wheel when we can use something we already have running.

root@control01:~# kubectl get svc --all-namespaces
NAMESPACE          NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                         AGE
default            kubernetes                                ClusterIP       <none>          443/TCP                                         37d
kube-system        kube-dns                                  ClusterIP      <none>          53/UDP,53/TCP,9153/TCP                          37d
kube-system        metrics-server                            ClusterIP   <none>          443/TCP                                         37d
kube-system        traefik                                   LoadBalancer   80:31225/TCP,443:32132/TCP                      37d
openfaas           basic-auth-plugin                         ClusterIP      <none>          8080/TCP                                        36d
openfaas           prometheus                                ClusterIP    <none>          9090/TCP                                        36d
openfaas           nats                                      ClusterIP   <none>          4222/TCP                                        36d
openfaas           alertmanager                              ClusterIP    <none>          9093/TCP                                        36d
openfaas           gateway                                   ClusterIP    <none>          8080/TCP                                        36d
openfaas           gateway-external                          NodePort   <none>          8080:31112/TCP                                  36d
openfaas           openfaas-service                          LoadBalancer   8080:31682/TCP                                  36d
openfaas-fn        cows                                      ClusterIP   <none>          8080/TCP                                        36d
openfaas-fn        mailme                                    ClusterIP    <none>          8080/TCP                                        36d
kube-system        kubelet                                   ClusterIP      None            <none>          10250/TCP,10255/TCP,4194/TCP                    35d
openfaas-fn        text-to-speach                            ClusterIP      <none>          8080/TCP                                        28d
logging            loki-stack-headless                       ClusterIP      None            <none>          3100/TCP                                        22d
logging            loki-stack                                ClusterIP    <none>          3100/TCP                                        22d
argocd             argocd-applicationset-controller          ClusterIP     <none>          7000/TCP,8080/TCP                               10d
argocd             argocd-dex-server                         ClusterIP   <none>          5556/TCP,5557/TCP,5558/TCP                      10d
argocd             argocd-metrics                            ClusterIP     <none>          8082/TCP                                        10d
argocd             argocd-notifications-controller-metrics   ClusterIP     <none>          9001/TCP                                        10d
argocd             argocd-redis                              ClusterIP    <none>          6379/TCP                                        10d
argocd             argocd-repo-server                        ClusterIP    <none>          8081/TCP,8084/TCP                               10d
argocd             argocd-server-metrics                     ClusterIP    <none>          8083/TCP                                        10d
argocd             argocd-server                             LoadBalancer   80:30936/TCP,443:32119/TCP                      10d
redis-server       redis-server                              LoadBalancer   6379:31345/TCP                                  36d
docker-registry    registry-service                          LoadBalancer   5000:31156/TCP                                  36d
longhorn-system    longhorn-backend                          ClusterIP    <none>          9500/TCP                                        37d
longhorn-system    longhorn-engine-manager                   ClusterIP      None            <none>          <none>                                          37d
longhorn-system    longhorn-admission-webhook                ClusterIP    <none>          9443/TCP                                        9d
longhorn-system    longhorn-replica-manager                  ClusterIP      None            <none>          <none>                                          37d
longhorn-system    longhorn-conversion-webhook               ClusterIP    <none>          9443/TCP                                        9d
longhorn-system    csi-attacher                              ClusterIP     <none>          12345/TCP                                       9d
longhorn-system    csi-provisioner                           ClusterIP   <none>          12345/TCP                                       9d
longhorn-system    csi-resizer                               ClusterIP    <none>          12345/TCP                                       9d
longhorn-system    csi-snapshotter                           ClusterIP      <none>          12345/TCP                                       9d
longhorn-system    longhorn-frontend                         LoadBalancer   80:32276/TCP                                    37d
monitoring         kube-state-metrics                        ClusterIP      None            <none>          8080/TCP,8081/TCP                               34d
monitoring         node-exporter                             ClusterIP      None            <none>          9100/TCP                                        35d
monitoring         grafana                                   LoadBalancer   3000:31800/TCP                                  34d
monitoring         prometheus-operator                       ClusterIP      None            <none>          8080/TCP                                        8d
portainer          portainer                                 NodePort   <none>          9000:30777/TCP,9443:30779/TCP,30776:30776/TCP   5d18h
portainer          portainer-ext                             LoadBalancer   9000:32244/TCP                                  5d18h
argo-workflows     argo-workflow-argo-workflows-server       LoadBalancer   2746:30288/TCP                                  5d17h
monitoring         prometheus-operated                       ClusterIP      None            <none>          9090/TCP                                        5d4h
vault-system       vault-internal                            ClusterIP      None            <none>          8200/TCP,8201/TCP                               4d16h
vault-system       vault                                     ClusterIP    <none>          8200/TCP,8201/TCP                               4d16h
vault-system       vault-ui                                  ClusterIP    <none>          8200/TCP                                        4d16h
vault-system       vault-agent-injector-svc                  ClusterIP      <none>          443/TCP                                         4d16h
monitoring         prometheus-external                       LoadBalancer   9090:32278/TCP                                  5d4h
monitoring         prometheus                                ClusterIP     <none>          9090/TCP                                        5d4h
vault-system       vault-ui-fix-ip                           LoadBalancer   8200:32672/TCP                                  3d16h
vault-system       vault-fix-ip                              LoadBalancer   8200:30635/TCP,8201:32551/TCP                   3d16h
kube-system        sealed-secrets                            ClusterIP     <none>          8080/TCP                                        3d4h
external-secrets   external-secrets-webhook                  ClusterIP   <none>          443/TCP                                         3d3h

Ugh, that's a lot already set up on my cluster, but if you followed my guide you should have this one:

longhorn-system    longhorn-frontend                         LoadBalancer   80:32276/TCP                                    37d

Well, that's a service all right, but not the type we need. We need an internal ClusterIP service. I'm going to describe the longhorn-frontend service and take some info from it for the new ClusterIP service.

root@control01:~# kubectl describe service longhorn-frontend -n longhorn-system
Name:                     longhorn-frontend
Namespace:                longhorn-system
Labels:                   app=longhorn-ui
Annotations:              <none>
Selector:                 app=longhorn-ui
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
LoadBalancer Ingress:
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32276/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

What can we take from this?

  • Selector: app=longhorn-ui - This tells us that the service selects the pod where the UI runs by the label app=longhorn-ui.
  • Endpoints: - The IP here is not important, but the port is: 8000 is the port exposed by the running pod, where the UI is running.

Based on these two pieces of information, we can construct a new service.


kind: Service
apiVersion: v1
metadata:
  name: longhorn-int-svc
  namespace: longhorn-system
spec:
  type: ClusterIP
  selector:
    app: longhorn-ui
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000

This is the minimal configuration needed for this service. After we apply it, it will create a ClusterIP service targeting the pod with the app=longhorn-ui label and exposing port 8000. I have also named the port http to make it easier to refer back to later.

root@control01:~# kubectl apply -f longhorn-internal-svc.yaml
service/longhorn-int-svc created
root@control01:~# kubectl get svc -n longhorn-system | grep longhorn-int-svc
longhorn-int-svc              ClusterIP      <none>          8000/TCP       80s
root@control01:~# kubectl describe svc longhorn-int-svc -n longhorn-system
Name:              longhorn-int-svc
Namespace:         longhorn-system
Labels:            <none>
Annotations:       <none>
Selector:          app=longhorn-ui
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
Port:              http  8000/TCP
TargetPort:        8000/TCP
Session Affinity:  None
Events:            <none>

Now, I'm sure you are asking: Vladimir, I know you managed to use 2 in binary code, but how do I expose this via Traefik?

We need to create an Ingress object definition to tell Traefik how, what and where to expose. Let's do it now. File: longhorn-ingress-traefik.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ing-traefik
  namespace: longhorn-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: "longhorn.cube.local"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-int-svc
            port:
              number: 8000
root@control01:~# kubectl apply -f longhorn-ingress-traefik.yaml
ingress.networking.k8s.io/longhorn-ing-traefik configured

Make sure your hosts file has longhorn.cube.local on the Traefik IP line alongside cube.local for this to work. When you go to http://longhorn.cube.local/ you should see the UI.

New way to work with Traefik

The above works well enough if you want to expose the service on something.domain.url, but what if you want to expose it on domain.url/longhorn, for example?

I mean, you can try this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ing-traefik
  namespace: longhorn-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: "cube.local"  #<-- See here
    http:
      paths:
      - path: /longhorn #<-- See here
        pathType: Prefix
        backend:
          service:
            name: longhorn-int-svc
            port:
              number: 8000

But that would only work for some simple services. Specifically with the Longhorn UI, you will hit an issue where the page starts to load, but the service expects to be served from the root path. You get missing CSS or scripts, because the Longhorn UI looks for them at domain.url/css... and not domain.url/longhorn/css....

There is a different way to expose a service via Traefik.

Routers, Middlewares, Services

Traefik v2.x added CRDs to your Kubernetes cluster. Custom Resource Definitions (CRDs) are a way to define new resource types in your cluster. In this case, we get a new resource type called IngressRoute.

File: longhorn-IngressRoute-traefik.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: longhorn-ing-traefik
  namespace: longhorn-system
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`cube.local`) && PathPrefix(`/longhorn`)
      kind: Rule
      services:
        - name: longhorn-int-svc
          port: http
      middlewares:
        - name: longhorn-add-trailing-slash
        - name: longhorn-stripprefix

Although it looks similar, there is a lot to unpack.

  • apiVersion - Here we are calling specifically Traefik API extension of our Kubernetes cluster.
  • kind - Here we are defining the type of resource we are going to create. In this case, we are creating IngressRoute. Which is specific for this CRD type.
  • metadata - The same as before, name of the IngressRoute and namespace.
  • entryPoints - New thing in our definition, web means port 80 in our case. This is pre-defined in Traefik. But you can create your own entry points. Also, for example, the other pre-defined entry point is websecure which basically means port 443. As mentioned, you can create a YAML file with your own entry point and specify various parameters for it. Read more here EntryPoints
  • match - What we are matching with this routing rule. In this case, we match Host(`cube.local`) and PathPrefix(`/longhorn`), so http://cube.local/longhorn/ will be routed to the longhorn-int-svc service.
  • services - This is the service we are going to expose. Same as before, except I specified the port by its name rather than its number; you can write it with the port number if you like.
  • middlewares - Another new thing in our definition. Middlewares are modifications: you can apply more than one, and they take effect only after the match has happened, before the request is forwarded on to the service. A modification can, for example, remove www from the URL, add custom headers, or restrict which IPs are allowed to connect. We will create our own middlewares, longhorn-stripprefix and longhorn-add-trailing-slash, to deal with the CSS request issue. More info about middlewares.
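For context, entry points are defined in Traefik's static configuration, not per IngressRoute. A minimal sketch of what that looks like (web and websecure match Traefik's defaults; the metrics entry point and its port 8082 are made up for illustration, and on k3s you would usually feed such settings in through a HelmChartConfig manifest rather than editing traefik.yml by hand):

```yaml
# Traefik static configuration (traefik.yml) — a sketch.
# web and websecure are the pre-defined entry points mentioned above;
# "metrics" is a hypothetical custom entry point on port 8082.
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
  metrics:
    address: ":8082"
```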


File: longhorn-middleware1-traefik.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: longhorn-add-trailing-slash
  namespace: longhorn-system
spec:
  redirectRegex:
    regex: ^.*/longhorn$
    replacement: /longhorn/

We are going to use the redirectRegex type of middleware to redirect all requests for /longhorn to /longhorn/. I'm not great at regex, so I hope this is OK. What we need is this: Longhorn expects the path to end with / to work properly, but if we define the prefix as just /longhorn it won't work, and if we define it as /longhorn/ it works, yet any request to /longhorn gets a 404. This first middleware therefore adds the / at the end. I kind of feel this is fixing an issue with the application, not an actual issue with Traefik...
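Traefik evaluates this pattern with Go's regexp engine, but it behaves the same under Python's re, so we can sanity-check what the middleware will do (a sketch; the URLs are just examples):

```python
import re

# The regex and replacement from the Middleware manifest above.
regex = r"^.*/longhorn$"
replacement = "/longhorn/"

def redirect(url: str) -> str:
    # redirectRegex replaces the matched part of the URL with the
    # replacement; this pattern matches the whole URL, so the browser
    # is redirected to the relative path /longhorn/.
    return re.sub(regex, replacement, url)

print(redirect("http://cube.local/longhorn"))   # -> /longhorn/
print(redirect("http://cube.local/longhorn/"))  # no match, unchanged
```

Because the whole URL is matched, the redirect target is the relative path /longhorn/, which the browser resolves against the current host.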

File: longhorn-middleware2-traefik.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: longhorn-stripprefix
  namespace: longhorn-system
spec:
  stripPrefix:
    prefixes:
      - /longhorn

Here we remove /longhorn from the URL that reaches the service. This way, Longhorn thinks it's receiving a request to domain.url/ and not domain.url/longhorn/, which in many cases could cause issues.

I'm not sure if it's better to keep Middleware definitions in separate files; that can make it harder to track what belongs to what, but on the other hand, you can reuse a middleware definition in multiple IngressRoute definitions.
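If you'd rather keep them together, Kubernetes accepts multiple YAML documents in one file separated by ---. A sketch combining the two middlewares above into a single file (the filename longhorn-middlewares-traefik.yaml is just a suggestion):

```yaml
# longhorn-middlewares-traefik.yaml — both middlewares in one file,
# separated by the YAML document marker "---"
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: longhorn-add-trailing-slash
  namespace: longhorn-system
spec:
  redirectRegex:
    regex: ^.*/longhorn$
    replacement: /longhorn/
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: longhorn-stripprefix
  namespace: longhorn-system
spec:
  stripPrefix:
    prefixes:
      - /longhorn
```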

Apply all:

kubectl apply -f longhorn-IngressRoute-traefik.yaml
kubectl apply -f longhorn-middleware1-traefik.yaml
kubectl apply -f longhorn-middleware2-traefik.yaml

And now I can access the Longhorn UI on http://cube.local/longhorn or http://cube.local/longhorn/; both work fine. I honestly do not know if this is the correct way to do it in this case, but it works.

In any case, I will most likely replace Traefik with a proper service mesh option in the future, as soon as possible. Expect some info about that soon.

Did you like this article? Have a drink, and maybe order one for me as well 🙂.