Migrating unifi-controller from Docker to k3s Kubernetes

Wi-Fi Controller

When I first got a Ubiquiti access point I ran the controller software on my MacBook. This was fine, but I didn’t want to keep my laptop on all the time, and configuring the wireless network while also connected to that same wireless network was a pain.

I moved the controller to my desktop computer, which is always on and has a wired connection to my network. This was better, but I had to remember the long, complicated docker create ... command for setting up and upgrading the controller container.

docker create --name=unifi \
    -v $HOME/unifi:/config \
    -e PGID=$(id -g) \
    -e PUID=$(id -u) \
    -p 127.0.0.1:8443:8443 \
    -p 192.168.0.100:3478:3478 \
    -p 192.168.0.100:8080:8080 \
    linuxserver/unifi-controller:latest

I kept the controller’s persistent storage at $HOME/unifi and mounted it into the container at /config.

I’ve got a k3s Kubernetes cluster running on the same desktop computer, and decided that it would be easier to manage the controller and its lifecycle as a Kubernetes workload. This has a few advantages:

  • I can store the Kubernetes configuration in source control
  • I can easily apply more complex security policies
  • Upgrades are straightforward: change the image tag (see the sketch after this list)
  • Kubernetes API configuration can be ported to new clusters
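
For example, with the Deployment defined later in this post, an upgrade is a single command. The target tag below is illustrative; pick whichever release you are moving to:

# Point the Deployment's container at a new image tag
kubectl set image deployment/unifi-controller \
    controller=linuxserver/unifi-controller:7.4.162

# If the new version misbehaves, roll back to the previous ReplicaSet
kubectl rollout undo deployment/unifi-controller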

Migrating

I want to keep the existing configuration and state for the controller, and since I only have a single k3s node I can use a hostPath volume in the Pod.
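
The move itself is a one-time copy. A rough sketch, assuming the data should end up at the /var/lib/unifi path that the Deployment below mounts:

# Stop the old container so the controller's database is quiescent
docker stop unifi

# Copy the persistent state into the hostPath the Pod will mount
sudo rsync -a $HOME/unifi/ /var/lib/unifi/

# The container runs as PUID/PGID 1000, so ownership must match
sudo chown -R 1000:1000 /var/lib/unifi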

I would like to have valid TLS certificates for the controller instead of the self-signed ones that come with the container image. I’ve already got cert-manager running in my k3s cluster, doing DNS-01 validation for a domain I own.
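
For context, the ClusterIssuer looks roughly like the sketch below. The Cloudflare solver and the contact email are assumptions on my part; any DNS-01 solver that cert-manager supports works the same way:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: k3s-cluster-letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: me@example.com  # assumption: your ACME contact address
    privateKeySecretRef:
      name: k3s-cluster-letsencrypt-production-key
    solvers:
    - dns01:
        cloudflare:  # assumption: use whichever solver matches your DNS host
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token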

We can create a Deployment resource for the controller. Importantly, we need to set the ports that are used for the admin UI and for communication with the access points on our network.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: unifi-controller
  name: unifi-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unifi-controller
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: unifi-controller
    spec:
      containers:
      - image: linuxserver/unifi-controller:7.3.83
        name: controller
        resources: {}
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        volumeMounts:
        - name: config
          mountPath: /config
        ports:
        - containerPort: 8443
          name: https
        - containerPort: 3478
          name: stun
          protocol: UDP
        - containerPort: 8080
          name: device-control
        - containerPort: 10001
          name: discovery
          protocol: UDP
      volumes:
      - name: config
        hostPath:
          path: /var/lib/unifi

The linuxserver.io docs for the controller image have a reference table for the ports that we can expose.
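
With the manifest saved to a file, applying it and watching the rollout looks roughly like this (the file name is my choice; add -n unifi if, like the later manifests, everything lives in a unifi namespace):

kubectl apply -f unifi-controller.yaml
kubectl rollout status deployment/unifi-controller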

K3s has a built-in load balancer controller, klipper-lb, which is essentially a bash script that creates port-forwarding and NAT rules in iptables on the host. We can expose the discovery, stun, and device-control ports on the host IP address, which allows the controller to discover and manage the access points on our network.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: unifi-controller
  name: unifi-controller-devices
spec:
  ports:
  - name: device-control
    port: 8080
    targetPort: device-control
    protocol: TCP
  - name: stun
    port: 3478
    targetPort: stun
    protocol: UDP
  - name: discovery
    port: 10001
    targetPort: discovery
    protocol: UDP
  selector:
    app: unifi-controller
  type: LoadBalancer
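
Once the Service is applied, klipper-lb should bind those ports on the node, and the Service’s external IP should show up as the node’s own address:

kubectl get service unifi-controller-devices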

Ingress and UI

Now we need to expose the admin UI and use valid TLS certificates on our ingress controller. K3s comes bundled with Traefik, and I’ve chosen to use the IngressRoute custom resource instead of the Kubernetes-native Ingress. I prefer the simplicity of IngressRoute, and since I’m all in on k3s I don’t need to worry about portability.

In the IngressRoute spec we reference a custom ServersTransport resource that disables strict TLS verification. This is needed because the controller uses a self-signed certificate for the admin UI and it’s non-trivial to replace it.

Where possible I’d always prefer to have TLS terminated at the application and not just at the ingress controller. Even though the TLS certificate in the controller is self-signed, it’s marginally better than nothing.

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: unifi-k3s-cluster
  namespace: unifi
spec:
  secretName: unifi-k3s-cluster-tls
  issuerRef:
    name: k3s-cluster-letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
  - unifi.k3s-cluster.xyz
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: unifi
  name: unifi-controller
spec:
  routes:
  - match: Host(`unifi.k3s-cluster.xyz`)
    kind: Rule
    services:
    - name: unifi-controller-admin
      port: 8443
      scheme: https
      serversTransport: insecure-https
  tls:
    secretName: unifi-k3s-cluster-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: insecure-https
  namespace: unifi
spec:
  insecureSkipVerify: true
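
One thing to note: the IngressRoute above points at a unifi-controller-admin Service that doesn’t appear earlier in this post. A minimal ClusterIP Service to cover it would look something like this sketch, placed in the same unifi namespace since Traefik resolves the service relative to the IngressRoute’s namespace:

---
apiVersion: v1
kind: Service
metadata:
  name: unifi-controller-admin
  namespace: unifi
  labels:
    app: unifi-controller
spec:
  ports:
  - name: https
    port: 8443
    targetPort: https
    protocol: TCP
  selector:
    app: unifi-controller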

Now I’ve got the controller running in Kubernetes! I can easily upgrade, roll back, and manage the controller as a workload in my cluster. This beats the experience of running it as a container in Docker, even with the overhead of k3s.

I’ve got a wildcard DNS record that resolves to the Tailscale IP address of my desktop computer, so that I can reach the “load balancer” from any device on my Tailnet. This gives me a secure, private ingress I can reach from anywhere.

$ dig unifi.k3s-cluster.xyz +short
100.75.145.187

Access Point Configuration

I picked up another access point second-hand on eBay and wanted to add it to my network. After factory resetting it and plugging it in, I was able to SSH onto the AP and have it discover the controller.
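
Getting a shell on the AP looks roughly like this; the address is a hypothetical example from my subnet, and after a factory reset the default credentials are ubnt/ubnt:

# SSH to the access point (address is illustrative)
ssh ubnt@192.168.0.50

Then, from the AP’s shell, point it at the controller’s inform endpoint on the device-control port we exposed earlier: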

set-inform http://192.168.0.99:8080/inform