Just wanted to share how I integrated a Conbee2 device (/dev/ttyACM0) into a Kubernetes deployment.
The initial setup was a docker-compose.yaml like this:
version: "2.1"
services:
  homeassistant:
    image: lscr.io/linuxserver/homeassistant:latest
    container_name: homeassistant
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Bucharest
    volumes:
      - /opt/homeassistant/data:/config
    ports:
      - 8123:8123 #optional
    devices:
      - /dev/ttyACM0:/dev/ttyACM0 #Conbee2
    restart: unless-stopped
I wanted to migrate it to Kubernetes, onto a node that has the Conbee2 USB stick attached. I used the kompose CLI to convert the docker-compose.yaml into Kubernetes manifests.
Download kompose:
sudo curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.1/kompose-linux-amd64 -o /opt/homeassistant/kompose
sudo chmod 755 /opt/homeassistant/kompose
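Optionally check that the binary runs:
/opt/homeassistant/kompose version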
Convert docker-compose.yaml into kubernetes deployment:
cd /opt/homeassistant
sudo ./kompose convert
The conversion produced:
$ ls -latr homeassistant-*
-rw-r--r--. 1 root root 376 Sep 25 20:34 homeassistant-service.yaml
-rw-r--r--. 1 root root 1206 Sep 25 20:34 homeassistant-deployment.yaml
-rw-r--r--. 1 root root 263 Sep 25 20:34 homeassistant-claim0-persistentvolumeclaim.yaml
I changed the Deployment (homeassistant-deployment.yaml) into a StatefulSet (homeassistant-statefulset.yaml), so data will persist across restarts, and also updated the file to permit CAP_SYS_RAWIO access to /dev/ttyACM0.
File homeassistant-statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: homeassistant
  serviceName: homeassistant
  template:
    metadata:
      labels:
        app: homeassistant
    spec:
      containers:
        - name: homeassistant
          image: lscr.io/linuxserver/homeassistant:latest
          env:
            - name: PGID
              value: "1000"
            - name: PUID
              value: "1000"
            - name: TZ
              value: Europe/Bucharest
          securityContext:
            privileged: true
            capabilities:
              add: ["CAP_SYS_RAWIO"]
          ports:
            - containerPort: 8123
          resources: {}
          volumeMounts:
            - mountPath: /config
              name: homeassistant-pvc
            - mountPath: /dev/ttyACM0
              name: ttyacm
      restartPolicy: Always
      volumes:
        - name: homeassistant-pvc
          persistentVolumeClaim:
            claimName: homeassistant-pvc
        - name: ttyacm
          hostPath:
            path: /dev/ttyACM0
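Note that the hostPath only exists on the node where the Conbee2 is plugged in, so the pod has to land on that node. A simple way (just a sketch; the conbee=true label is one I made up, not something k3s sets for you) is to label that node and add a nodeSelector to the pod template spec, at the same level as containers:
kubectl label node <node-with-conbee2> conbee=true
      nodeSelector:
        conbee: "true"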
File homeassistant-claim0-persistentvolumeclaim.yaml (updated metadata label and size):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: homeassistant-pvc
  name: homeassistant-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
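The claim relies on the cluster's default StorageClass. On k3s that is normally local-path; if your cluster uses a different provisioner, set it explicitly under spec (the class name below is just the k3s default, adjust it to your setup):
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi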
File homeassistant-service.yaml (updated metadata.labels):
apiVersion: v1
kind: Service
metadata:
  name: homeassistant
  labels:
    app: homeassistant
spec:
  ports:
    - name: "8123"
      port: 8123
      targetPort: 8123
  selector:
    app: homeassistant
status:
  loadBalancer: {}
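This Service is a plain ClusterIP, so the UI is only reachable from inside the cluster. Since I run MetalLB (see below), one option (just a sketch, assuming a load-balancer controller such as MetalLB is installed) is to expose it as a LoadBalancer instead:
spec:
  type: LoadBalancer
  ports:
    - name: "8123"
      port: 8123
      targetPort: 8123
  selector:
    app: homeassistant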
Deploy homeassistant:
kubectl --namespace homeassistant apply -f homeassistant-claim0-persistentvolumeclaim.yaml
kubectl --namespace homeassistant apply -f homeassistant-service.yaml
kubectl --namespace homeassistant apply -f homeassistant-statefulset.yaml
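These commands assume the homeassistant namespace already exists. If it does not, create it before applying, then watch the pod come up:
kubectl create namespace homeassistant
kubectl --namespace homeassistant get pods -w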
Continue with the normal ZHA integration; Home Assistant will normally auto-discover /dev/ttyACM0.
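To confirm the device is actually visible inside the pod before setting up ZHA, you can exec into it (homeassistant-0 is the pod name the StatefulSet above produces):
kubectl --namespace homeassistant exec -it homeassistant-0 -- ls -l /dev/ttyACM0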
BTW, I’ve installed k3s and kube-vip using k3sup, following this: Deploy HA k3s with kube-vip and MetalLB using k3sup · GitHub
Happy Kubernetes && HA.
PS: Next, I will look at how to write the Zigbee JSON backup config (GitHub - zigpy/open-coordinator-backup: Open Zigbee coordinator backup format) to multiple dongles with the same Zigbee configuration, to get truly High Availability between nodes (each with a Conbee2 or Sonoff attached). I'm thinking of running init containers that write the config to the USB stick before starting the container. Of course, only one homeassistant instance can run in the cluster, but the pod will not be forced to run on a specific node.
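As a very rough sketch of that idea (the image name and the restore command below are placeholders, not a tested setup), an initContainers stanza in the pod template spec could write the open-coordinator-backup JSON onto whichever dongle is attached to the node before homeassistant starts:
      initContainers:
        - name: restore-coordinator
          image: zigpy-tools:latest   # hypothetical image with zigpy-cli installed
          securityContext:
            privileged: true
          command: ["sh", "-c"]
          args:
            # placeholder restore step: write the coordinator backup from the
            # config volume onto the dongle attached to this node
            - zigpy radio deconz /dev/ttyACM0 restore /config/coordinator-backup.json
          volumeMounts:
            - mountPath: /config
              name: homeassistant-pvc
            - mountPath: /dev/ttyACM0
              name: ttyacm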