Penryn (Mike), October 11, 2020, 2:50pm (#1)
Hello, maybe someone can help me. I'm trying to mount my configuration.yaml into my Home Assistant instance, which runs as a pod in Kubernetes.
I deploy HA only to a specific node in my cluster, so there is a folder on that node where configuration.yaml and the other files are stored,
but when I connect to the pod I can't see that these files are mounted inside it.
here is my manifest
```yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-ha
    labels:
      type: local
  spec:
    storageClassName: manual
    capacity:
      storage: 100Mi
    accessModes:
    - ReadWriteOnce
    hostPath:
      path: "/mnt/kubernetes/homeassistant"
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: homeassistant-storage
    namespace: homeassistant
  spec:
    storageClassName: manual
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: homeassistant-deployment
    labels:
      apps: home-assistant
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: home-assistant
    template:
      metadata:
        labels:
          app: home-assistant
      spec:
        volumes:
        - name: ha-storage
          persistentVolumeClaim:
            claimName: homeassistant-storage
        containers:
        - name: home-assistant
          image: homeassistant/raspberrypi4-homeassistant:0.116.2
          volumeMounts:
          - mountPath: "/config"
            name: ha-storage
- apiVersion: v1
  kind: Service
  metadata:
    name: homeassistant-service
  spec:
    selector:
      app: home-assistant
    ports:
    - nodePort: 31123
      port: 8123
      targetPort: 8123
      protocol: TCP
    type: NodePort
```
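One thing that may be worth double-checking (an assumption on my part, since it isn't in the manifest above): a hostPath PV only shows the expected files if the pod is actually scheduled on the node that has that folder. A minimal nodeSelector sketch for the Deployment's pod template, where `my-ha-node` is a placeholder for the real node name (see `kubectl get nodes --show-labels`):

```yaml
# Hypothetical fragment of the Deployment's pod template spec:
# pin the pod to the node that actually has /mnt/kubernetes/homeassistant.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: my-ha-node   # placeholder node name
```

On any other node, the hostPath directory would simply be empty (or get created empty), which looks exactly like "my files aren't mounted".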
It's kind of hard to read because the formatting got mangled in your post, but here is my working config, which you can use as a basis. You'll need to put in your own IPs, etc.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    io.kompose.service: homeassistant-config
  name: homeassistant-config
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  storageClassName: iscsi
  persistentVolumeReclaimPolicy: Retain
  iscsi:
    targetPortal: 192.168.x.x
    iqn: iqn.1991-05.com.microsoft:gamma-gamma-1-target
    lun: 3
    fsType: "ext4"
  claimRef:
    namespace: default
    name: homeassistant-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: homeassistant-config
  name: homeassistant-config
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: homeassistant
  name: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: homeassistant
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: homeassistant
    spec:
      containers:
      - env:
        - name: PYTHONWARNINGS
          value: ignore:Unverified HTTPS request
        image: homeassistant/home-assistant
        name: homeassistant
        ports:
        - containerPort: 1900
          protocol: UDP
        - containerPort: 5353
          protocol: UDP
        - containerPort: 8123
        volumeMounts:
        - mountPath: /config
          name: homeassistant-config
        resources:
          requests:
            memory: 500Mi
            cpu: 250m
          limits:
            memory: 1000Mi
            cpu: 500m
      volumes:
      - name: homeassistant-config
        persistentVolumeClaim:
          claimName: homeassistant-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: homeassistant
  name: homeassistant-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: "true"
spec:
  ports:
  - name: "http"
    port: 80
    targetPort: 8123
  selector:
    io.kompose.service: homeassistant
  type: LoadBalancer
  loadBalancerIP: 192.168.x.x
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: homeassistant
  name: homeassistant-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "true"
spec:
  ports:
  - name: "upnp"
    port: 1900
    protocol: UDP
    targetPort: 1900
  - name: "mdns"
    port: 5353
    protocol: UDP
    targetPort: 5353
  selector:
    io.kompose.service: homeassistant
  type: LoadBalancer
  loadBalancerIP: 192.168.x.x
```
I'm planning on moving my actual config to a ConfigMap soon, but due to the (much-discussed!) decision not to keep all configuration in the config file, Home Assistant will always be hard to run properly on Kubernetes (IMHO), and I'm not just using HA as a gateway between MQTT and GA (as I can't find a simpler way atm!).
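For reference, the ConfigMap approach mentioned above could look roughly like the sketch below. This is an assumption on my part, not the poster's actual setup: the name `homeassistant-yaml` and the file contents are placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: homeassistant-yaml   # hypothetical name
data:
  configuration.yaml: |
    # minimal placeholder config
    default_config:
    http:
      server_port: 8123
---
# In the Deployment's pod template, mount the ConfigMap instead of the PVC:
#       volumes:
#       - name: homeassistant-config
#         configMap:
#           name: homeassistant-yaml
```

One caveat with this pattern: ConfigMap volumes are mounted read-only, and Home Assistant also writes to /config (e.g. its .storage directory and databases), which is part of why a pure-ConfigMap setup is awkward on Kubernetes.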
Penryn (Mike), October 11, 2020, 5:42pm (#3)
Thank you very much, and sorry for the mess.
I have also fixed it, after figuring out what kind of path I had chosen and where to find it on the node.