Kubernetes Helm Chart

Hi @billimek, for the HomeKit integration the default port is 51827. But since any integration that binds a port usually lets you choose which one to use, it would be nice if there were a value that let you add (expose) any port you like.

Never mind. I had to use hostNetwork for HomeKit, so no additional service or port mapping is necessary anymore.
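
For reference, the relevant value ends up being just this (the chart exposes a hostNetwork flag; with the pod on the host network, HomeKit's mDNS announcements reach the LAN directly, so no extra service or port mapping is needed):

# values.yaml excerpt
hostNetwork: true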

Hello,

The pod is running fine using the chart. I can access it via port-forward on port 8123.

However, I cannot access it via an ingress (nginx) on the cluster. I specified

--set ingress.enabled=true --set ingress.path=/hass/ --set ingress.hosts[0]="srv01.fritz.box"

Accessing srv01.fritz.box/ gives me an error from the default backend (which is expected), but accessing srv01.fritz.box/hass/ yields a 404, which is not expected.

The log from the nginx-ingress pod says:

192.168.178.95 - [192.168.178.95] - - [12/Feb/2020:19:35:19 +0000] "GET /hass/ HTTP/1.1" 404 14 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36" 449 0.008 [smarthome-hass-home-assistant-8123] 10.42.2.7:8123 14 0.008 404 db01e…

Does anyone spot a problem with this?

Lars

Got it working now; I removed the /hass part, as it seems impossible to set a base URL with a path.
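
For reference, the working setup boils down to these values (the host is just my local DNS name; no path override, since it apparently is not possible to serve Home Assistant from a sub-path):

ingress:
  enabled: true
  hosts:
    - srv01.fritz.box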

Lars

For integrations that require opening extra ports, you can now do so with the latest version of the chart via the additionalPorts option. See the PR for more details.
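
Roughly, it looks like this in values.yaml (a simplified sketch; the PR and the chart readme have the exact schema, so treat the nesting and field names as approximate). Using the HomeKit port mentioned earlier in the thread:

additionalPorts:
  - name: homekit
    port: 51827
    targetPort: 51827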

I am also thinking about adding AppDaemon as an optional container, in a similar way to vscode or the configurator. I am aware of @runningman84's separate Helm chart, but I am planning to use HACS to download the apps for me, which then go into a folder inside the Home Assistant config folder. A nice side effect is that I can use the same vscode/git support from the HA chart.

@billimek @runningman84 - would you accept such a PR?

You are welcome; I will look into your AppDaemon PR. I can imagine having a better developer experience once I can use the vscode addon.

I got it working for me, and yes, it is great to just click in HACS to download an app, then open vscode to add the example config, and be ready to go. This makes AppDaemon way easier to use and makes sharing apps simpler. I think I will start creating some apps to do things like automatically generating YAML files from templates and converting some scripts I had in ioBroker.

I will update the chart readme and do a PR in the next few minutes.

Update
PR available. Please note that it uses v3, since the Docker container for v4 is still pending an update for setting the timezone, latitude and longitude, which are mandatory in v4. I will do a PR for that in AppDaemon. Until then you can manually set the tag to v4 and edit the AppDaemon config file according to the v4 instructions.
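
If you want to try v4 anyway, overriding the tag would look roughly like this (a sketch; the exact values keys depend on how the appdaemon block ends up in the chart, so check the readme/PR):

appdaemon:
  enabled: true
  image:
    # any v4 tag of the image; remember to also adjust the AppDaemon
    # config for the v4 format (timezone, latitude, longitude)
    tag: "4.0.1"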

@runningman84 - please take a look and merge if ok.

Hey guys,
Just wondering how you are managing mDNS/Avahi for autodiscovery in your k8s cluster:

1. Running a pod to handle mDNS, or
2. Enabling the reflector in the Avahi config on the k8s nodes?

Currently I have a Docker setup with the reflector enabled in the Avahi config, which lets me use the Docker network instead of hostNetwork with no issues. I am planning to move to k8s, and this seems to be my only concern.

Please share your configs and experience with this.
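
For reference, the reflector part of my current Docker-host setup is just this in /etc/avahi/avahi-daemon.conf:

[reflector]
enable-reflector=yes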

I use the following deployment, based on an mDNS repeater image I created:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mdns-repeater
  labels:
    app: mdns-repeater
spec:
  replicas: 1
  revisionHistoryLimit: 0
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mdns-repeater
  template:
    metadata:
      labels:
        app: mdns-repeater
    spec:
      hostNetwork: true
      containers:
      - name: mdns-repeater
        image: angelnu/mdns_repeater:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: "2Mi"
            cpu: "5m"
          # limits:
          #   memory: "128Mi"
          #   cpu: "500m"
        env:
        # - name: hostNIC
        #   value: eth0
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # NIC of the pod network on the host: the weave bridge in my case;
        # for a different CNI plugin, use that plugin's bridge/tunnel interface
        - name: dockerNIC
          value: weave
        # Verbose mode
        # - name: options
        #   value: -v

I am progressively converting everything to Helm charts, but not this one yet. I use Weave; if you use a different network plugin, you will need to pick a different host NIC.

This setup allows me to autodiscover Google Home devices.

Thanks a ton for the deployment YAML. I'm using Calico and am not sure which dockerNIC interface I should put here.

I tried docker0.

Here is my ifconfig output:

cali3ea492d112b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 2380479  bytes 262326716 (262.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2470945  bytes 1018098195 (1.0 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

cali4608095673d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 130  bytes 11197 (11.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 222  bytes 21324 (21.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

cali8416f7cc2c6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 425977  bytes 37274112 (37.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 445705  bytes 215915561 (215.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

cali911909d83e6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 2697967  bytes 462599665 (462.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2694232  bytes 908378748 (908.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calia54305cf37a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 1277491  bytes 118469537 (118.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1325097  bytes 701999239 (701.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calib6a5eaa18d7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 544629  bytes 55143317 (55.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 644091  bytes 227080316 (227.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:fcff:fe09:fd86  prefixlen 64  scopeid 0x20<link>
        ether 02:42:fc:09:fd:86  txqueuelen 0  (Ethernet)
        RX packets 70759  bytes 15385293 (15.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 505399  bytes 63657278 (63.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.20.20.7  netmask 255.255.255.0  broadcast 10.20.20.255
        inet6 fe80::20c:29ff:fec0:7a98  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:c0:7a:98  txqueuelen 1000  (Ethernet)
        RX packets 45488845  bytes 41018059127 (41.0 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19088881  bytes 5309305812 (5.3 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2333510  bytes 181752672 (181.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2333510  bytes 181752672 (181.7 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.244.103.0  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 1505483  bytes 15257168536 (15.2 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2061800  bytes 332634317 (332.6 MB)
        TX errors 0  dropped 2 overruns 0  carrier 0  collisions 0

veth325e667: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc90:c2ff:fe54:5978  prefixlen 64  scopeid 0x20<link>
        ether fe:90:c2:54:59:78  txqueuelen 0  (Ethernet)
        RX packets 70759  bytes 16375919 (16.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 505965  bytes 63720705 (63.7 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Any suggestions?

My tip would be ens160 or tunl0, most likely the latter. You can tell by going into a regular pod on the same k8s node, checking the eth0 subnet, and making sure it is the same as that of the NIC on the host.

In my case I have:

7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    inet 10.40.0.0/12 brd 10.47.255.255 scope global weave

and inside a pod on the same node I see

eth0      Link encap:Ethernet  HWaddr 12:14:FA:19:3D:71  
          inet addr:10.40.0.2  Bcast:10.47.255.255  Mask:255.240.0.0

I've been migrating my custom deployment to this Helm chart as well, but I was missing some configuration options needed to migrate completely. I've been working on a PR to enhance the chart; hopefully you'll find it useful and it can be merged:

@billimek Have you already given any thought to where you want to host this chart once the stable GitHub repository is deprecated? (see https://github.com/helm/charts/blob/master/README.md#deprecation-timeline)

Hi @Juggels, yes in fact I have an issue for this in my ‘personal’ charts repo.

This is the same charts repo where the home-assistant chart started and will likely be where I propose moving the chart back to.

Ideally, it would be great if the home-assistant Helm chart lived inside the home-assistant Docker repo, with the appropriate automation (preferably GitHub Actions) to lint, test, and publish the chart.
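
Something along these lines is what I have in mind (a hypothetical workflow sketch, e.g. .github/workflows/chart.yaml; the charts directory and action versions are assumptions):

name: helm-chart
on:
  push:
    branches:
      - master
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # helm comes preinstalled on the GitHub-hosted runners
      - name: Lint the chart
        run: helm lint charts/home-assistant
  publish:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Configure git for the release commit
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      # packages the charts under charts/ and publishes them to a GitHub Pages chart index
      - name: Release chart
        uses: helm/chart-releaser-action@v1.0.0
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"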


Thanks for this, fellas, awesome work. I'm getting into Kubernetes and have deployed HA. I was actually trying to run 2 replicas, and the issue I'm running into is the config dir. The replicas are running on different nodes, say ha-1 on worker-1 and ha-2 on worker-2, and they both created the config dir and store the files locally. I really wouldn't mind using an NFS mount that they both had access to. Has anyone attempted this and could point me to some docs that may help, or have any tips?

(As I was typing this I had the idea of just mounting the same export on each host and using that for the config dir… I'm not sure if HA will like that, but I will give it a shot. I actually believe this is where persistent volume claims may come into play, but I have been fiddling with those for a couple of days now and not having any luck.)
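
Roughly what I was picturing for the shared config, as a sketch (server address, path, names, and size are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hass-config
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100        # placeholder - my NFS server
    path: /export/hass-config    # placeholder - the exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hass-config
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # skip dynamic provisioning
  volumeName: hass-config        # bind to the PV above
  resources:
    requests:
      storage: 5Gi

The idea would then be to point the chart's persistence settings at this existing claim (assuming the chart supports that).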

Hi @Darbos, I don't believe Home Assistant is capable of running properly with multiple replicas/instances, in or out of Kubernetes.

You could, in theory, leverage shared storage for a single config volume, but Home Assistant itself would likely behave in a non-optimal manner. Consider automations: two instances of Home Assistant operating off the same shared storage would not know about each other, so automations would likely fire twice. Consider the built-in database the recorder uses to persist 'state': two different 'instances' of Home Assistant would attempt to update or change the same thing within the database at the same time.

There have been a number of posts discussing how to run Home Assistant in a multi-instance or high-availability mode, but from what I recall it would require a non-trivial effort for Home Assistant to properly support running this way.

Instead, with a proper storage backend, running Home Assistant in Kubernetes does give you some level of availability/redundancy. If the node where Home Assistant runs has a problem or is otherwise not available, the Kubernetes scheduler will 'move' the Home Assistant pod to another healthy node.


Thanks for the thorough explanation @billimek. That all makes sense; I didn't think about the automations, so I'll stick to one replica. Thanks again for the info and for sharing all the work you put into this.

Hey, hoping you guys don't mind me bugging you once again. I've been going through the Udemy training and doing some things on my own as I learn. I've had Home Assistant running in Kubernetes fine, but I was using the cheap route: NodePort, since it's all I understood at the time. I've deployed MetalLB now, since load balancers seem to be what should be used in production. I understand I can give the HA pod an external IP/hostname this way. My question is: is this how you all are handling networking, and does this allow discovery?
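
For context, this is roughly what I'm setting now that MetalLB is in place (a sketch; the key names follow the usual service block in a chart's values.yaml and may differ here, and the address is just an example from my MetalLB pool):

service:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.240   # example address from the MetalLB pool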

Everything in HA worked in my previous setup except things like Chromecasts and Sonos. I'm hoping this will solve that.

I'm running Kubernetes in VMs right now as I learn: 3 Ubuntu 18 VMs. I also don't expose anything from my network to the internet; I use a VPN while away from home.

Thanks again for your help!

First: thank you for your work. As a Helm newbie, I much appreciate it!

I would like to migrate my Home Assistant VM to a small k3s cluster. I tried this Helm chart and Home Assistant is running. HomeKit is very important for me, because we mostly control our devices via the Home app or Siri. With the hostNetwork parameter I can integrate HomeKit.
But what if the container switches to a new Kubernetes host? I cannot find information about any special handling of IP addresses.
Do I have to configure something special in this chart (I already use the Traefik ingress) to make this work across my cluster?

Hi,

How is appdaemon.haToken supposed to be used? It seems I need to set it to some bogus value so that Helm then uses a secret to fill the TOKEN environment variable for AppDaemon?

Lars