I have been running Home Assistant Core in a venv in an LXD container for a number of years and it has worked flawlessly. Being in a container has allowed me to easily take snapshots, back it up, move it to other machines, and run my other hass- and non-hass-related programs in other containers. More recently I have been looking into getting HassOS running in an LXD virtual machine (versus a container) so I could take advantage of the Supervisor and other nice features that you do not get when running Core. In my mind this is a great way to get the advantages of HassOS and the Supervisor running in a controlled environment (I totally get the potential support issues caused by running the Supervisor in uncontrolled environments), while still allowing me to run other things given that I have relatively powerful hardware.
The good news is that I have gotten it working after creating an LXD image using the HassOS qcow2 virtual appliance. It has been running in a test environment for a week or two without any issues. The bad news is that it took me quite a bit of digging to figure out how to get to this point (but perhaps that is just because I am not the sharpest tool in the shed), and it would be a better end-user experience if a few tweaks were made to the HassOS image for this purpose. For example, pre-installation of the lxd-agent would make the VM easier to work with, and the boot-up console messages and login prompt are not visible on a regular console using LXC (it took me a while to figure out that I needed to use Remote Viewer, and this may just be a question of changing the kernel video mode). As such, I am now in the process of attempting to build my own qcow2 using the build tools / scripts provided by the HassOS team. However, while the tools are good (and the HassOS team is great), the same cannot be said about me, so I may fail miserably in my attempt.
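For anyone who wants to try the same route, the rough shape of what I did looks like the following. This is only a sketch: the metadata values, file names, and aliases are assumptions, and LXD expects a split VM image (a metadata tarball plus the disk image), so you have to create the metadata by hand.

```shell
# Build a minimal metadata tarball for the image (values are assumptions):
cat > metadata.yaml <<'EOF'
architecture: x86_64
creation_date: 1600000000
properties:
  description: Home Assistant OS
EOF
tar -czf metadata.tar.gz metadata.yaml

# Import the split image (metadata tarball + qcow2 disk) and create the VM.
# The qcow2 file name and "haos" alias are placeholders; secureboot is
# disabled because the appliance's kernel may not be signed for UEFI.
lxc image import metadata.tar.gz hassos.qcow2 --alias haos
lxc init haos home-assistant --vm -c security.secureboot=false
lxc start home-assistant
```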
All that being said, I am curious whether anyone else would be interested in running HassOS as an LXD/LXC virtual machine, or if anyone has experience doing so and has any tips or suggestions. If there is any interest, it would ultimately be amazing if an "official" LXD image was created that could be used for this purpose, so everyone else does not have to go through the painful process that I have gone through to figure out how to get it working. That would mean that getting HassOS up and running would be a simple "lxc init" command away. My view is that this would help those who want the advantages of HassOS and the Supervisor but also want to run other things on their hardware, but perhaps I am biased because I already run everything else in LXD-managed containers. Does anyone else have any thoughts?
I totally get that running HA Supervised in an LXC container is not recommended, and I appreciate your response and advice. However, I am not talking about running it in a container. LXD supports running virtual machines as well as containers, and I am suggesting that running HassOS in a virtual machine in this manner is helpful (at least for me). It allows you to use the same tooling as for your containers, and LXD runs VMs through qemu in the background. From my perspective (although I am happy to be proven wrong or learn something new), it allows those of us who are running other containers using LXD/LXC to run HassOS in a VM but manage it using the same commands and tools.
I have also looked into running docker in an LXC container, and perhaps I was not smart enough to understand it but it seemed to cause issues for various reasons, including because both systems use cgroups and you needed to do nasty things like run in privileged containers. That is why I am more interested in running HassOS in a VM under LXD/LXC than trying to get it to work in a container.
All that being said, I really appreciate your comment and, while it is not directly relevant to what I am trying to accomplish, I will take a look at the work Whiskerz did to see if I can learn anything from it.
@mjmccans I've been running Home Assistant Core in an LXC container for a year or two. I had tried getting LXC VMs working a year ago when they were in beta, but ran into issues. If you start digging into getting Home Assistant OS running in an LXC VM, I'd be interested in helping to get that working fully and documented so that it might be officially supported at some point. Give me a ping!
Thank you for your response. I too was running Home Assistant Core for a few years in an LXC container with great results, but I was also interested in trying out the new OpenZWave Beta Add-On, and it seemed easiest to do so with Home Assistant OS. I did get the OS mostly running as an LXD VM, but it took a bunch of digging around and I was not very familiar with the build system. I then switched approaches, and I am now running Home Assistant Supervised in a Debian 10 VM managed by LXD. It needed a few tweaks here and there to get everything working (for example, to get multicast to flow to the Docker containers so that the Chromecast integration works), but it has been working well for a month or so. Let me know if you are interested in the approach I have used and I can share my tweaks if they are helpful.
@mjmccans would be great if you can share your findings about multicast and DNS … I've forked the installer from Whiskerz and modified it for LXD on a generic Debian host (in my case Ubuntu Server) here:
However, it currently seems that there are permission issues with regard to DNS and multicast:
root@homeassistant:~# docker logs hassio_dns
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] corefile.sh: executing...
[cont-init.d] corefile.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[ERROR] plugin/mdns: Failed to initialize _services._dns-sd._udp resolver: listen udp4 224.0.0.0:5353: socket: permission denied
Listen: listen tcp :53: socket: permission denied
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
So while it's currently running and accessible via its IP, DNS isn't working properly, so you can't, for example, access other LXD containers via the .lxd domain.
@thiscantbeserious Unfortunately I don't think my findings will be directly helpful for you, as I was not encountering the same error, but they may help point you in the right direction. The issue with mDNS, as I recall, is that qemu needs the interface to have trustGuestRxFilters="yes" set for mDNS traffic to flow through. I was not able to find a way to do that for LXD VMs as part of the VM's configuration, but what I was able to do was tell systemd to change the settings of the network interfaces when they are brought up so that all multicast traffic flows through. On my system, which is an Ubuntu Focal host, I created a file called /etc/systemd/network/lxd-vm-multicast.network with the following content:
[Match]
Name=mac*
[Link]
AllMulticast=true
This allowed mDNS traffic to flow into the VM, which is what I needed to get the Chromecast integration working (and it is likely required for others). The approach I took may be a bit blunt, but it works for my use case, as Home Assistant is the only VM that I run, and it has been working well for me ever since. I am always open to constructive criticism and I am by no means an expert on this topic, so I am happy to get any feedback about how I could have done this better if you or anyone else has any better ideas. It took me a lot of Googling around to figure this out, so hopefully some of the keywords above help you out (I made some progress searching "trustGuestRxFilters").
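To confirm the setting actually took effect, you can look at the host-side interface flags once the VM is up. The interface name below is an assumption; it should be whatever matches the Name= pattern in the .network file:

```shell
# ALLMULTI should show up in the flags list on the first line of output:
ip link show mac0 | head -n1

# If you added the .network file while the interface was already up,
# ask systemd-networkd to re-apply its configuration:
networkctl reload
```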
I'm sure you guys know this and have decided to manage your own VM host using Ubuntu (I think?), but Proxmox makes this so easy. Easy enough for someone like me, who had no idea what they were doing, to stand up a HAOS VM, set up regular automated backups, and host a few other services on the machine in Docker running in an LXC. It's just made for this sort of thing. All in, my host runs the same services with less CPU utilization compared to what I was doing before (Core in Docker on Ubuntu desktop).
I did have Home Assistant Core working under lxc with macvlan so it can function. However, I take it from your comment you are talking about Home Assistant Supervised in a container and that is something that I never did try so I cannot provide any help with your network-manager issue. I too would much prefer to run Home Assistant under LXD/LXC instead of a VM for performance and resource usage reasons, but I wanted to run a supported installation so I decided to go the VM route. I completely understand why the development team wants to limit which installations are supported, and I am very happy that they are still supporting Supervised on Debian 10. However, in my ideal world I would be able to run Home Assistant Supervised in an LXD container as a supported configuration.
I'd echo everything above about running in LXC. The single qemu VM I have for Home Assistant is easily the biggest use of CPU even when "idle", as compared to all the other LXC containers I run on the host.
I'm a little confused about https://github.com/home-assistant/supervised-installer - it references qemu for all the machine types, and so I thought that meant it would set up a VM. However, it looks like the installer script pulls Docker containers with qemu in their name?
Is there any downside to running the docker install directly inside an lxc container? Does that mean you lose access to the supervisor addons?
When running the qcow2 image on my machine there was no output as well, but the VM was actually running. In addition, it did not show an IP address when running "lxc list", because the lxd-agent is not running in the VM, which means that the IP address is not reported back. In fact, it took me an embarrassing amount of time to realize that the VM was actually working despite giving no output. That being said, I did follow the instructions at the following link and was able to connect to the VM through virt-viewer on a Windows machine I had around, and was able to get access to the machine that way: https://discuss.linuxcontainers.org/t/vga-console-connect/8814/5
Your experience may be different than mine, but you may want to try to look at your DHCP server or otherwise check out your network to see if the VM has in fact started up. If you can get the IP for the machine you can access the web interface. I hope that this helps.
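One way to find the IP without the lxd-agent is to read the VM's MAC address from LXD and grep your DHCP leases for it. The bridge name and lease file path below are assumptions for a stock lxdbr0 setup (on a snap install the lease file lives under /var/snap/lxd/common instead):

```shell
# volatile.eth0.hwaddr holds the MAC address LXD assigned to the VM's NIC:
MAC=$(lxc config get home-assistant volatile.eth0.hwaddr)

# Match that MAC against the dnsmasq leases for LXD's default bridge:
grep -i "$MAC" /var/lib/lxd/networks/lxdbr0/dnsmasq.leases
```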
Your way works very well even though the console and network aren't displayed, but I'm now at the same state: without the lxd-agent I can't use hardware (ConBee gateway) through the host.
I'm currently not pursuing my attempts and am running HassOS directly … since I didn't have too much time at hand; however, see this comment from whiskerz007 about the issue in regards to multicast here:
Yo guys, what is the situation? Is it possible to run HassOS in LXC? I need to pass a Google Coral to the Frigate add-on and I have some bumps with Proxmox, so I think passing the PCI device to the CT would be easier.
UPDATE:
I managed to pass the Google Coral through to a Proxmox VM.
Well, it worked when using a bridged network connection. My only issue is that I can only use lxc console NAME to run commands in the VM, and this does not work if you are writing a bash script to automate things. Does anyone have a workaround?
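Since "lxc exec" needs the lxd-agent, one scriptable workaround is plain SSH once you know the VM's IP. This is only a sketch: the address is a placeholder, and it assumes you have already enabled SSH access in the VM (for example via the SSH add-on):

```shell
# Replace with the address your DHCP server handed out to the VM:
VM_IP=192.168.1.50

# Run a non-interactive command over SSH instead of the interactive console;
# "ha core info" is the HassOS CLI command to print core status.
ssh root@"$VM_IP" "ha core info"
```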
I bumped into an issue with module loading when trying to run the script you provide on a Debian 10 Raspberry Pi 4. I can pass the overlay module to the LXC, but not aufs. I get: Error: Failed to load kernel module 'aufs': Failed to run: modprobe aufs: modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.0-0.bpo.7-arm64
I'm quite stuck as Google doesn't give me any good results. Would appreciate any help!
UPDATE
My actual issue was that not even the overlay storage driver was working for me, since I was using ZFS as my LXC storage pool. I had to switch to BTRFS (and I also skipped passing the aufs module, as I didn't have it on my arm64 Debian 5.10 kernel; it also doesn't seem to matter, as I will be using the overlay storage driver for Docker).
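For anyone hitting the same wall, a quick way to sanity-check this setup is to verify the module on the host, pass it to the container, and then confirm which storage driver Docker actually picked. The container name is an assumption:

```shell
# Check that the overlay module exists for the host kernel:
modinfo overlay >/dev/null 2>&1 && echo "overlay available"

# Tell LXD to load it for the container (linux.kernel_modules is a real
# LXD config key; "homeassistant" is a placeholder container name):
lxc config set homeassistant linux.kernel_modules overlay

# Inside the container, confirm the storage driver Docker settled on:
docker info --format '{{.Driver}}'
```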
I bumped into another brick wall, and this time it's about the USB. I cannot seem to pass the USB device I have on the host through to the Home Assistant setup.
I have added a USB like so: lxc config device add homeassistant usb1 usb vendorid=<vendorid>
Inside the container I can see that there is a new folder created something like: /dev/bus/usb/001/005
I find this quite weird. There is no serial folder inside /dev, and I'm not sure why Home Assistant cannot detect the USB device. Would appreciate some help!
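For serial dongles, mapping the tty device node directly often works better than a raw usb passthrough, since Home Assistant looks for nodes like /dev/ttyUSB* or /dev/ttyACM* rather than /dev/bus/usb entries. A sketch, where the device name and host path are assumptions (check "ls -l /dev/serial/by-id" on the host for the real path):

```shell
# Map the host's serial device node into the container as a unix-char device;
# required=false lets the container start even if the dongle is unplugged.
lxc config device add homeassistant zigbee unix-char \
    path=/dev/ttyUSB0 \
    required=false
```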