When running the qcow2 image on my machine there was no output either, but the VM was actually running. It also did not show an IP address in ‘lxc list’, because the lxd-agent is not running in the VM, so the IP address is never reported back. In fact, it took me an embarrassing amount of time to realize that the VM was working despite giving no output. That said, I followed the instructions at the following link and was able to connect to the VM through virt-viewer on a Windows machine I had around, and got access to the machine that way: https://discuss.linuxcontainers.org/t/vga-console-connect/8814/5
Your experience may differ from mine, but you may want to look at your DHCP server or otherwise check your network to see whether the VM has in fact started up. If you can get the VM’s IP address, you can access the web interface. I hope that this helps.
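For what it’s worth, the commands I used to poke at an agentless VM looked roughly like this (the VM name `haos` is just an example, substitute your own):

```shell
# Attach to the VM's graphical console instead of the serial one;
# this needs a SPICE client such as virt-viewer installed locally.
lxc console haos --type=vga

# The agentless VM reports no IP to `lxc list`, but you can read its
# MAC address and match it against your router's DHCP lease table.
lxc config get haos volatile.eth0.hwaddr
```

These are LXD configuration/inspection commands, so they obviously need a running LXD daemon and an existing VM to do anything.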
Your way works very well, even though the console and network information aren’t displaying, but I’m now in the same state: without the lxd-agent I can’t use hardware (a ConBee gateway) through the host.
I’m currently not pursuing my attempts and am running HassOS directly instead, since I didn’t have too much time at hand. However, see this comment from whiskerz007 about the issue regarding multicast here:
Yo guys, what’s the situation? Is it possible to run HassOS in LXC? I need to pass a Google Coral to the Frigate addon and I’ve hit some bumps with Proxmox, so I think passing the PCI device to the CT would be easier.
UPDATE:
I managed to pass the Google Coral through to a Proxmox VM.
Well, it worked when using a bridged network connection. My only issue is that I can only use ‘lxc console NAME’ to run commands in the VM, and that does not work if you are writing a bash script to automate things. Does anyone have a workaround?
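Since `lxc exec` relies on the lxd-agent (which the HAOS image doesn’t ship), one workaround is to enable the SSH add-on inside Home Assistant and drive the VM over SSH from your script instead. A minimal sketch, where the VM’s IP address is an assumption you’d replace with the one from your DHCP server:

```shell
#!/usr/bin/env sh
# Assumed address of the HAOS VM; substitute your own.
VM_IP="${VM_IP:-192.168.1.50}"

# Run a command inside the VM non-interactively, suitable for automation.
# BatchMode makes ssh fail fast instead of prompting for a password,
# so the script needs key-based auth set up in the SSH add-on.
run_in_vm() {
    ssh -o BatchMode=yes "root@${VM_IP}" "$@"
}

# Example (commented out, needs a reachable VM):
# run_in_vm ha core info
```

This sidesteps the console entirely, at the cost of a one-time SSH key setup inside Home Assistant.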
I bumped into an issue with module loading when trying to run the script you provide on a Debian 10 Raspberry Pi 4. I can pass the overlay module to the LXC, but not aufs. I get: Error: Failed to load kernel module 'aufs': Failed to run: modprobe aufs: modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.0-0.bpo.7-arm64
I’m quite stuck, as Google doesn’t give me any good results. Would appreciate any help!
UPDATE
My actual issue was that not even the overlay storage driver was working for me, since I was using ZFS as my LXC storage pool. I had to switch to BTRFS (and also skipped passing the aufs module, as I didn’t have it in my arm64 Debian 5.10 kernel; that also doesn’t seem to matter, since I will be using the overlay storage driver for Docker).
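For reference, the switch boils down to something like the following; the pool name, size, and image here are made-up examples, not the exact values I used:

```shell
# Create a new BTRFS-backed storage pool (loop-backed file by default).
lxc storage create btrfspool btrfs size=30GiB

# Launch the Home Assistant container on the new pool, so Docker inside it
# can use the overlay2 storage driver (overlayfs historically did not
# stack on top of ZFS-backed container filesystems).
lxc launch images:debian/11 homeassistant -s btrfspool
```

These are LXD configuration commands and need a running LXD daemon; existing containers would need to be recreated or moved onto the new pool.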
I bumped into another brick wall, and this time it’s about USB. I cannot seem to pass the USB device I have on the host through to the Home Assistant setup.
I have added a USB like so: lxc config device add homeassistant usb1 usb vendorid=<vendorid>
Inside the container I can see that a new device node is created, something like: /dev/bus/usb/001/005
I find this quite weird: there is no serial device under /dev. I’m not sure why Home Assistant cannot detect the USB device. Would appreciate some help!
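In case it helps someone else, two things worth trying (device names and IDs below are placeholders, check `lsusb` for yours): pin the device down with both vendor and product ID, and, if the stick exposes a serial interface, pass the tty node through directly, since most Home Assistant integrations open /dev/ttyACM* or /dev/ttyUSB* rather than the raw /dev/bus/usb node:

```shell
# Match the exact device, not just the vendor.
lxc config device add homeassistant usb1 usb vendorid=1234 productid=5678

# For a container (this device type is not available for VMs), a unix-char
# device can expose the serial node that the integrations actually open.
lxc config device add homeassistant zigbee unix-char \
    source=/dev/ttyACM0 path=/dev/ttyACM0
```

Again, these are LXD configuration commands against a running daemon; adjust the instance name, IDs, and tty path to your hardware.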
Figured it’s much simpler to just run Hass OS … and the Portainer addon, for my purposes.
Whenever I need anything I can simply pull the image from Docker Hub and easily create a container alongside Home Assistant on the same host.
Messing with LXD was a nice try, but it’s far from stable/complete … Home Assistant is moving much too fast for that, so it’s best to stick as close to their supported installations as possible, I guess.
I’d use their VM images instead next time if I wanted to run it on something more powerful.
Very nice! I actually asked about this a month back and @tteck (famous for his Proxmox setup repo) told me it was technically impossible, so I gave up and put HASS in a VM. I don’t see why LXD would be materially different from Proxmox’s implementation of base LXC.
You didn’t set that stuff up, but I see no reason why USB passthrough and nested docker (HASS OS “addons” are actually docker containers) wouldn’t work too.
Edit: Nevermind, you’re running a VM, not a container. Womp womp.
Do you mean you want to forward a port from your router to HA for external access? If so, all I think you need is for the HA VM to have its own IP address on your LAN, and the rest is a question of configuring your router. If the HA VM has a bridged-mode network connection, then it should have received its own IP address. I understand bridged-mode and NAT as being mutually exclusive - you are using either one or the other, not both.
Thanks Sean, I wasn’t at all clear enough in my question.
What I meant is that I want my LXD VM of haos to be visible externally on the network, so I can get to the web page from other machines; I suspect it will also help with device discovery.
Currently my VM has network access but only through the lxd bridge which does NAT to get out onto the network. It thus has an IP address that falls within the range allocated to the lxd controlled network, but is not visible outside the host it is running on.
The way I’d imagined making my haos VM visible on the network was to forward ports from the host machine to the haos instance, but this apparently isn’t possible because the instance is a VM. This seems to be a limitation of LXD.
I suspect now my options are
to use macvlan to give one of my host’s network adapters two IP addresses, but this requires the rest of the network equipment to accept promiscuous mode.
to dedicate the wifi device to the haos instance
I’m hoping there’s a better option though, and imagined that someone else may have solved this problem already.
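One option I later found in the LXD documentation: proxy devices do work with VMs, but only in NAT mode, which in turn requires pinning a static IP on the instance’s NIC first. A sketch, with the instance name and addresses as assumptions from my setup:

```shell
# Pin the VM's bridge-side address so the proxy knows where to forward.
lxc config device override haos eth0 ipv4.address=10.140.222.100

# Forward port 8123 on the host to the VM; nat=true is required for VMs.
lxc config device add haos hass-ui proxy \
    listen=tcp:0.0.0.0:8123 connect=tcp:0.0.0.0:8123 nat=true
```

This only exposes the ports you forward, unlike bridged networking, which gives the VM its own LAN address.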
Thanks again for your help, and for the great instructions for getting this far.
I don’t think we are on the same page regarding bridge mode vs NAT. I have HAOS running in an LXD VM that has got a real LAN IP address (10.0.0.206 as you can see in the screenshot in my blog post) via DHCP from my router. I did this by configuring bridge mode networking. There is no NAT happening in LXD or on the server it’s running on, i.e., every container and VM gets its own IP address on the LAN.
I split out the steps I took to get bridge mode networking set up into its own blog post. There’s a link in the blog post above, but here’s the direct link:
The first part covers configuring bridge networking as an alternative to macvlan.
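If it’s useful, the LXD side of that setup boils down to pointing the instance’s NIC at the host bridge; `br0` and the instance name `haos` here are assumptions from my setup:

```shell
# Assumes a host bridge br0 already exists (the linked post covers
# creating it at the OS level).
lxc config device override haos eth0 nictype=bridged parent=br0

# Restart the VM so it runs DHCP against the real LAN.
lxc restart haos
```

After this, the VM should pick up an address from your router’s DHCP server rather than from the LXD-managed bridge.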
Thanks Sean. I think you’re right, there’s something I’m not understanding about the lxd networking process, as my VM is only getting internal (lxd managed) IP addresses, and isn’t visible to my router. I’ll have a look at the link you sent.