Running HassOS as an LXD/LXC virtual machine

Hi,

I am trying to launch a VM from the qcow2 image, but nothing happens after the VM starts.

@mjmccans, can you share how you run it?

When running the qcow2 image on my machine there was no output either, but the VM was actually running. In addition, it did not show an IP address when running ‘lxc list’ because the lxd-agent is not running in the VM, which means the IP address is not reported back. In fact, it took me an embarrassing amount of time to realize that the VM was actually working despite giving no output. That said, I followed the instructions at the following link, connected to the VM through virt-viewer on a Windows machine I had around, and got access to the machine that way: https://discuss.linuxcontainers.org/t/vga-console-connect/8814/5
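
If it helps, the gist of that link is the VGA console feature; a minimal sketch, assuming a reasonably recent LXD and a local SPICE client such as remote-viewer, with ‘ha’ as a placeholder instance name:

# attach a graphical (SPICE/VGA) console to the VM instead of the text console
lxc console ha --type=vga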

Your experience may be different than mine, but you may want to try to look at your DHCP server or otherwise check out your network to see if the VM has in fact started up. If you can get the IP for the machine you can access the web interface. I hope that this helps.

Thank you @mjmccans

Your approach works very well, even though the console and network information aren’t displayed, but I’m now stuck at the same point: without the lxd-agent I can’t pass hardware (a ConBee gateway) through from the host.

I’m currently not pursuing my own attempts and am running HassOS directly … since I didn’t have too much time at hand. However, see this comment from whiskerz007 about the issue regarding multicast here:

Hey guys, what is the situation? Is it possible to run HassOS in LXC? I need to pass a Google Coral to the Frigate addon and I have hit some bumps with Proxmox, so I think passing the PCI device to the CT would be easier.
UPDATE:
I managed to pass the Google Coral to a Proxmox VM.

Hi guys, so far I managed to run the operating system on an LXD virtual machine but I still have some problems. This is what I did so far:

After installing LXD using Snap and initializing it:

  1. Downloaded the HA OS qcow2 image from here
  2. Created a metadata.yaml file with the following contents:

     architecture: x86_64
     creation_date: 1624888256
     properties:
       description: Home Assistant image
       os: Debian
       release: buster 10.10

  3. Compressed the metadata.yaml file: tar -cvzf metadata.tar.gz metadata.yaml
  4. Created the LXC image: lxc image import metadata.tar.gz haos_ova.qcow2 --alias hassio
  5. Created the LXC VM from the imported image (see the consolidated sketch after this list): lxc launch hassio ha --vm -c security.secureboot=false
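
Putting steps 3–5 together, here is a minimal sketch of the whole sequence; the file and instance names are placeholders, and the downloaded image is often compressed (e.g. haos_ova-<version>.qcow2.xz), so you may need to decompress it first:

# package the metadata, import it together with the qcow2 image, and boot the VM
tar -cvzf metadata.tar.gz metadata.yaml
lxc image import metadata.tar.gz haos_ova.qcow2 --alias hassio
lxc launch hassio ha --vm -c security.secureboot=false
lxc console ha        # detach with Ctrl+a then q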

At this point I can do lxc console ha to get into the HA OS VM’s console. Two problems I have right now.

  1. Once I get into the console and log in as root (without a password), I go straight into the # prompt, skipping the hassio prompt.
  2. No IP address is shown for the instance from outside the VM. I need this, of course, to access the Home Assistant service in the browser.

Any help with this is greatly appreciated! I also posted here for assistance:

VM from disk Image - LXD - Linux Containers Forum
LXD create VM for Hassio · Issue #118 · whiskerz007/proxmox_hassos_install (github.com)
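
Side note on the second problem above: the address can also be checked from inside the VM console itself. A minimal sketch, assuming the standard HAOS root shell and the ha CLI are available (command availability may vary by release):

# from the root shell (# prompt) inside the HAOS console:
ip addr show            # look for an address on the enp*/eth* interface
ha network info         # the same information via the Home Assistant CLI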

Well, it worked when using a bridged network connection. My only issue is that I can only use lxc console NAME to run commands in the VM, and this does not work if you are writing a bash script to automate things. Does anyone have a workaround?
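
One possible workaround, sketched below, is to script against SSH rather than the console. This assumes you have enabled SSH access to the VM (for example via the Terminal & SSH add-on) and know its IP address; the address, user, and command here are placeholders:

# run a command in the HAOS VM non-interactively over SSH
ssh root@192.168.1.50 "ha core info"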

I bumped into an issue with module loading when trying to run the script you provided on a Debian 10 Raspberry Pi 4. I can pass the overlay module to the LXC, but not aufs. I get:
Error: Failed to load kernel module 'aufs': Failed to run: modprobe aufs: modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.0-0.bpo.7-arm64

I’m quite stuck as google doesn’t give me any good results. Would appreciate any help!

UPDATE

My actual issue is that not even the overlay storage driver was working for me, since I was using ZFS as my LXC storage pool. I had to switch to BTRFS (and I also skipped passing the aufs module, as I didn’t have it in my arm64 Debian 5.10 kernel; it doesn’t seem to matter anyway, since I will be using the overlay storage driver for Docker).
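
In case anyone else hits the same modprobe failure, a minimal sketch of asking LXD to load only the modules that actually exist on the host kernel (the instance name ‘hassio’ is a placeholder):

# only request the overlay module; drop aufs if your kernel doesn't ship it
lxc config set hassio linux.kernel_modules overlay
lxc restart hassio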

I bumped into another brick wall, and this time it’s about the USB. I cannot seem to passthrough the USB I have on host to the Home Assistant setup.

I have added a USB like so:
lxc config device add homeassistant usb1 usb vendorid=<vendorid>

Inside the container I can see that a new device node is created, something like /dev/bus/usb/001/005. I find this quite odd: there is no serial device under /dev. I’m not sure why Home Assistant cannot detect the USB device. I would appreciate some help!

How did this all end up working for you in the end? Also, why not just run the supervised installer inside an LXC Debian image?

Try adding your USB device like this:

lxc config device add homeassistant ttyusb unix-char path=/dev/ttyACM0
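
A couple of hedged notes on that: /dev/ttyACM0 is just a guess at how the dongle enumerates on the host, and unix-char devices apply to containers (a VM would generally need the usb device type instead). Something like:

# find the serial device the dongle creates on the host
ls -l /dev/ttyACM* /dev/ttyUSB*
# pass it through and confirm it shows up inside the container
lxc config device add homeassistant ttyusb unix-char path=/dev/ttyACM0
lxc exec homeassistant -- ls -l /dev/ttyACM0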

I figured it’s much simpler to just run Hass OS … and the Portainer add-on for my purposes.

Whenever I need anything, I can simply pull the image from Docker Hub and easily create a container alongside Home Assistant on the same host.

Messing with LXD was a nice try, but far from stable/complete … Home Assistant is moving much too fast for that, so it’s best to stick as close as possible to their supported installations, I guess.

I’d use their VM images next time instead if I wanted to run it on something more powerful.

I got Home Assistant OS running in LXD without too much difficulty, and wrote up my notes here:

Hopefully that’s of help to anyone who finds this thread looking for a guide.

Very nice! I actually asked about this a month back and @tteck (famous for his Proxmox setup repo) told me it was technically impossible, so I gave up and put HASS in a VM. I don’t see why LXD would be materially different from Proxmox’s implementation of base LXC.

You didn’t set that stuff up, but I see no reason why USB passthrough and nested docker (HASS OS “addons” are actually docker containers) wouldn’t work too.

Edit: Nevermind, you’re running a VM, not a container. Womp womp.

I’ve updated the title of the blog post to hopefully avoid future container vs VM confusion.

Totally my fault, you did say VM. I keep forgetting LXD does both lxc and VMs now. Used to run it myself before I moved to proxmox.

Hey!
I am a beginner in the system containers world, and I’m coming here with a question that I believe many of you have already been thinking about:

  • Would it make sense to run HAOS inside LXC (a system container)? Or is that not possible / highly not recommended?

Simply put, I like that LXC requires fewer resources.

This worked really well for me, despite my having no lxd experience.

My challenge now is that HA only has a bridged (NAT) network connection, and lxd won’t allow port forwarding to VMs.

Does anyone have any suggestions? I feel like I’m missing something.

Do you mean you want to forward a port from your router to HA for external access? If so, all I think you need is for the HA VM to have its own IP address on your LAN, and the rest is a question of configuring your router. If the HA VM has a bridged-mode network connection, then it should have received its own IP address. I understand bridged-mode and NAT as being mutually exclusive - you are using either one or the other, not both.

Thanks Sean, I wasn’t at all clear enough in my question.

What I meant is that I want my LXD VM of HAOS to be visible externally on the network so I can get to the web page from other machines, and I suspect it will also help with device discovery.

Currently my VM has network access but only through the lxd bridge which does NAT to get out onto the network. It thus has an IP address that falls within the range allocated to the lxd controlled network, but is not visible outside the host it is running on.

The way I’d imagined making my HAOS VM visible on the network was to forward ports on the host machine to the HAOS instance, but this apparently isn’t possible because the instance is a VM. This seems to be a limitation of LXD.

I suspect now my options are

  1. to use macvlan to give one of my host’s network adapters two IP addresses, but this requires the rest of the network equipment to accept promiscuous mode.
  2. to dedicate the wifi device to the haos instance

I’m hoping there’s a better option though, and imagined that someone else may have solved this problem already.

Thanks again for your help, and for the great instructions for getting this far.

I don’t think we are on the same page regarding bridge mode vs NAT. I have HAOS running in an LXD VM that has got a real LAN IP address (10.0.0.206 as you can see in the screenshot in my blog post) via DHCP from my router. I did this by configuring bridge mode networking. There is no NAT happening in LXD or on the server it’s running on, i.e., every container and VM gets its own IP address on the LAN.

I split out the steps I took to get bridge mode networking set up into its own blog post. There’s a link in the blog post above, but here’s the direct link:

The first part covers configuring bridge networking as an alternative to macvlan.
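
For anyone who wants the gist without clicking through, a rough sketch of the idea, assuming the host already has a bridge interface called br0 (set up via netplan, systemd-networkd, or similar) and that ‘ha’ is the instance name; if the default profile already supplies an eth0 NIC, you may need to remove or override it first:

# attach the VM's NIC to the host bridge so it gets its own LAN IP via DHCP
lxc config device add ha eth0 nic nictype=bridged parent=br0 name=eth0
lxc restart ha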

Is that of any help?