Install Home Assistant OS with KVM on Ubuntu headless (CLI only)

I’m a newb, playing in late life with Linux and Proxmox etc. I had Proxmox doing its thing but wanted to try Ubuntu desktop, KVM and virt-manager with a GUI.

Difficult, frustrating days of reading, and finally, prior to your answer, I found another guide. I didn’t have a clue what I was doing, but I dived in head first after some hesitation. (Didn’t want to have to reinstall Ubuntu “again”, lol.)

Loaded the HA VM in virt-manager and it was assigned a dynamic IP from my router, and I can access it outside the host machine, which was my objective. How I got there, being a newb and network challenged, I don’t know. :person_shrugging::joy: I bookmarked this guide just in case. Now, static IPs. :thinking:

So my machine lived for about 10 days.

It started to become really slow; load went up on the VM and on my host machine. I checked disk usage, memory usage, docker stats, logs etc. but couldn’t figure out what was causing it.
It then had trouble starting, ending up in maintenance mode… I checked the partitions with fsck, all good… successfully booted it again, but it was still slow and under heavy load (~5 on 4 CPUs).
Turned it off, mounted the qcow2 with qemu-nbd to run additional fscks, to no avail.
At this point it wouldn’t boot at all. virsh said the domain started, but there was no output to the console, no error messages, no logs from qemu-kvm.service or “machine-qemu\x2d14\x2dhaos.scope”, whatever that means.

Gave up, mounted the nbd disk and fished out my config. Creating a new VM now.
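For anyone who ends up in the same spot, a rough sketch of mounting a qcow2 via qemu-nbd (device and partition numbers are assumptions; check the fdisk output for your image):

sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/haos.qcow2
sudo fdisk -l /dev/nbd0             # find the data partition
sudo mount /dev/nbd0p8 /mnt         # partition number is a guess; hassos-data is usually the large ext4 one
# ...copy out what you need, then clean up:
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0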

If only I had the possibility to take a f-ing snapshot!! :rage:
Guess I’ll have to shut HA down once every night to back up the qcow2…
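If anyone wants to do the same, a minimal sketch of such a nightly job (VM name and paths are assumptions):

#!/bin/sh
# Cleanly stop the VM, copy the disk image, start it again.
virsh shutdown haos
while virsh domstate haos | grep -q running; do sleep 5; done
cp /var/lib/libvirt/images/haos.qcow2 /backup/haos-$(date +%F).qcow2
virsh start haos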

Anyone else had such luck?
If nothing else, let this be a warning to back your stuff up and have a quick restore-path… :roll_eyes:

Update:
Found this log whilst trying to clean up after the old VM:
/var/log/libvirt/qemu/haos.log:

2024-05-01T06:58:15.052247Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-register [bit 8]
2024-05-01T06:58:15.052255Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-vid [bit 9]
2024-05-01T06:58:15.052258Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48DH).vmx-posted-intr [bit 7]
2024-05-01T06:58:15.052261Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
2024-05-01T06:58:15.052264Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13]
2024-05-01T06:58:15.052760Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-register [bit 8]
2024-05-01T06:58:15.052767Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-vid [bit 9]
2024-05-01T06:58:15.052770Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48DH).vmx-posted-intr [bit 7]
2024-05-01T06:58:15.052772Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
2024-05-01T06:58:15.052775Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13]
KVM internal error. Suberror: 3
extra data[0]: 80000b0e
extra data[1]: 31
extra data[2]: 182
extra data[3]: 800000
EAX=c0000033 EBX=756e6547 ECX=c0000080 EDX=00000000
ESI=fffce924 EDI=00005042 EBP=fffcc000 ESP=00000000
EIP=fffffe69 EFL=00010002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
CS =0010 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
FS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
GS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS64-busy
GDT=     00000000ffffff80 0000001f
IDT=     0000000000000000 0000ffff
CR0=c0000033 CR2=00000000fffffe69 CR3=0000000000800000 CR4=00000660
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000500
Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??

I guess that looks bad :person_shrugging:

I’m thinking of migrating from a Docker to a VM installation.
I tried virt-install and it all works well.
One doubt: if I have to restart the host machine (for example, after an update), do I first have to stop the VM?

Short answer: I don’t know.

Medium (quite unsatisfying) answer: I just reboot; it’s worked fine for years, even when there have been power cuts.

Longer answer:

I think you can configure VMs for automatic shutdown, but you might also be able to leverage libvirt-guests which, as I understand it, will by default suspend a VM when the host shuts down and restore it to its pre-shutdown state.
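Its behaviour can be tuned in /etc/default/libvirt-guests on Ubuntu/Debian; a sketch of the relevant knobs (the values shown are illustrative, check the comments in your own file):

ON_BOOT=start          # or "ignore" to not touch guests at boot
ON_SHUTDOWN=suspend    # or "shutdown" for a clean guest shutdown instead of a suspend
SHUTDOWN_TIMEOUT=300   # seconds to wait for guests before giving up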

I can see that it is running on my machine (output of service libvirt-guests status).

● libvirt-guests.service - Suspend/Resume Running libvirt Guests
     Loaded: loaded (/lib/systemd/system/libvirt-guests.service; enabled; vendor preset: enabled)
     Active: active (exited) since Wed 2024-04-10 10:08:18 UTC; 1 month 2 days ago
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 1636 (code=exited, status=0/SUCCESS)
        CPU: 16ms

Notice: journal has been rotated since unit was started, output may be incomplete.

Since I haven’t made any changes, I think it automatically suspends and restores VMs when I shut down/reboot the host.

But again, I’ve had several power cuts to the server over the years, and had no issues with HA when I booted it up again, so I haven’t really worried that much about it lately. I just make sure I always have backups for peace of mind.

When I upgrade the host and in turn need to reboot it, I first go into the HA VM console and enter ha host shutdown. Although I use virt-manager in my setup to gain access to the console, you should be able to use virsh console NAME-OF-VM, log in as root with no password and enter ha host shutdown. It takes a few minutes, and it may complain along the way, but eventually HA stops running, leaving the VM in the off state. If virsh doesn’t work, you could use the HA UI->System->Hardware->Power Button->Advanced Options->Shutdown the system and give it a few minutes to complete.
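For anyone following along, the rough sequence on the host (the VM name is whatever you called yours):

virsh console NAME-OF-VM    # attach to the HAOS console; press Enter, log in as root
ha host shutdown            # run inside the HAOS console; takes a few minutes
virsh domstate NAME-OF-VM   # back on the host: should eventually report "shut off"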

I used to do a straight VM power-down, and then I started using the guest agent with virt-manager, which fed a signal to HAOS to shut down properly. Either way, on startup HA would sometimes complain about its database, saying HA may not have been shut down properly, but it worked nevertheless. However, ha host shutdown seems to do a clean shutdown without affecting the database.

Tnx.

I’ve been using the VM for a few days and it all seems OK.

I didn’t create the “pool” because I didn’t understand what it was for :slight_smile:

I ran virt-install with these parameters:

virt-install --name ha --description "Home Assistant OS" --os-variant=generic --ram=4096 --vcpus=2,maxvcpus=4 --disk /var/lib/libvirt/images/ha.qcow2,format=qcow2,bus=virtio --import --graphics none --boot uefi --network bridge=br0,model=virtio,mac=52:54:00:11:11:11 --noautoconsole --cpu host-passthrough --virt-type kvm

I ran virt-install and virsh autostart as root: running the VM as a user caused me (Arch Linux) some issues with autostart, because at boot the daemon runs as root and couldn’t find the user’s VM…
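For reference, marking the VM for autostart (using the VM name from above):

sudo virsh autostart ha             # start the VM whenever libvirtd starts
sudo virsh autostart --disable ha   # undo it later if needed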

It all seems OK!

The next step could be to run a MariaDB container on the host and move the “HA VM” DB there… I don’t know if it would be useful.
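If anyone tries it, a rough sketch of the host side (container name, credentials and paths below are placeholders):

docker run -d --name mariadb \
  -p 3306:3306 \
  -e MARIADB_ROOT_PASSWORD=changeme \
  -e MARIADB_DATABASE=homeassistant \
  -e MARIADB_USER=hass \
  -e MARIADB_PASSWORD=changeme \
  -v /srv/mariadb:/var/lib/mysql \
  mariadb:latest

Then point HA’s recorder at it in configuration.yaml with a db_url along the lines of mysql://hass:changeme@<host-ip>:3306/homeassistant?charset=utf8mb4.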

Thank you for this!

Can confirm everything works on latest Ubuntu 24.04.


I’ve been trying to get this to work on Ubuntu 24.04 for hours but the networking won’t work.

UPDATE: Here are the steps that solved my issue. I got it working:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp1s0]
      dhcp4: true                        # Enable DHCP
      dhcp6: false                       # Disable IPv6 DHCP

Changed the MACAddressPolicy in /usr/lib/systemd/network/99-default.link from persistent to none as described here.

Added the bridge to the Docker daemon.json. Not sure if this is needed, but it’s working now.
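That is, something like this in /etc/docker/daemon.json (restart the Docker daemon afterwards):

{
  "bridge": "br0"
}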


Hi, thank you for this decent post with all the information - which unfortunately does not seem to work in my case.

Situation (and objective):

  • run numerous services in docker, e.g. pihole, rsyslog, nginx, etc.
  • run Home Assistant (and other services) in KVM
  • host OS: Ubuntu 24.04

My problem:
(a - “original” tutorial) setting firewall rules (iptables -P FORWARD ACCEPT, iptables -A DOCKER-USER -i br0 -o br0 -j ACCEPT, etc.)
=> KVM does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest. Not even when setting a static IP in the guest.
(b - Install Home Assistant OS with KVM on Ubuntu headless (CLI only) - #89 by brady) setting “bridge”: “br0” or “iptables”: false in daemon.json
=> Docker services do not start up properly; at least half of the services need to be started multiple times to come up
=> KVM does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest. Not even when setting a static IP in the guest.
(c - Install Home Assistant OS with KVM on Ubuntu headless (CLI only) - #14 by oldrobot) “disabling” netfilter (see the sysctl sketch after this list)
=> No Docker services come up, even the ones without networking!
=> KVM does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest. Not even when setting a static IP in the guest.
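For reference, “disabling netfilter” on the bridge usually means sysctls like these (my assumption of what the linked post configures; requires the br_netfilter module to be loaded):

net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0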

My config:

/etc/netplan/01-interfaces.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.76.4/24]
      routes:
        - to: default
          via: 192.168.76.254
    eno2:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      dhcp4: no
      dhcp6: no
      interfaces: [ eno2 ]

/etc/libvirt/qemu/networks/vm.xml

<network>
  <name>vm</name>
  <uuid>450538c6-d629-4c0d-bd8f-572de20163e8</uuid>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>

I am quite frustrated and close to going back to VirtualBox, which is essentially what I wanted to get rid of.

I’m wondering if it might be better if br0 had an IP address in the same subnet as your HAOS VM.

I’ll have to say I don’t see how br0 would be issuing DHCP requests if the netplan has dhcp set to no (unless the backend is somehow not working properly)

Maybe clarify what you mean… do you mean setting a static IP on br0? (Sketched below.)
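For reference, giving br0 a static address in the netplan above would look roughly like this (the address is an example in your subnet; leave the default route on eno1 as it is):

  bridges:
    br0:
      interfaces: [ eno2 ]
      dhcp4: no
      dhcp6: no
      addresses: [192.168.76.5/24]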

I’ve been tinkering with this for quite some time now and still haven’t got it working:

fresh Ubuntu 24.04
my netplan:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: true                        # Enable DHCP
      dhcp6: false                       # Disable IPv6 DHCP

/usr/lib/systemd/network/99-default.link


[Match]
OriginalName=*

[Link]
NamePolicy=keep kernel database onboard slot path
AlternativeNamesPolicy=database onboard slot path
MACAddressPolicy=none

/etc/docker/daemon.json

{
  "bridge": "br0"
}

ip a

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 
    altname enp0s31f6
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 
    inet 192.168.1.198/24 metric 100 brd 192.168.1.255 scope global dynamic br0
       valid_lft 40816sec preferred_lft 40816sec
    inet6 fd6e:3272:702b::d71/128 scope global dynamic noprefixroute 
       valid_lft 40819sec preferred_lft 40819sec
    inet6 fd6e:3272:702b:0:741f:3aff:fe29:86ca/64 scope global mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::741f:3aff:fe29:86ca/64 scope link 
       valid_lft forever preferred_lft forever
4: fake0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 
    inet 1.2.3.4/24 brd 1.2.3.255 scope global noprefixroute fake0
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

Command to install the image:

virt-install --name home-assistant --ram 4096 --vcpus 2 --disk path=/mnt/volume_01/vm/home-assistant/hoas-raw.img,format=raw --os-type linux --os-variant generic --network bridge=br0 --graphics none  --boot uefi --hostdev 001.002 --import

I also stopped all firewall services:

sudo systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.
sudo systemctl stop nftables
sudo systemctl stop firewalld
Failed to stop firewalld.service: Unit firewalld.service not loaded.
sudo systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.

what am I missing?

What is not working… networking?
When you entered ip a, was the VM running? There should be an interface, something like vnet0, enslaved to br0 (with br0 as master).

Sorry for the late reply.
The problem was that the VM got no IP.
The interface enslaved to br0 is eno1.
It took some hours, and now the VM has an IP. I did not change anything, but it took forever.
vnet0 is also up:

13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether 
    inet6 
       valid_lft forever preferred_lft forever

Now I only have to fix my Docker network and I’m happy. Thanks!

The thing that fixed routing to the VM for me was adding one simple iptables rule:

iptables -A FORWARD -d -j ACCEPT

This will allow traffic coming from outside to be forwarded to the VM. I have also set a static IP from inside the VM.

Also check:

cat /proc/sys/net/ipv4/ip_forward

is set to 1.
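To enable it immediately and persist it across reboots (the sysctl.d file name is arbitrary):

sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf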

Correct me if I’m wrong, but I think yours is a case where the setup has multiple network interfaces (each with its own separate IPv4 subnet), and your HA VM is bridged to one of them; in that case, yes, you would have to IP-forward/route from the other interfaces to reach your VM. The reason I bring this up is that for users who only have a single network interface with their VM bridged to it, these IP-forwarding configs should not be needed.

After many hours of trying and failing to get a KVM instance of Home Assistant running properly on Ubuntu Server 22.04, I finally found these instructions. I followed them pretty closely and have had success! I needed to change some permissions on the /var/lib/libvirt/images/ directory in order to access it properly, and did not need the storage pool or the Docker configuration steps.

Thank you so much for these instructions!


Hi,
I have some updates to these instructions and more details on why CPU usage may be higher and how to prevent it.

  • First of all, the newest HAOS images (I use the HAOS 15.2 image) have instructions to install headless, and login via the console also works. So if you enter virsh console <vmname>, you can successfully connect to the console and log in as root without a password. It shows nothing after connecting; just press Return once to show the login prompt.
  • If you follow the steps on the HASS installation page and/or this howto, the VM has no sound card on Ubuntu 24.04, so removing it is not required. Still, the VM has high CPU usage as soon as you add Zigbee devices by passing the stick through as a USB device. I actually found out why this happens and what you can do!

Basically the problem is that the above instructions, and also the ones from the HASS installation manual, create a VM with a very outdated hardware version using the Intel 440FX chipset. This has no PCI Express and old USB 1.1/2.0 controllers with large virtualization overhead.

To get an updated machine with modern hardware, you should pass “q35” as the machine type when creating the machine (add --machine q35 to the virt-install command line). You can change it later, but that requires manual XML editing (removing all PCI devices and addresses and waiting for libvirt to add them back), so it is better to use the q35 type from the beginning. After you have done this, everything is fast from the start.
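For example, taking the virt-install command posted earlier in this thread and just adding the machine type (everything else unchanged):

virt-install --name ha --description "Home Assistant OS" --os-variant=generic --machine q35 --ram=4096 --vcpus=2,maxvcpus=4 --disk /var/lib/libvirt/images/ha.qcow2,format=qcow2,bus=virtio --import --graphics none --boot uefi --network bridge=br0,model=virtio,mac=52:54:00:11:11:11 --noautoconsole --cpu host-passthrough --virt-type kvm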

I still have another recommendation: instead of adding the Zigbee USB stick as a USB device, it is better (and again faster) to just add a COM2 port to the VM, passing the unique device path into the config:

    <serial type='dev'>
      <source path='/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_d00c11e35312ef1180786db8bf9df066-if00-port0'/>
      <target type='isa-serial' port='1'>
        <model name='isa-serial'/>
      </target>
    </serial>

Just replace the path and unique identifier of your device with the correct path on the host system; port='1' is the port number (starting at 0; the console is 0).

I strongly recommend not adding a path like /dev/ttyS... to the XML file, because the enumeration of COM ports changes depending on how fast the Linux kernel finds the devices while booting the hypervisor! So check the /dev/serial/by-id/ directory, copy-paste the correct device file names, and insert those as <source> into the libvirt XML! This is different for the emulated ISA serial port slots: they are hardcoded, so inside the VM static names are fine (see below).
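i.e., on the host:

ls -l /dev/serial/by-id/
# every entry is a stable symlink to the current ttyUSB/ttyACM device node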

After that the VM sees a second serial port that points directly to the Zigbee stick. This has much less virtualization overhead, especially for this heavily used device. After rebooting the VM, the current HAOS does not find the device anymore (because the USB device is gone). To fix this in ZHA, go to the Zigbee ZHA settings and it will automatically ask you to update the device path. Choose /dev/ttyS1 in the popup window (this is the second COM port; adapt it to your XML config if you have more devices).

Keep in mind, you can do this with all USB sticks that expose a serial interface. If you have more than three, you can add the extra ones as “pci-serial” instead of “isa-serial”: the default ISA bus only supports up to 4 serial ports, and the first one is used for the console (virsh console <vm>).

This last trick was inspired by Alternative KVM HAOS USB zigbee adapter config, but using “pci-serial” is not needed unless you have too many devices. Sticking with the default ISA serial controller on the emulated mainboard spares resources even more.

Basically you can implement both options (the q35 machine type, the isa-serial pass-through for Zigbee, or both), but one of the two is enough. I recommend starting with the q35 machine type from the beginning and using the direct serial pass-through as an extra, non-mandatory optimization. IMHO, I like it more that the HAOS VM just uses hardcoded COM ports and the mapping of which device goes to which virtual COM port is done in the hypervisor, outside of HASS. This makes it easier when you replace hardware: you just need to adapt the path to the serial port in the hypervisor’s XML file.

Uwe


Thanks so much for this. This guide still works in 2025! :slight_smile: