Install Home Assistant OS with KVM on Ubuntu headless (CLI only)

I’m a newb, playing in late life with Linux, Proxmox etc. I had Proxmox doing its thing, but wanted to try Ubuntu desktop with KVM and virt-manager, i.e. a GUI.

Difficult, frustrating days of reading, and finally, prior to your answer, I found another guide. I didn’t have a clue what I was doing, but dived in head first after some hesitation. (Didn’t want to have to reinstall Ubuntu “again”, lol.)

Loaded the HA VM in virt-manager, and it was assigned a dynamic IP from my router; I can access it outside the host machine, which was my objective. How I got there, being a newb and network challenged, I don’t know.:person_shrugging::joy: I bookmarked this guide just in case. Now, static IPs.:thinking:

So my machine lived for about 10 days.

It started to become really slow; load went up on the VM and on my host machine. I checked disk usage, memory usage, docker stats, logs etc., but couldn’t figure out what was causing it.
It then had trouble starting, ending up in maintenance mode… I checked the partitions with fsck, all good… successfully booted it again, but it was still slow and under heavy load (~5 on 4 CPUs).
Turned it off and mounted the qcow2 with qemu-nbd to run additional fscks, to no avail.
At this point it wouldn’t boot at all. virsh said the domain had started, but there was no output on the console, no error messages, no logs from qemu-kvm.service or “machine-qemu\x2d14\x2dhaos.scope”, whatever that means.

Gave up, mounted the nbd disk and fished out my config. Creating new vm now.
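For anyone who has to do the same rescue, the qemu-nbd mount-and-fsck dance goes roughly like this (a sketch; the device node and partition number are assumptions, check the fdisk output for your own image):

```shell
# Load the NBD kernel module and attach the qcow2 (run as root, VM must be off)
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/haos.qcow2

# Inspect the partition table, then fsck or mount the partition you need
fdisk -l /dev/nbd0
fsck -f /dev/nbd0p8            # HAOS usually keeps its data on the last partition
mount /dev/nbd0p8 /mnt         # e.g. to fish config files out

# Clean up when done
umount /mnt
qemu-nbd --disconnect /dev/nbd0
```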

If only I had the possibility to take a f-ing snapshot!! :rage:
Guess I’ll have to shut HA down once every night to back up the qcow2…
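A nightly cold backup like that could be scripted along these lines (a sketch only; the VM name and paths are assumptions for illustration, and it does mean downtime while the copy runs):

```shell
#!/bin/sh
# Hypothetical names and paths -- adjust to your setup
VM=haos
IMG=/var/lib/libvirt/images/haos.qcow2
DEST=/backup/haos-$(date +%F).qcow2

# Ask the guest to power off cleanly, then wait until it really is off
virsh shutdown "$VM"
while [ "$(virsh domstate "$VM")" != "shut off" ]; do
  sleep 5
done

# Copy the (now quiescent) disk image, preserving sparseness
cp --sparse=always "$IMG" "$DEST"

# Bring the VM back up
virsh start "$VM"
```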

Anyone else had such luck?
If nothing else, let this be a warning to back your stuff up and have a quick restore-path… :roll_eyes:

Update:
Found this log whilst trying to clean up after the old VM:
/var/log/libvirt/qemu/haos.log:

2024-05-01T06:58:15.052247Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-register [bit 8]
2024-05-01T06:58:15.052255Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-vid [bit 9]
2024-05-01T06:58:15.052258Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48DH).vmx-posted-intr [bit 7]
2024-05-01T06:58:15.052261Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
2024-05-01T06:58:15.052264Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13]
2024-05-01T06:58:15.052760Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-register [bit 8]
2024-05-01T06:58:15.052767Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48BH).vmx-apicv-vid [bit 9]
2024-05-01T06:58:15.052770Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48DH).vmx-posted-intr [bit 7]
2024-05-01T06:58:15.052772Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
2024-05-01T06:58:15.052775Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13]
KVM internal error. Suberror: 3
extra data[0]: 80000b0e
extra data[1]: 31
extra data[2]: 182
extra data[3]: 800000
EAX=c0000033 EBX=756e6547 ECX=c0000080 EDX=00000000
ESI=fffce924 EDI=00005042 EBP=fffcc000 ESP=00000000
EIP=fffffe69 EFL=00010002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
CS =0010 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
FS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
GS =0008 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS64-busy
GDT=     00000000ffffff80 0000001f
IDT=     0000000000000000 0000ffff
CR0=c0000033 CR2=00000000fffffe69 CR3=0000000000800000 CR4=00000660
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000500
Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??

I guess that looks bad :person_shrugging:

I’m thinking of migrating from a Docker to a VM installation.
I tried virt-install and it all works well.
One doubt: if I have to restart the host machine (for example after an update), do I first have to stop the VM?

Short answer: I don’t know.

Medium (quite unsatisfying) answer: I just reboot; it’s worked fine for years, even when there have been power cuts.

Longer answer:

I think you can configure VMs for automatic shutdown, but you might also be able to leverage libvirt-guests which, as I understand it, will by default suspend a VM when the host shuts down and restore it to its pre-shutdown state.
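The behaviour is configurable in /etc/default/libvirt-guests (or /etc/sysconfig/libvirt-guests on some distros). These are the relevant options, shown with example values:

```shell
# /etc/default/libvirt-guests
ON_BOOT=start          # start/restore guests when the host boots
ON_SHUTDOWN=suspend    # "suspend" saves guest state; "shutdown" powers guests off cleanly
SHUTDOWN_TIMEOUT=300   # seconds to wait for a guest before giving up
```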

I can see that it is running on my machine (output of service libvirt-guests status).

● libvirt-guests.service - Suspend/Resume Running libvirt Guests
     Loaded: loaded (/lib/systemd/system/libvirt-guests.service; enabled; vendor preset: enabled)
     Active: active (exited) since Wed 2024-04-10 10:08:18 UTC; 1 month 2 days ago
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 1636 (code=exited, status=0/SUCCESS)
        CPU: 16ms

Notice: journal has been rotated since unit was started, output may be incomplete.

Since I haven’t made any changes, I think it automatically suspends and restores VMs when I shut down/reboot host.

But again, I’ve had several power cuts to the server over the years, and had no issues with HA when I booted it up again, so I haven’t really worried that much about it lately. I just make sure I always have backups for peace of mind.

When I upgrade the host and in turn need to reboot it, I first go into the HA VM console and enter ha host shutdown. Although I use virt-manager in my setup to gain access to the console, you should be able to use virsh console NAME-OF-VM, log in as root with no password, and enter ha host shutdown. It takes a few minutes, and it may complain along the way, but eventually HA stops running, leaving the VM in the off state. If virsh doesn’t work, you could use the HA UI -> System -> Hardware -> Power Button -> Advanced Options -> Shutdown the system, and give it a few minutes to complete.
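In command form, the routine is roughly this (assuming the VM is called haos; note virsh console is interactive, and you exit it with Ctrl+]):

```shell
# Attach to the guest console, log in as root (no password) and run:
#   ha host shutdown
virsh console haos

# Back on the host, confirm the domain has reached "shut off"
virsh domstate haos

# Then reboot the host as usual
sudo reboot
```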

I used to do a straight VM power-down, and then started using the guest agent with virt-manager, which fed a signal to HAOS to shut down properly. Either way, on startup HA would sometimes complain about its database, saying HA may not have been shut down properly, but it worked nevertheless. However, ha host shutdown seems to do a clean shutdown without affecting the database.

Thanks.

I’ve been using the VM for a few days and it all seems OK.

I didn’t create the “pool” because I didn’t understand what it was for :slight_smile:

I ran virt-install with these parameters:

virt-install --name ha --description "Home Assistant OS" --os-variant=generic --ram=4096 --vcpus=2,maxvcpus=4 --disk /var/lib/libvirt/images/ha.qcow2,format=qcow2,bus=virtio --import --graphics none --boot uefi --network bridge=br0,model=virtio,mac=52:54:00:11:11:11 --noautoconsole --cpu host-passthrough --virt-type kvm

I ran virt-install and virsh autostart as root: running the VM as a user caused me (Arch Linux) some issues with autostart, because at boot the daemon runs as root and couldn’t find the user-session VM…
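In other words, run everything against the system libvirt instance; something like this (the VM name matches the virt-install above):

```shell
# Mark the VM for autostart on the system instance (as root)
virsh --connect qemu:///system autostart ha

# Verify the flag took effect
virsh --connect qemu:///system dominfo ha | grep -i autostart
```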

It seems all ok!

The next step could be to run a MariaDB container on the host and move the HA VM’s database there… I don’t know if it would be useful.

Thank you for this!

Can confirm everything works on latest Ubuntu 24.04.


I’ve been trying to get this to work on Ubuntu 24.04 for hours but the networking won’t work.

UPDATE: I got it working; here are the steps that solved my issue:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp1s0]
      dhcp4: true                        # Enable DHCP
      dhcp6: false                       # Disable IPv6 DHCP

Changed the MACAddressPolicy in /usr/lib/systemd/network/99-default.link from persistent to none, as described here.

Added the bridge to the Docker daemon.json. Not sure if this is needed, but it’s working now.
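For reference, the daemon.json change was along these lines (a sketch; as said, I’m not sure the bridge entry is actually required):

```json
{
  "bridge": "br0"
}
```

Restart Docker afterwards (sudo systemctl restart docker) for it to take effect.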


Hi, thank you for this decent post with all the information, which unfortunately does not seem to work in my case.

Situation (and objective):

  • run numerous services in docker, e.g. pihole, rsyslog, nginx, etc.
  • run Home Assistant (and other services) in KVM
  • host OS: Ubuntu 24.04

My problem:
(a - “original” tutorial) setting firewall rules (iptables -P FORWARD ACCEPT, iptables -A DOCKER-USER -i br0 -o br0 -j ACCEPT, etc.)
=> The KVM guest does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest, not even when setting a static IP inside it.
(b - Install Home Assistant OS with KVM on Ubuntu headless (CLI only) - #89 by brady) setting “bridge”: “br0” or “iptables”: false in daemon.json
=> Docker services do not start up properly; at least half of them need to be started multiple times to come up.
=> The KVM guest does not get a working network, with the same symptoms as above.
(c - Install Home Assistant OS with KVM on Ubuntu headless (CLI only) - #14 by oldrobot) “disabling” netfilter
=> No Docker services come up, not even the ones without networking!
=> The KVM guest does not get a working network, with the same symptoms as above.
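When the symptoms look like this (DHCP reply visible at the router, nothing inside the guest), one thing worth checking is whether iptables is filtering bridged frames at all; the br_netfilter sysctls control that. A debugging sketch, not a recommendation to disable filtering permanently:

```shell
# Is the br_netfilter module loaded? (Docker usually loads it)
lsmod | grep br_netfilter

# If these report 1, bridged traffic is passed through iptables
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables

# Temporarily take iptables out of the bridged path, for testing only
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
```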

My config:

/etc/netplan/01-interfaces.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.76.4/24]
      routes:
        - to: default
          via: 192.168.76.254
    eno2:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      dhcp4: no
      dhcp6: no
      interfaces: [ eno2 ]

/etc/libvirt/qemu/networks/vm.xml

<network>
  <name>vm</name>
  <uuid>450538c6-d629-4c0d-bd8f-572de20163e8</uuid>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
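For completeness, that network definition gets loaded into libvirt with:

```shell
# Define, start and autostart the bridged network
virsh net-define /etc/libvirt/qemu/networks/vm.xml
virsh net-start vm
virsh net-autostart vm

# Confirm it is active
virsh net-list --all
```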

I am quite frustrated and close to going back to VirtualBox, which is essentially what I wanted to get rid of.

I’m wondering if it might be better if br0 had an IP address on the same subnet as your HAOS VM.

I’ll have to say I don’t see how br0 could be issuing DHCP requests if the netplan has dhcp set to no (unless the backend is somehow not working properly).

Maybe clarify what you mean… do you mean setting a static IP on br0?