Install Home Assistant OS with KVM on Ubuntu headless (CLI only)

I need to make one change to the above approach of editing /etc/ufw/before.rules… It works, but the IP address for the virtual machine must be set inside Home Assistant. When the router’s DHCP server assigned the IP address, I couldn’t access the virtual machine from the network. The reason seemed to be that the host’s ufw is enabled on startup and prevented the guest from getting an IP address, which rendered the edits to /etc/ufw/before.rules ineffective. The solution was to assign a static IP address inside Home Assistant.

I like this solution better than disabling netfilter on the bridge. I’m no networking expert, but that sounds like an invitation to all sorts of security problems.
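For anyone searching later: the before.rules edit being discussed is typically a set of accept rules for the bridge interface. A hedged sketch of what that can look like (the bridge name br0 and the exact rule placement are assumptions on my part, not taken from the guide; the lines go before the final COMMIT):

```
# /etc/ufw/before.rules (sketch; bridge name br0 is an assumption)
# Allow all traffic on and through the bridge so guests can reach DHCP etc.
-A ufw-before-input -i br0 -j ACCEPT
-A ufw-before-output -o br0 -j ACCEPT
-A ufw-before-forward -i br0 -j ACCEPT
-A ufw-before-forward -o br0 -j ACCEPT
```

As noted above, though, these rules only help if ufw actually lets the DHCP exchange through; assigning a static IP inside Home Assistant sidesteps the problem entirely.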

Damn, make sure you download the right ova image. I spent the whole night dealing with UEFI being unable to boot from the drive, and figured out this morning that I had downloaded the aarch64 image, as I am migrating from an rpi4…

@ENT108 May I ask for clarification… what filename did you try that had UEFI problems, and what filename did you use that actually worked?

You mentioned that you are migrating from rpi4, but what are you migrating to…is it x86?

Correct, to an i7, which is x86. The OVA image is the correct one for x86.

@Aephir
It was impossible for my Home Assistant KVM guest to get an IP address via DHCP with the bridge br0.

And the solution was to execute this command on the host (Ubuntu Server):
iptables -P FORWARD ACCEPT

From this topic

I also had to follow this guide to make it persist across every boot of my server:
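For anyone landing here: one common way to make `iptables -P FORWARD ACCEPT` persistent on Ubuntu Server (my assumption, not necessarily what the linked guide does) is the iptables-persistent package, which restores /etc/iptables/rules.v4 at boot. After setting the policy, `sudo netfilter-persistent save` writes something like:

```
# /etc/iptables/rules.v4 (abridged sketch) — note the FORWARD policy is ACCEPT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
```

The key line is the `:FORWARD ACCEPT` policy; any additional rules you have will also appear in this file.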


Hello,
just built HAOS on KVM (I am a newbie to KVM). Thanks, great work.

In the article a storage pool is created and, as far as I can see, not used.
My idea is to have storage for all config data that will persist when the image is updated.
Has anybody done this?
Any recipes?

Thanks in advance, Tom

Good to hear!

As I understood it, making a storage pool of type dir means you can host image files, like the *.qcow2 file that is the HAOS image (but I might be wrong).

The guide makes the storage pool at /var/lib/libvirt/images/hassos-vm, then extracts the haos_ova-6.6.qcow2 image in that location before virt-install. Again, as I understood it, this is needed in order to “run” the image from that location.

I’m not sure exactly what “config data” you mean (just what’s in the old config directory?). You typically don’t “update the image” as such, you just update from within HA UI with a click (both core and OS), and here all “config data” stays anyway.

If you really want to re-install the image following this guide (the only situation where you would need the “config data” outside of the image itself), the easiest way is just to do that, and then “restore from backup” (assuming you do routine backups from HA UI, which you really should!).

Hey Aephir,

thanks for clarification.

I can live very well with the idea of having a VM which I can upgrade from within.

Looking at /var/lib/libvirt/images/hassos-vm, I can only see the qcow2 image I downloaded.
So my understanding is that KVM uses this image as the disk space for the VM.

Which is OK for me.

My thought was that the (Docker-like) idea was to have
an image with HAOS plus separate storage for config stuff, with upgrades done by swapping the HAOS image.

Thanks.

Best, Tom

A VM is not really like Docker (HAOS actually runs Docker within it).

If “pure” Docker is your preferred approach, then it’s easy enough to do that without the VM.

HAOS basically just gives you less flexibility (slightly less control and fewer things you can do) in exchange for slightly less hands-on maintenance and less chance to screw something up if you don’t know what you’re doing. It’s up to you what works best for you, but I have found very few things (but a few, to be fair!) I wanted that the OS could not provide. And after having kids, I just have less time to stay on top of things, so the convenience is worth it for me these days.

If it’s because you’d want to automate updates (e.g., at night or when the home is empty and HA downtime is acceptable), you can do that from Home Assistant in a VM as easily as (probably more easily than) setting up a cron job for a Docker image update.

An example automation (can be built in UI, no need for yaml):
description: "Update HA Core at night"
mode: single
trigger:
  - platform: state
    entity_id:
      - update.home_assistant_core_update
    to: "on"
action:
  - wait_for_trigger:
      - platform: time
        at: "01:11:00"
  - service: update.install
    target:
      entity_id: update.home_assistant_core_update
    data:
      backup: false

This will update core at 01:11:00 if an update is available. Replace update.home_assistant_core_update with update.home_assistant_operating_system_update to update the OS and with update.home_assistant_supervisor_update to update the supervisor.

Minor update, but I believe “--network bridge=br0,model=virtio /” should have a \ at the end of the line instead of /


Good catch, and you are of course absolutely right. I have corrected it, thanks!

Thanks for an awesome guide. I found it yesterday evening after a full day of struggling with other ways to install HA, and now, this morning, after less than 3 hours, I have HA installed and running, though not yet “onboarded”.

My hardware is a Mac Mini (Mid 2011), that I had on the shelf, with a freshly installed Ubuntu Server 22.04.4 LTS. I used the latest HA KVM file to date, called haos_ova-12.1.qcow2.xz. This will be my second instance of HA, and it will be located in a summerhouse.

What I did on my Mac/Ubuntu server before starting on this guide:

  • Installed drivers for wifi, not configured yet as I run on ethernet currently - probably it will stay that way too. I guess this is due to the hw being a bit outdated…
  • Installed bluez/bluetooth and set it up.
  • Installed docker, according to this guide. It’s unclear to me whether it is needed, but I see some docker commands in step 2.1, so I guess so.

Later I plan to buy a Zigbee controller (most likely a Sonoff) and add it to this HA instance.

So far, the only observed annoyance is that it seems to take a bit longer (2-4 min) to boot the Ubuntu server due to a wait for “Network to be Configured”. This wait was not there before completing this guide.
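On the boot delay: that wait usually comes from systemd-networkd-wait-online blocking until every netplan-managed interface is up. A hedged sketch of the usual fix (the interface name enp1s0 is an assumption, use whatever your netplan file already names) is to mark the physical interface as optional:

```yaml
# Sketch of an existing /etc/netplan/*.yaml (interface name is an assumption)
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
      optional: true   # don't block boot waiting for this interface
```

Run `sudo netplan apply` afterwards; interfaces marked optional are no longer waited on at boot.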

If anyone needs it, I created an ansible playbook that automates this process:

- name: Setup Home Assistant (KVM)
  hosts: localhost
  become: true
  vars:
    vm_name: home-assistant
    vm_memory: 4096
    vm_vcpu: 4
    vm_disk_size: 32
    vm_network: br0
    vm_dir: /pool/vms/HomeAssistant
  tasks:
    - name: Install required packages
      ansible.builtin.apt:
        name:
          - qemu-kvm
          - libvirt-daemon-system
          - libvirt-clients
          - bridge-utils
          - virtinst
          - virt-manager
          - libosinfo-bin
          - libguestfs-tools
        state: present
        update_cache: true

    - name: Enable and start libvirtd
      ansible.builtin.service:
        name: libvirtd
        enabled: true
        state: started

    - name: Check if XML for VM network exists
      ansible.builtin.stat:
        path: /etc/libvirt/qemu/networks/{{ vm_network }}.xml
      register: vm_network_xml

    - name: Create XML for VM network
      ansible.builtin.copy:
        content: |
          <network>
            <name>{{ vm_network }}</name>
            <forward mode="bridge"/>
            <bridge name="{{ vm_network }}"/>
          </network>
        dest: /etc/libvirt/qemu/networks/{{ vm_network }}.xml
        owner: root
        group: root
        mode: '0644'
      when: not vm_network_xml.stat.exists

    - name: Check if VM network exists
      # the command module doesn't support pipes, so use shell here
      ansible.builtin.shell:
        cmd: "virsh net-list --all | grep {{ vm_network }}"
      register: vm_network_status
      changed_when: false
      ignore_errors: true

    - name: Create VM network
      ansible.builtin.command:
        cmd: "virsh net-define /etc/libvirt/qemu/networks/{{ vm_network }}.xml"
      changed_when: false
      when: "not 'active' in vm_network_status.stdout"

    - name: Start VM network
      ansible.builtin.command:
        cmd: "virsh net-start {{ vm_network }}"
      changed_when: false

    - name: Enable VM network
      ansible.builtin.command:
        cmd: "virsh net-autostart {{ vm_network }}"
      changed_when: false

    - name: Reload libvirtd
      ansible.builtin.service:
        name: libvirtd
        enabled: true
        state: reloaded

    - name: Ensure VM directory exists
      ansible.builtin.file:
        path: "{{ vm_dir }}"
        state: directory
        mode: '0755'

    - name: Check if Home Assistant image was already downloaded
      ansible.builtin.stat:
        path: "{{ vm_dir }}/home-assistant.qcow2.xz"
      register: home_assistant_image

    - name: Download Home Assistant image
      ansible.builtin.get_url:
        url: "https://github.com/home-assistant/operating-system/releases/download/12.1/haos_ova-12.1.qcow2.xz"
        dest: "{{ vm_dir }}/home-assistant.qcow2.xz"
        mode: '0755'
      when: not home_assistant_image.stat.exists

    - name: Check if Home Assistant image was already extracted
      ansible.builtin.stat:
        path: "{{ vm_dir }}/home-assistant.qcow2"
      register: home_assistant_image

    - name: Extract Home Assistant image
      ansible.builtin.command:
        cmd: "unxz {{ vm_dir }}/home-assistant.qcow2.xz"
        chdir: "{{ vm_dir }}"
        creates: "{{ vm_dir }}/home-assistant.qcow2"
      when: not home_assistant_image.stat.exists

    - name: Create Home Assistant VM
      ansible.builtin.command:
        cmd: >
          virt-install
          --name "{{ vm_name }}"
          --memory "{{ vm_memory }}"
          --vcpus "{{ vm_vcpu }}"
          --disk "{{ vm_dir }}/home-assistant.qcow2",bus=virtio,format=qcow2
          --network network="{{ vm_network }},model=virtio"
          --osinfo detect=on,require=off
          --boot uefi
          --cpu host
          --graphics none
          --wait 0
          --noautoconsole
          --import
        creates: "/etc/libvirt/qemu/{{ vm_name }}.xml"
      changed_when: false

I also created a playbook for creating the bridge:

- name: Setup netplan
  hosts: localhost
  become: true
  vars:
    interface_name: "enp1s0"
    bridge_name: "br0"
    homeserver_ipv4: "10.0.0.21"
    gateway_ipv4: "10.0.0.1"
  tasks:
    - name: Chmod netplan files
      # the command module doesn't expand globs, so use shell here
      ansible.builtin.shell:
        cmd: "chmod 400 /etc/netplan/*.yaml"
      changed_when: false

    - name: Configure network bridge with Netplan
      ansible.builtin.copy:
        dest: "/etc/netplan/00-homeserver.yaml"
        content: |
          network:
            version: 2
            renderer: networkd
            ethernets:
              {{ interface_name }}:
                dhcp4: false
            bridges:
              br0:
                interfaces: [{{ interface_name }}]
                addresses: [{{ homeserver_ipv4 }}/24]
                routes:
                  - to: default
                    via: {{ gateway_ipv4 }}
                nameservers:
                  addresses: [1.1.1.1, 8.8.8.8]
                parameters:
                  stp: true
                  forward-delay: 4
                dhcp4: true
                dhcp6: false
        mode: "0400"

    - name: Generate Netplan configuration
      ansible.builtin.command:
        cmd: "netplan generate"
      changed_when: false

    - name: Apply Netplan configuration
      ansible.builtin.command:
        cmd: "netplan apply"
      changed_when: false

    - name: Restart the network service
      ansible.builtin.systemd:
        name: systemd-networkd
        state: restarted

Hope it helps anyone :slight_smile:


I was also having the same issue while passing the host PC’s internal Bluetooth adapter through to the Home Assistant VM instance, and like you, I had to detach and reattach the Bluetooth adapter.

To avoid doing it manually every so often, I used systemd-run to run the commands every six hours. This seems to have solved the adapter becoming unavailable, for the time being. Granted, the solution is a cheeky one.

$ systemd-run --on-boot=300 --on-unit-active=21600 virsh detach-device hass --file /var/lib/libvirt/images/hassos-vm/btattach.xml --persistent

$ systemd-run --on-boot=302 --on-unit-active=21602 virsh attach-device hass --file /var/lib/libvirt/images/hassos-vm/btattach.xml --persistent

hass being the name of the VM instance and btattach.xml being the configuration file for the Bluetooth adapter that I was passing through to the VM in my case.
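For anyone wondering what a btattach.xml-style file can look like: a USB passthrough device definition for virsh attach-device/detach-device. This sketch is my assumption, not the poster’s actual file, and the vendor/product IDs below are placeholders you would replace with your adapter’s IDs from lsusb:

```xml
<!-- Sketch of a USB Bluetooth passthrough definition for virsh attach-device.
     Replace the vendor/product IDs with the ones lsusb reports for your adapter. -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x8087'/>
    <product id='0x0aaa'/>
  </source>
</hostdev>
```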

Thank you for this brilliant post. This worked quite well for me on my Arch server, with some minor modifications.

One strong suggestion is to not run HAOS as root. While the VM should contain the guest in theory, bugs do sometimes exist, and there is no strong reason for the host to need root access. As a point of comparison, I installed several other services on my server (as bare metal), and on installation, the package manager would automatically create a new user to run the programs. These include nextcloud, tt-rss, airsonic, navidrome, nagios, prosody. For HassOS, I created a new user (hass), added it to the libvirt and kvm groups, installed the image to /home/hass/hassos-vm, and ran everything like sudo /usr/bin/runuser -u hass -- virsh start hass. This worked perfectly fine.

FWIW I also found I didn’t need to turn off the firewall in Docker. I’m not really sure why this would be necessary, since HAOS/KVM doesn’t run in a Docker instance.

Also, I connect to the network with systemd-networkd, so I used the instructions on the Arch wiki to create the bridge. This also worked well.

In response to this question:

NOTE: I assigned a DHCP of 192.168.1.115, the same IP I set for the br0. I’m not certain that it is required to do both. If anyone tries this, could you please report back so I can update?

I used different IPs, and this worked. The br0 IP will connect to the host. You can also set a different static IP for the HAOS guest. You probably want two different IPs here anyway.

Thanks again for the great guide.

Not needed; the steps that mention Docker in the guide are only needed in case you do have Docker installed, as I and many others do (the main reason to run a VM on Ubuntu is that the server can then also be used for other things).

Thanks for the guide, especially the docker iptables workaround which definitely would’ve taken me ages to figure out :sweat_smile:

Here’s the command I ran on my Ubuntu 20.04 machine:

$ sudo virt-install --name haos \
--os-variant debian10 \
--vcpus 4 \
--memory 4096 \
--network bridge=br0,model=virtio \
--graphics none \
--disk path=/<my_storage_path>/vms/haos.qcow2,format=qcow2,bus=virtio \
--boot uefi

My versions:

$ virsh --version
6.0.0
$ virt-install --version
2.2.1
$ qemu-system-x86_64 --version
QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.28)

The VM started nicely with console output. Pro tip regarding “Escape character is ^]” for European layouts: on my Swedish Apple keyboard this corresponds to ctrl+å.

I started off trying to make a snapshot of the vm but got this error:

sudo virsh snapshot-create-as haos snapshot_1 --description "Initial snapshot"
error: Operation not supported: internal snapshots of a VM with pflash based firmware are not supported

After a quick search it seems this is due to the UEFI firmware…? Okay… but how do you guys take snapshots?

Good to hear it was useful.

Actually, I never set up snapshots. I’ve just set up automatic backup from within HA, and hooked that up to my Nextcloud (using this addon). I also tried restoring a newly installed VM from that backup, and it was very painless.

So sorry, no help other than that I’d set up automatic backups from within HA if you can’t do VM snapshots.
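For anyone else hitting the pflash error above: internal snapshots are unsupported with UEFI firmware, but external disk-only snapshots generally work. A hedged sketch, assuming the VM name haos from the earlier post (not something I have verified on this exact setup):

```shell
# External (disk-only) snapshot; works with UEFI/pflash VMs.
# Writes subsequent changes to a new overlay file, with the original
# qcow2 becoming its read-only backing file.
sudo virsh snapshot-create-as haos snap1 --disk-only --atomic
sudo virsh snapshot-list haos
```

Note that reverting or cleaning up external snapshots is more involved than with internal ones (it typically involves virsh blockcommit or editing the disk chain), so HA’s built-in backups may still be the simpler option.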

Newb here; anyone know if this bridge instruction works with Ubuntu 24.04? KVM is installed with virt-manager, and things look a little different in /etc/netplan, but similar. Mine’s eno1, and I can do the HAOS install with the GUI once the bridge is set up. So many guides out there. I already failed using NAT with HAOS, not knowing any better.

I haven’t tried 24.04, so no real insight here. But I don’t know of any specific reason why it shouldn’t work. If you try, please let us know how it goes.
