I’m a newb, playing in late life with Linux, Proxmox, etc. I had Proxmox doing its thing, but wanted to try Ubuntu desktop with KVM and virt-manager (a GUI).
Difficult, frustrating days of reading, and finally, prior to your answer, I found another guide. I didn’t have a clue what I was doing but dived in head first after some hesitation. (Didn’t want to have to reinstall Ubuntu “again”, lol.)
Loaded the HA VM in virt-manager and it was assigned a dynamic IP by my router, and I can access it from outside the host machine, which was my objective. How I got there, being a newb and network-challenged, I don’t know. I bookmarked this guide just in case. Now, static IPs.
It started to become really slow; load went up on the VM and on my host machine. I checked disk usage, memory usage, docker stats, logs, etc., but couldn’t figure out what was causing it.
It then had trouble starting, ending up in maintenance mode… I checked the partitions with fsck, all good… successfully booted it again, but it was still slow and under heavy load (~5 on 4 CPUs).
Turned it off and mounted the qcow2 with qemu-nbd to run additional fscks, to no avail.
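For anyone wanting to try the same offline check, this is roughly the sequence; the image path and partition number are assumptions (HAOS images have several partitions, so list them first):

```shell
# Attach the qcow2 to an nbd device so normal tools can see it
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/haos.qcow2

fdisk -l /dev/nbd0        # list the partitions (p1, p2, ...)
fsck -f /dev/nbd0p8       # check one of them; the number varies per image

# Always disconnect before booting the VM again
qemu-nbd --disconnect /dev/nbd0
```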
At this point it wouldn’t boot at all. virsh said the domain had started, but there was no output to the console, no error messages, and no logs from qemu-kvm.service or “machine-qemu\x2d14\x2dhaos.scope”, whatever that means.
Gave up, mounted the nbd disk and fished out my config. Creating new vm now.
If only I had the possibility to take a f-ing snapshot!!
Guess I’ll have to shut HA down once every night to back up the qcow2…
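That nightly shut-down-and-copy can be scripted; here’s a minimal sketch to drop in cron, assuming the VM name is “haos” and these paths (adjust both for your setup):

```shell
#!/bin/sh
# Cold backup sketch: shut the VM down, copy the qcow2, start it again.
# VM name and paths are assumptions.
VM=haos
DISK=/var/lib/libvirt/images/haos.qcow2
DEST=/backup/${VM}-$(date +%F).qcow2

virsh shutdown "$VM"
# wait for a clean power-off before touching the disk image
while virsh domstate "$VM" 2>/dev/null | grep -q running; do
    sleep 5
done
cp --sparse=always "$DISK" "$DEST"
virsh start "$VM"
```

`--sparse=always` keeps the copy from ballooning to the full virtual disk size.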
Anyone else had such luck?
If nothing else, let this be a warning to back your stuff up and have a quick restore-path…
Update:
Found this log whilst trying to clean up after the old VM: /var/log/libvirt/qemu/haos.log:
I’m thinking of migrating from a Docker to a VM installation.
I tried virt-install and it all works well.
A doubt: if I have to restart the host machine (for example after an update), do I first have to stop the VM?
Medium (quite unsatisfying) answer: I just reboot; it’s worked fine for years, even through power cuts.
Longer answer:
I think you can configure VMs for automatic shutdown, but you might also be able to leverage libvirt-guests which, as I understood it, will by default suspend a VM when the host shuts down and restore it to its pre-shutdown state.
I can see that it is running on my machine (output of service libvirt-guests status):
● libvirt-guests.service - Suspend/Resume Running libvirt Guests
Loaded: loaded (/lib/systemd/system/libvirt-guests.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2024-04-10 10:08:18 UTC; 1 month 2 days ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 1636 (code=exited, status=0/SUCCESS)
CPU: 16ms
Notice: journal has been rotated since unit was started, output may be incomplete.
Since I haven’t made any changes, I think it automatically suspends and restores VMs when I shut down/reboot the host.
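If you want that policy to be explicit rather than relying on defaults, it lives in /etc/default/libvirt-guests on Debian/Ubuntu; a sketch (the values shown are, as far as I know, the shipped defaults, so verify against your own file):

```shell
# /etc/default/libvirt-guests -- suspend/resume policy for libvirt-guests.service
ON_BOOT=start          # start (restore) the guests the service saved
ON_SHUTDOWN=suspend    # save guest state on host shutdown (virsh managedsave)
SHUTDOWN_TIMEOUT=300   # seconds to wait for guests before giving up
```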
But again, I’ve had several power cuts to the server over the years, and had no issues with HA when I booted it up again, so I haven’t really worried that much about it lately. I just make sure I always have backups for peace of mind.
When I upgrade the host and in turn need to reboot the host, I first go into the HA VM console and enter ha host shutdown. Although I use virt-manager in my setup to gain access to the console, you should be able to use virsh console NAME-OF-VM, login as root with no password and enter ha host shutdown. It takes a few minutes, and it may complain along the way, but eventually HA stops running leaving the VM in the off state. If virsh doesn’t work, then you could use the HA UI->System->Hardware->Power Button->Advanced Options->Shutdown the system and give it a few minutes to complete.
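The sequence above can be sketched as follows (“haos” is an assumed VM name):

```shell
# Pre-reboot sequence: ask HAOS to shut itself down cleanly first
virsh console haos     # log in as root (no password) and run: ha host shutdown

# back on the host, wait until the domain reports "shut off"
until virsh domstate haos | grep -q 'shut off'; do sleep 10; done
sudo reboot
```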
I used to do a straight VM power-down, and then I started using the guest agent with virt-manager, which fed a signal to HAOS to shut down properly. Either way, HA on startup would sometimes complain about its database, saying HA may not have been shut down properly, but it worked nevertheless. However, ha host shutdown seems to do a clean shutdown without affecting the database.
I ran virt-install and virsh autostart as root: running the VM as a user caused me (on Arch Linux) some issues with autostart, because at boot the daemon runs as root and couldn’t find the user’s VM…
It seems all ok!
Next step could be to run a MariaDB container on the host and move the HA VM’s DB there… I don’t know if it would be useful.
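If you do try it, a rough sketch of the idea; container name, password, and the bind address are placeholders, not a tested setup:

```shell
# MariaDB in a container on the host, reachable from the HA VM over the LAN.
# All names/credentials below are placeholders.
docker run -d --name mariadb \
  -e MARIADB_DATABASE=homeassistant \
  -e MARIADB_USER=homeassistant \
  -e MARIADB_PASSWORD=CHANGE_ME \
  -e MARIADB_ROOT_PASSWORD=CHANGE_ME \
  -p 192.168.1.10:3306:3306 \
  mariadb:latest

# then, in the VM's configuration.yaml, point the recorder at it:
#   recorder:
#     db_url: mysql://homeassistant:CHANGE_ME@192.168.1.10:3306/homeassistant?charset=utf8mb4
```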
Hi, thank you for this decent post with all the information - which unfortunately does not seem to work for my case.
Situation (and objective):
run numerous services in docker, e.g. pihole, rsyslog, nginx, etc.
run Home Assistant (and other services) in KVM
host OS: Ubuntu 24.04
My problem:
(a - “original” tutorial) setting firewall rules (iptables -P FORWARD ACCEPT, iptables -A DOCKER-USER -i br0 -o br0 -j ACCEPT, etc.)
=> The KVM guest does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest, not even when setting a static IP in it.
(b - Install Home Assistant OS with KVM on Ubuntu headless (CLI only) - #89 by brady) setting “bridge”: “br0” or “iptables”: false in daemon.json
=> Docker services do not start up properly; at least half of them need to be started multiple times to come up.
=> The KVM guest does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest, not even when setting a static IP in it.
(c - Install Home Assistant OS with KVM on Ubuntu headless (CLI only) - #14 by oldrobot) “disabling” netfilter
=> No Docker services come up, not even the ones without networking!
=> The KVM guest does not get a working network. The MAC of the bridge shows up at the router, which issues a DHCP response, but there is no networking in the guest, not even when setting a static IP in it.
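For reference, the firewall-side attempts (a) and (c) above amount to something like this (DOCKER-USER is Docker’s own chain; br0 matches the netplan config below):

```shell
# (a) let bridged traffic through Docker's FORWARD drop policy --
#     either globally, or narrowly for the bridge only:
iptables -P FORWARD ACCEPT
iptables -I DOCKER-USER -i br0 -o br0 -j ACCEPT

# (c) stop netfilter from seeing bridged frames at all
sysctl -w net.bridge.bridge-nf-call-iptables=0
```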
My config:
/etc/netplan/01-interfaces.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.76.4/24]
      routes:
        - to: default
          via: 192.168.76.254
    eno2:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      dhcp4: no
      dhcp6: no
      interfaces: [ eno2 ]
I’m wondering if it might be better if br0 had an IP address on the same subnet as your HAOS VM.
I’ll have to say I don’t see how br0 would be issuing DHCP requests if the netplan has dhcp set to no (unless the backend is somehow not working properly).
Maybe clarify what you mean… do you mean setting a static IP on br0?
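To make the suggestion concrete, a variant of the bridges section from the netplan above, with br0 carrying its own address (the address is a placeholder for any free one on your LAN):

```yaml
# hypothetical tweak: give br0 a static address on the same subnet as the VM
bridges:
  br0:
    interfaces: [ eno2 ]
    addresses: [192.168.76.5/24]
    dhcp4: no
    dhcp6: no
```

Apply with netplan try first, so a broken config rolls back on its own.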