If this is reproducible for you on every update, I’d suggest manually downgrading and upgrading again to see whether it recurs. You can downgrade using ha os update --version 11.0. It would be interesting to monitor the console while updating, to see exactly where it gets stuck. The serial console is not the primary console, so you won’t see the systemd startup procedure there. You can make the serial console the primary one by changing console=ttyS0 console=tty1 to console=tty1 console=ttyS0 (reversing the order) in /mnt/boot/cmdline.txt. With that you should see the systemd boot messages on the serial console as well.
Before doing all this, I’d suggest making a copy of the image, just to be safe.
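If you want to script the cmdline.txt change instead of editing it by hand, the swap is a one-line sed. A minimal sketch, shown here on a sample string (the root= value is made up for illustration; on the real system you would run sed -i against /mnt/boot/cmdline.txt after making a backup copy):

```shell
# Sample kernel command line; the root= part is a made-up placeholder.
orig='console=ttyS0 console=tty1 root=/dev/mmcblk0p2'

# Reverse the console= order so the serial console becomes primary.
swapped=$(printf '%s' "$orig" | sed 's/console=ttyS0 console=tty1/console=tty1 console=ttyS0/')

printf '%s\n' "$swapped"   # console=tty1 console=ttyS0 root=/dev/mmcblk0p2
```

On the real system that would be something like `cp /mnt/boot/cmdline.txt /mnt/boot/cmdline.txt.bak` followed by `sed -i 's/console=ttyS0 console=tty1/console=tty1 console=ttyS0/' /mnt/boot/cmdline.txt`.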
Downgraded and updated again; it got stuck at reboot: Restarting system after the OS update. Here are my actions:
1. Backed up my HA and copied the image.
2. Connected to the instance (virsh console hass).
3. Edited /mnt/boot/cmdline.txt to console=tty1 console=ttyS0.
4. Successfully executed ha os update --version 11.0; after that the instance rebooted itself successfully.
5. Waited for the OS to boot, then checked the OS version (11.0). Then executed ha os update and waited for the shutdown. As far as I can tell, the instance got stuck at reboot: Restarting system after all the “Unmounted” and “Stopped” messages:
...
[ OK ] Reached target Unmount All Filesystems.
[ OK ] Stopped File System Check …dev/disk/by-label/hassos-data.
[ OK ] Removed slice Slice /system/systemd-fsck.
[ OK ] Stopped target Preparation for Local File Systems.
[ OK ] Stopped Remount Root and Kernel File Systems.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Reached target System Shutdown.
[ OK ] Reached target Late Shutdown Services.
[ OK ] Finished System Reboot.
[ OK ] Reached target System Reboot.
[ 528.507516] reboot: Restarting system
Full log here
6. Waited some more time to make sure it was stuck (around 30 minutes as of writing this post).
7. virsh destroy hass and virsh start hass; after that it booted successfully with OS 11.1. Full log of booting after the update here
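For reference, the forced-restart workaround from step 7 as a snippet (domain name hass as used above; virsh destroy is a hard power-off of the guest, which should be safe at this point since the log shows all filesystems were already unmounted):

```shell
# Hard power-off the stuck guest, then start it again.
virsh destroy hass
virsh start hass
```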
I have the same issue running KVM. OS 10.5, core 2023.11.3, Kubuntu 22.04.
I didn’t have this issue before on the exact same version, on the exact same machine. It started after doing a clean system install and upgrade from Ubuntu 20.04 to 22.04. The same image was migrated, using an identical .xml for the KVM machine setup.
Did you ever get this fixed? I am running in KVM as well, and have had this issue since I started, around 4-5 years ago.
Running virsh destroy and then virsh start is the only option.
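If you have to do this after every update, one option is to automate the reset. This is only a hypothetical sketch: the domain name hass, the guest IP, and the idea of using ping to detect the stuck state are all assumptions you would need to adapt to your setup (e.g. run it from cron only around update time):

```shell
# Assumed guest IP; replace with your Home Assistant VM's address.
HOST=192.168.1.50

# If the guest has not answered pings for a while (e.g. stuck at
# "reboot: Restarting system"), force-reset the domain.
if ! ping -c 3 -W 2 "$HOST" >/dev/null 2>&1; then
  virsh destroy hass
  virsh start hass
fi
```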