Migrated my Supervised install (on Debian 12) to HAOS - performance seems significantly worse - am I doing something wrong?

Hey folks,

So due to the recent Supervised install deprecation, I finally decided to bite the bullet and move from my Proxmox VM running Debian 12 with the Supervised install to an HAOS VM.

I created this VM using the Proxmox community-scripts HAOS script. It took sooo long to initially spin up and then also to restore my backup from the Nabu Casa cloud, but fortunately that all went well.
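For anyone curious, the invocation was something like the one-liner below. This is from memory, so double-check the current script path on the community-scripts site before running it.

```bash
# Run on the Proxmox host shell; walks through creating the HAOS VM interactively.
# Script path is from memory and may have moved; verify before running.
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/vm/haos-vm.sh)"
```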

However, any sort of navigation and actions I take in the UI are taking much longer than before. Both VMs have the same resources (2 vCPUs and 4 GB RAM) and the same addons, etc. running. Starting the containers at system start used to take ~30 s and now takes more like 3-4 min :thinking:
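For reference, here's how I compared the two configs side by side from the Proxmox host shell (VM IDs are placeholders):

```bash
# On the Proxmox host: print each VM's config (cores, memory, disk options, ...)
qm config 100   # old Debian/Supervised VM (placeholder ID)
qm config 101   # new HAOS VM (placeholder ID)
```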

Is there anything else that should be noted with HAOS installs? Anything to tweak? Is this a common experience, and should I just bump the specs a bit? Curious what others have experienced.

If you run the two VMs at the same time, there is the additional load of the new VM.

Do you run with fixed-size disks?

I'd have a look at your integrations. I run a VM with the same resources as yours and it certainly takes less than a minute to restart. Might be that one of the integrations is eating memory.

The original VM is shut down, and the host has more than enough headroom, so to speak. I don't think it's due to this :thinking:

I am running with a fixed disk size and fixed memory as well (no ballooning).
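This is easy to confirm from the host; a quick sketch (VM ID is a placeholder, and the expected output below matches my setup):

```bash
# On the Proxmox host: confirm a fixed memory allocation and no ballooning
grep -E '^(memory|balloon):' /etc/pve/qemu-server/101.conf
# memory: 4096    <- fixed allocation in MiB
# balloon: 0      <- ballooning disabled
```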

I bumped it to 6 GB and it seems to be a bit better, but it still feels much less stable/reliable than the Supervised install. Things time out much more often than I'd like, stuff like that.
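(For anyone following along, bumping the memory is a one-liner on the host; VM ID is a placeholder.)

```bash
# Give the VM 6 GiB of RAM; typically takes effect after a VM restart
qm set 101 --memory 6144
```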

I'll get one of those htop-like addons and double-check the system usage of the integrations. HAOS being a read-only fs has brought its own set of frustrations with it :sweat_smile:

Okay, so after going through another few hours of automations, I'm almost at my wits' end with this HAOS installation haha. For example, an mmWave motion sensor of mine triggers two Matter-over-Thread E27 light bulbs in the hallway. One of the simplest automations you can imagine, and it has run perfectly on the Supervised install for 9+ months now. This evening, though, I'd say the automation only runs successfully about half of the time I walk by, and then it often only triggers one of the lights. Also, the historically rock-solid (for me) IKEA DIRIGERA integration doesn't react to half of the button presses.

Anyway, enough complaining. I installed the Glances addon and immediately discovered my problem: iowait was through the roof.
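(Equivalent check from the Proxmox host, for anyone who wants to confirm the same thing without an addon:)

```bash
# On the Proxmox host: sample CPU stats once per second, five times.
# The 'wa' column is the % of time the CPU spends waiting on disk I/O;
# values persistently above a few percent point at a storage bottleneck.
vmstat 1 5
```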


Yup, so the Proxmox community script enabled write-through caching and SSD emulation on the SCSI disk for this VM. I didn't have either of those enabled previously. I disabled them and disk performance is back: iowait is down to basically 0 and HA is feeling so much better!
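In case it helps anyone else, both settings can be flipped back from the host shell by re-specifying the disk line. VM ID, storage, and volume name below are placeholders, so copy the real values from qm config first.

```bash
# Check the current disk line first (will show cache=writethrough,ssd=1 if enabled)
qm config 101 | grep scsi0

# Re-set the disk with caching off (cache=none is the Proxmox default) and SSD emulation off
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=none,ssd=0
```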


Quick question… for the caching, there are generally several options to choose from. Did you choose 'none' or one of the others (writeback, directsync, unsafe)?

Would it be fair to say that the majority of people who used the community script are operating with these default settings? If so, why are these settings not causing many more people to observe the performance degradation you experienced?

NOTE

I am contemplating a move from Home Assistant Supervisor to HA OS on Proxmox. That’s why I am interested in what you encountered because, although my research hasn’t been exhaustive, I don’t recall seeing other reports of a performance problem caused by these settings. Curious to know what’s different about this particular situation.

Good question. I'm running HAOS on Proxmox installed from the community script, and both write-through caching and SSD emulation are enabled, but I don't have the experience mentioned. Home Assistant performs well, and installing Glances shows 0-0.1% iowait. So it's not a universal problem or fix.
