Many of us run HAOS in a VM on top of something like an Unraid server, which may use an SSD as the root drive for HA. A major concern is that SSDs may not last very long when they receive constant writes.
Although it's possible to reduce HA disk writes, for example by following the excellent guide "Steps to reduce Write Cycles and extend SD/SSD life expectancy", it isn't easy for the lay person. Some of the ideas on that page, such as writing logs to a ramdisk via log2ram, are not possible with HAOS due to its more locked-down nature.
I'm requesting a feature that offers one or more finer-grained options to reduce disk writes. Each would of course come with a caveat, but they would nonetheless reduce the wear and tear on an SSD. I personally would tick most of the boxes, and I'm sure others would too. For example, I wouldn't mind losing the permanent log writes to disk and would be more than happy to have, say, x GB of logs written to a ramdisk, acting as a rolling cache up to a certain memory limit (which would of course be lost after an HA reboot).
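In the meantime, the closest thing available on a stock install is to cut down how much gets logged in the first place. A minimal `configuration.yaml` sketch using the built-in logger integration (the levels and the component name are only illustrative, not recommendations):

```yaml
# configuration.yaml - write far fewer lines to home-assistant.log
logger:
  default: warning                                 # only warnings and errors by default
  logs:
    homeassistant.components.websocket_api: error  # example: silence one chatty component further
```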
Same for me… I use an SSD in my Synology NAS to store recorder data (MariaDB), and after 11 months the SSD has already reached 3% of its life expectancy. MariaDB is only used for HA. While this disk is also used by other apps installed on the NAS, the performance monitor shows that 99.5% of the data written belongs to MariaDB, and therefore to HA.
On the other hand, HA itself is installed as a VM on an ESXi host, also on an SSD drive, and there are no issues with that installation whatsoever, even though its datastore is shared with 6 other VMs.
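Most of that MariaDB write volume will be the recorder committing every state change straight away and keeping everything it records. A sketch of recorder settings that trade history detail for fewer writes (the db_url, credentials and entity patterns below are placeholders, and the values are illustrative):

```yaml
# configuration.yaml - make the recorder commit less often and store less
recorder:
  db_url: mysql://hass:PASSWORD@nas.local:3306/homeassistant?charset=utf8mb4  # placeholder connection string
  commit_interval: 30         # batch inserts every 30 s instead of every second
  purge_keep_days: 7          # keep a week of history rather than the default 10 days
  exclude:
    domains:
      - automation
    entity_globs:
      - sensor.*_linkquality  # example of chatty entities not worth recording
```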
I'd add another option to reduce disk writes, more general than log2ram alone: anything-sync-daemon could be used instead (and it is actually in use on my Home Assistant install, which was set up with the supervised installer).
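For anyone who can touch the host OS like that, asd works by keeping whitelisted directories in tmpfs and syncing them back to disk on a timer and at shutdown. A sketch of `/etc/asd.conf` under those assumptions (the path is just an example for a supervised install, and exact option names can differ between versions):

```sh
# /etc/asd.conf - directories asd mirrors into tmpfs and periodically syncs back
WHATTOSYNC=(
  '/usr/share/hassio/homeassistant'   # example: the HA config dir on a supervised install
)

# Optional: keep only changed files in RAM via overlayfs instead of copying the whole tree
USE_OVERLAYFS="yes"
```

After editing the config you would enable the asd systemd service so the sync runs at boot, on its resync timer and at shutdown.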