That’s fantastic. Do you think it would take much to have this as an official feature in the likes of HAOS, something that could maybe be toggled in some advanced/system menu or even via configuration.yaml?
It feels so close and could help all of us HAOS users if it was somehow added there as a supported option to reduce disk writes.
Essentially it’s really a piece of cake to do, but apparently reducing I/O to lower the wear Home Assistant imposes on hardware isn’t really a top priority.
Actually, I think all journaling services could even be disabled, since as far as I can see they aren’t accessible through HA interfaces like the GUI or the HA CLI anyway. See for example: https://community.home-assistant.io/t/wth-is-there-no-home-assistant-logs-to-remote-syslog-integration/473949/10. Besides, all logging is being done at least twice: the log files in the add-on overview haven’t changed even after applying these rules (and with no logging to syslog).
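For context, on a plain systemd system the journal can be kept in RAM only. A minimal sketch, assuming you have shell access to edit /etc/systemd/journald.conf (which HAOS doesn’t offer through the GUI or CLI):

```ini
# /etc/systemd/journald.conf
[Journal]
# Keep the journal in RAM only; nothing is persisted under /var/log/journal
Storage=volatile
# Cap the in-RAM journal so it cannot grow unbounded (illustrative value)
RuntimeMaxUse=64M
```

Followed by `systemctl restart systemd-journald` to apply. Logs written this way vanish on every reboot, which is exactly the trade-off being discussed here.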
However, I think keeping things in memory as long as reasonably possible is far more interesting, as it pays off more: logging is really small potatoes compared to the I/O traffic of the database, for example. And even for logging it matters that one sequential write imposes a lot less wear on the hardware than many random writes.
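The recorder already has a knob in this direction: `commit_interval` buffers states and events in memory and flushes them to the database in one batch. A minimal sketch; the 30 seconds is an illustrative value, not a recommendation:

```yaml
# configuration.yaml
recorder:
  # Flush buffered states/events to the database every 30 seconds instead
  # of near-continuously: fewer, larger sequential writes (value illustrative)
  commit_interval: 30
```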
I am surprised that no one suggested using a micro-SD card with wear leveling, and using the largest card you can. If your system only needs 8 GB and you use a 128 GB wear-leveling SD card, you have extended your SD card’s life by 16x.
FWIW, I have a Raspberry Pi acting as my MQTT broker that has been running 24/7 for the past four years without a hiccup.
Yeah, might be an option for sure. Although, if you read my other thread, you would have seen that Home Assistant was writing about 60 GB/day to my SSD, which I reduced to 40 GB/day by excluding entities I’m not interested in keeping long-term from the recorder (config sketch at the end of this post). With a database of only about 1 GB at the moment, that means a lot of the traffic is caused by the rewriting/remodeling of the database.
That amount of I/O traffic is also going to wear out a wear-leveling SD card. My four-month-old SSD already shows 7.2 TB written.
For a 128 GB SSD (which has wear leveling by default) the guaranteed TBW is in the range of 30-120 TB, which means I would expect to see failure anywhere between one and five years from now. For a micro-SD card the expected TBW is usually much lower (except for the better brands like SanDisk Max Endurance or Samsung Pro Endurance, which state 400-800 TB written).
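For anyone reading along, the recorder exclusion mentioned above looks roughly like this in configuration.yaml; the entity names and globs are made up for illustration:

```yaml
# configuration.yaml
recorder:
  exclude:
    # Hypothetical examples: replace with the chatty entities whose
    # history you don't need to keep
    entities:
      - sensor.cpu_temperature
    entity_globs:
      - sensor.*_signal_strength
```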
Awesome guide! To clarify: if I want to use `recorder: {db_url: 'sqlite:///:memory:'}`, do I need to create an in-memory SQLite database first, or will one be created automatically?
It should work automatically; there is nothing you need to do. “In memory” here does not mean something like a RAM disk, but literally within the memory of your HA instance, with the documented effect that you lose everything on a restart. See In-Memory Databases for more technical details.
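For completeness, the same thing in block form in configuration.yaml:

```yaml
# configuration.yaml
recorder:
  # Keep the entire recorder database in RAM; all history is lost on restart
  db_url: "sqlite:///:memory:"
```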