More or less I agree with you; I'll reply only where I think we misunderstand each other.
What I think I’m expecting is that /tmp usage will increase continuously with SD usage staying steady until once an hour the /tmp usage will decrease and SD usage will increase.
Nope. The database in the in-memory /tmp location is always there in RAM, in its entirety. What is written to SD (the /data folder of the add-on's Docker container) is a mysqldump export, like a backup: everything remains in the in-memory /tmp AND you get a "copy" of it in /data, a snapshot on the SD card. (I use mysqldump, so it is like a plain old database backup, but I intentionally do not use that word, because "backup" is also a function in HA; I call it an export.)
This /tmp → /data export can happen "on certain occasions" as well as periodically; the export itself is the same. In the case of an HA backup (which is one of those "certain occasions"), the exported content in /data becomes part of the HA backup file. In all cases (periodic included), the exported database content in /data is reimported into /tmp when the add-on starts, because there is nothing in /tmp at that point: when the add-on is stopped, /tmp is gone.
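The cycle above can be sketched roughly like this (the paths and the commands' exact form are my assumptions for illustration, not necessarily what the add-on actually runs):

```shell
# Export (periodic or "on certain occasions"): dump the in-memory DB
# living on the tmpfs /tmp into the SD-backed /data folder.
mysqldump --all-databases > /data/databases.sql

# Import (on add-on start): /tmp is empty again, so the last snapshot
# is loaded back into the freshly created in-memory database.
mysql < /data/databases.sql
```

The key point is that the dump is a full snapshot each time; the live database in /tmp is never moved, only copied.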
Yeah, under normal circumstances this in-memory database behaves like a real one and you don't lose data. Only a power outage causes data loss, because any change made in the in-memory /tmp database after the last export to SD is lost. But this is a trade-off.
I have only a few recorder exclusions
I prefer only INclusions for entity states. My states tables are huge even though everything is opt-in in my case. I use EXclusion only for events; they are much smaller, so not a big problem: in my case states require 100 times more database space than events.
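As an illustration, an include-only recorder setup looks roughly like this in configuration.yaml (the entity names here are made-up examples, not my actual config):

```yaml
recorder:
  include:
    entities:
      - sensor.living_room_temperature
      - binary_sensor.front_door
  exclude:
    event_types:
      - call_service
```

With include-only entity filtering, anything not listed simply never reaches the states table.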
I’ve changed to hourly [periodic export]
Yeah, with a small database it is not an issue, but once the in-memory DB add-on's export grows to several MBs, it means more wear on the SD card. It is a decision, another trade-off.
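A back-of-the-envelope example of why the export interval matters for wear (the 5 MB dump size is just an assumed number):

```shell
#!/bin/sh
# Rough SD wear estimate: an hourly export rewrites the whole dump each time,
# so daily writes scale linearly with dump size and export frequency.
dump_mb=5            # assumed size of one mysqldump export, in MB
exports_per_day=24   # hourly export
per_day=$((dump_mb * exports_per_day))
echo "${per_day} MB written to SD per day"
```

Halving the frequency halves the wear, at the cost of doubling the worst-case data-loss window.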
What you’re saying here is that latest sensory in RAM is intended, for example, the temperature over the past 10 minutes, but long term data exported to SD is intended, for example, the temperature over the past 10 months. Right?
Not completely right. The "latest" AND the "long term" data are both in RAM; that is where the real database is. It is regularly exported to SD in its entirety. So with an hourly export, after, let's say, 50 minutes, the "latest" 50 minutes exist only in RAM, while everything as it was 50 minutes ago is also on the SD card as a snapshot. With an hourly export you can therefore lose only 0-60 minutes of data (the part that is only in RAM); everything else is both in RAM and on SD.
“Auto update” toggled ON right now
Having spent nearly 4 decades around IT, I still do not have the courage to turn on anything "auto" for critical stuff. My heating is critical.
Yeah, bash, the CLI, SSH into HA, or even SSH into the host OS is a bit hidden in the docs; they don't want absolutely inexperienced new users to ruin everything.
The main problem with eMMC (on an SD adapter) is that it is expensive: my 32 GB eMMC, even ordered from a "far, big, eastern" country, cost nearly as much as my second-hand rPI 3s. But if I have to travel 200+ km to recover a failed SD card, that costs even more for me. This is another trade-off.