In-memory MariaDB (MySQL) add-on for recorder/history integration

Thanks for this add-on!
I am running the latest HassOS directly on an x86-64 laptop, installed on the hard drive.

Add-on version: 2.6.1.2
You are running the latest version of this add-on.
System: Home Assistant OS 11.1 (amd64 / generic-x86-64)
Home Assistant Core: 2023.11.3
Home Assistant Supervisor: 2023.11.3
After one night of running it, I can hear that the hard drive is still spinning and being accessed.

Could I expect the hard drive to spin down?

I think HA writes to the drive quite regularly on its own:

  • the HA Core log file (home-assistant.log)
  • each add-on has its own log that goes into the host’s log system, and some of the add-ons are quite chatty, or at least the regular healthcheck causes some logging
  • HA saves its state in a lot of files under the .storage folder

So I think it is impossible to have an HA installation that never writes to the hard drive, SD card, or eMMC.

If you configured this in-memory DB add-on to never export its in-memory DB to storage (SD card, hard drive), it shouldn’t write to the log or to storage at all, so I think it is something else that writes to the drive regularly.
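
If you want to see roughly how much is actually being written, something like this rough sketch could work (run it in a shell on the HAOS host or in the SSH add-on; sda is only an assumption, substitute your real disk device):

# Field 7 of /sys/block/<dev>/stat is the number of 512-byte sectors written
before=$(awk '{print $7}' /sys/block/sda/stat)
sleep 3600   # wait an hour
after=$(awk '{print $7}' /sys/block/sda/stat)
echo "Written in the last hour: $(( (after - before) * 512 / 1024 / 1024 )) MB"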

Hello.
Here I tried to do a little comparison using In-memory MariaDB:
https://4pda-to.translate.goog/forum/index.php?showtopic=871505&st=44960&_x_tr_sl=ru&_x_tr_tl=en#entry129852450

Wow! Nice work! As I understand:

  • SQLite: 1GB
  • MariaDB: 420MB
  • In-memory MariaDB: 130MB
    … of SD card writes during the 3-day-long test period.

I would like to add:

  • By default, In-memory MariaDB does not write anything to the SD card at 02:00 (the export is turned off).
  • Even if it were turned on, with your approx. 100MB database that would be around 30MB (5+10+15MB) over the 3 days, and I don’t see any uptick in the “eMMC written” chart around 02:00 anyway.
  • So my assumption is that with the default settings HA on its own wrote 130MB to the SD card, MariaDB added 290MB, and SQLite added 870MB of wear to the SD card during these 3 days with your setup.

Yeah, HA saves its internal state as text files in the /homeassistant/.storage folder as states change, plus a lot of logs are stored by the OS, but I thought it was less than 40MB/day. Let’s say, if we have a 32GB SD card/eMMC, it gets completely overwritten approximately once a year. Hmmm, it should work, even if I thought it would take more time. Nice new info, thank you!

I redid the stats with the included commit, archiving every day at 02:00 (still the same link at the top; under item 3 there are two spoilers for comparison).
There are almost no differences at all.

I would also like to share a few wishes:

  1. I use an “In-memory MariaDB Memory usage” sensor implemented as a command_line sensor:
command_line:
  - sensor:
      name: In-memory MariaDB Memory usage
      unique_id: inmemory_mariadb_memory_usage
      # ssh to the host, docker exec into the add-on's container, run df on the
      # tmpfs-backed /tmp, take the Use% column from the data row, strip the %
      # sign and print the remaining number.
      command: printf '%d' `ssh deoptim@localhost docker exec -i addon_45207088_mariadb /bin/bash -c "df\ -h\ /tmp" | awk 'NR == 2{print $5}' | tr -d '%'`
      value_template: "{{ value | round(1) }}"
      scan_interval: 600
      unit_of_measurement: '%'
      device_class: data_size

I use it because I don’t know whether the tmpfs space allocated for the database (200MB by default) is running out or not. It would be great to display this kind of statistics separately, as entities.
Say, somewhere like here: [attached screenshot]

  2. As I understand it, if we make a daily commit to the archive, the size of the database imposes limitations (the larger the database, the longer the commit period needs to be for it to make any sense).
    In this case it would be good to split the commit file into periods (say, weeks). What do you think?

Yeah! :slight_smile: I see the small eMMC writes at 02:00, approx. 1, 5, and 10MB; that is the wear the in-memory add-on puts on the SD card! :slight_smile: My assumption was correct: all the other 100++ MB is HA alone. :frowning:

  1. tmpfs usage sensor:

    • It was on my wish list also.
    • The current sensors (CPU percent, etc.) are provided by the Supervisor and are hard-wired; they can’t be extended.
    • A hacky solution is to use HA’s REST API to periodically inject sensor states (see the sketch after this list). This is what I implemented, see: Report tmpfs usage diagnostic values as sensors · lmagyar/homeassistant-addon-mariadb-inmemory@37dbe3f · GitHub. I will test it for a few days in my beta repo, then I will release it. I tried to follow the strange naming of the sensors created by the Supervisor.
    • A better solution would be to create an integration (with devices and entities), giving these sensors a unique_id and making it possible for users to edit their names, icons, etc. But I will not do this. The only argument for it would be that the REST API doesn’t support unique_ids, see: HA.core REST API POST unique_id for entities?, but I hope there will be some standard solution in the future.
  2. incremental exports:

    • It is not completely clear what you meant by: ‘As I understand it, if we make a daily commit to the archive, the size of the database imposes limitations (the larger the database, the longer the commit period needs to be for it to make any sense). In this case it would be good to split the commit file into periods (say, weeks). What do you think?’
    • The add-on uses an SQL dump + gzip to save all the databases regularly, and it overwrites the previous dump.
    • So incremental dumps could decrease the wear on the SD card, but it isn’t worth the effort. The HA database has 2 main parts:
      • States and Events tables: these are purged regularly and are usually the bigger part of the DB. But there are only a few days of data in these tables, so the content will be really different when the next dump runs.
      • Statistics tables: HA never deletes any data from them; the add-on does (because we have limited storage space), let’s say after 6 months. So we could decrease the SD card wear by dumping only, let’s say, the last day of them, but recovering from 180 dumps in a reliable way would be a nightmare, and we would save maybe 1-10MB of SD card wear per day, while HA writes 100++MB daily whatever we do. So I won’t implement this.
    • UPDATE: Currently the add-on has to keep 2 consecutive copies of the dump, because it is possible that a dump is not written completely to the SD card’s memory before a power failure, even if the OS reports that it has handed all the data over to the card (it happened to me due to a faulty capacitor on my eMMC->SD adapter board). So there is an elaborate logic for handling even these 1+1 SQL dump files (roughly the rotation sketched below); I’m sure an incremental version of this would cause more problems than it solves.
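
The REST API injection mentioned above looks roughly like this from the outside (only a sketch: it uses a long-lived access token created on your HA profile page, and the entity id and URL are just examples; an add-on would typically use the Supervisor’s internal API proxy and token instead):

TOKEN='<your long-lived access token>'
USAGE=42   # e.g. the Use% value read with df inside the container

curl -s -X POST "http://homeassistant.local:8123/api/states/sensor.mariadb_tmpfs_usage" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"state\": \"${USAGE}\", \"attributes\": {\"unit_of_measurement\": \"%\", \"friendly_name\": \"In-memory MariaDB tmpfs usage\"}}"

And the 1+1 dump handling is roughly this idea (only a sketch with assumed names and paths and a bare mysqldump call without credentials; not the add-on’s actual script):

#!/bin/bash
set -euo pipefail

DUMP_DIR=/data/databases            # assumed location
NEW="$DUMP_DIR/dump.sql.gz.new"
CUR="$DUMP_DIR/dump.sql.gz"
OLD="$DUMP_DIR/dump.sql.gz.old"

# Write the new dump to a temporary file, never over the current one.
mysqldump --all-databases | gzip > "$NEW"

# Try to push the data out of the OS caches before rotating.
sync

# Keep the previous dump as a fallback in case the new one turns out to be
# truncated or corrupt after a power failure.
if [ -f "$CUR" ]; then mv "$CUR" "$OLD"; fi
mv "$NEW" "$CUR"
sync

# On restore: verify "$CUR" (e.g. with gzip -t) and fall back to "$OLD" if it is damaged.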

And the new version (with the sensors) is out. :sunglasses: