In-memory MariaDB (MySQL) add-on for recorder/history integration

In-memory MariaDB database for Home Assistant.

This is a fork of the official add-on!

This version uses tmpfs to store the MariaDB databases in memory. If you are trying to minimize your SD card’s wear, give this add-on a try.

Even though this is an in-memory database, it can automatically export the database’s content (from memory to SD card) during backup, update, restart, or even periodically, and it can automatically import the content (from SD card to memory) when the add-on starts again. The database dump is gzip-ed before being written to storage to minimize SD card wear.
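Conceptually, the export/import cycle looks something like this (a simplified sketch of the idea only; the actual commands, file names, and credential handling inside the add-on may differ):

# Sketch: on backup/update/restart/schedule, dump the in-memory DB compressed to the SD card
mysqldump homeassistant | gzip > /data/databases.sql.gz
# Sketch: on the next start, reload the last export from the SD card into memory
zcat /data/databases.sql.gz | mysql homeassistant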

It won’t protect you from power failures completely, though. After a power failure, when the add-on is restarted, it will import the last known exported database content. So when e.g. daily periodic export (from memory to SD card) is enabled, you will lose the latest sensory data within that day, but your long-term statistics will remain mostly intact:


(Screenshot: long-term statistics from an in-memory database)

Note: If you update or restart the add-on, please stop HA Core to avoid error messages that the database is not available (during a plain backup, stopping HA Core is not necessary). You can execute the /share/bin/update-mariadb-inmemory-while-ha-core-restarted script from the command line to do it automatically.
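Roughly, that script automates a sequence like the following (a sketch of the idea, not the script’s actual contents):

# Sketch: restart HA Core around the add-on update so the recorder
# never sees the database disappear mid-write
ha core stop
ha addons update 45207088_mariadb
ha core start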

See the Documentation tab for the required configuration changes for the recorder integration!

Link & how to install: GitHub - lmagyar/homeassistant-addon-mariadb-inmemory: In-memory fork of the official Home Assistant MariaDB add-on (https://github.com/home-assistant/addons/tree/master/mariadb)


In the meantime I updated the documentation a little; there are better configuration examples, though the add-on binaries are the same, so there is no new version. If somebody has started to use it, I suggest reviewing some settings.

I’ve been using it for the past month, and it seems to be quite stable.

Does this solution continue to work better than db_url: 'sqlite:///:memory:'?
Does your forked add-on need to be updated every month with each Home Assistant release?
I’d rather run the DB in memory, but I keep randomly losing the data with (sqlite3.OperationalError)

Issues

Yeah, the random sqlite3.OperationalError was the main and only reason I created this add-on. I’m using it on 2 HA installs without any issue; it simply works. So yeah, it works better, much better. :sunglasses:

The only issue I ran into was setting too small a memory limit (the tmpfs/size option in the config) and running out of memory. I prefer to set a reasonable limit; it forces me to select which entities and events I want to store in the DB. (And I don’t know exactly how HA OS uses memory; I supposed that using too much memory would result in swapping, and so in using the SD card again. UPDATE: AFAIK it won’t, it uses a compressed in-memory filesystem for swap, so I suppose too much memory usage will result in high CPU load instead.)

Updates

I release an update when the original add-on changes (I merge the changes from it), or when the DB schema changes in HA (I had to add TRANSACTIONAL=0 to each table so it does NOT use too much memory).

But you usually don’t need to update. The changes from the original repo usually don’t matter for an in-memory DB, and the schema changes from HA are usually not critical: HA will update the schema without any issue (adding/modifying columns, etc.)

The only time you should update the add-on is when a new table is added to the DB, like the statistics table recently; then the in-memory version should add the TRANSACTIONAL=0 parameter to the table to save memory.
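For illustration, this is roughly the kind of statement involved; the table name is real, but running the mysql client inside the container like this (and without a password) is my assumption, not the add-on’s documented procedure:

# Sketch: switch one table's Aria TRANSACTIONAL option off to save memory
docker exec -it addon_45207088_mariadb \
  mysql homeassistant -e "ALTER TABLE statistics TRANSACTIONAL=0;"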

But before updating the add-on, you can connect to it on port 3306 with e.g. HeidiSQL, DBeaver, or Beekeeper Studio, dump the content of the tables, and reload them after the restart (a CLI-only alternative is sketched after the list). How to do it:
- > ha core stop
- from the DB UI: save the DB content (no table drop, no table create, only the content: a lot of INSERT lines in the generated SQL file)
- > ha ad update 45207088_mariadb
- from the DB UI: load the DB content
- > ha core start
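If you prefer the command line over a GUI, a rough equivalent looks like this (a sketch only; it assumes the mysqldump/mysql clients inside the container can connect without extra credentials, otherwise add -u/-p):

# Sketch: dump only the data before the update, reload it afterwards
ha core stop
docker exec addon_45207088_mariadb mysqldump --no-create-info homeassistant > ha_data.sql
ha ad update 45207088_mariadb
docker exec -i addon_45207088_mariadb mysql homeassistant < ha_data.sql
ha core start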

Note

If somebody helps me figure out how not to use a custom schema, i.e. how to force MariaDB to use the TRANSACTIONAL=0 parameter for each table that HA/SQLAlchemy creates in it, even these updates would be unnecessary. I’m new to HA and MariaDB, and I couldn’t figure out how to do it.

Thank you for your detailed reply. I am trying to use the add-on. I added it to the repositories in Supervisor, but a box is not showing up to install it like for other add-ons. Am I missing something?

For some reason, it took the installation box a while to show up. I refreshed the page and restarted Supervisor and it didn’t show up, but after waiting a while the box appeared, and now I can install it.

I got it working! The only suggestion is that the documentation doesn’t mention replacing “PASSWORD” in configuration.yaml with the password that you create when configuring the add-on before starting it for the first time.

Thank you for your feedback! I’ve updated the documentation about the waiting/reload during installation and about the password setting in configuration.yaml.

FYI: I’ve added data retention functionality to this In-memory MariaDB add-on.

The add-on will automatically export all the databases’ content during backup, update, or restart, and will automatically import the content when the add-on starts again.

After a power failure, when the add-on is restarted, it will import the last known exported database content.

So in case of normal HA tinkering (restarts, updates, etc.) it behaves like a real database; you will hardly notice any difference. In case of a power failure, at least the long-term statistics will be more or less saved.

This functionality is turned on by default, but can be disabled in the config.

I plan to add the possibility of a regular data dump, so even without regular backups there will be regular data saving points.


Hello, thanks for the add-on, I’m trying to set it up. I have this line in my config:
automation: !include automations.yaml

And I have an error after adding your config:
duplicated mapping key at line 32, column -196: automation:

Also, should I remove default_config: to get your recorder: configuration to work? Should I then add all of the integrations from Default Config - Home Assistant manually? Thank you

P.S. Looks like your recorder: config works with default_config:; only the question about automation is left. I just removed that automation section to test, but I want to add it back, and then I get the error I showed earlier.

I don’t think default_config causes the error.

If you separate things out from the main configuration.yaml with lines like automation: !include automations.yaml, your automations.yaml must not contain the automation: key itself; it should contain only this:

  - alias: Auto purge with repack
    trigger:
      platform: time
      at: "04:12:00"
    action:
      service: recorder.purge
      data:
        keep_days: 7
        repack: true

Similarly, if you have a recorder: !include recorder.yaml line in configuration.yaml, the separated recorder.yaml should contain only something like:

db_url: mysql://homeassistant:PASSWORD@45207088-mariadb/homeassistant?charset=utf8mb4
auto_purge: false
exclude:
  event_types:
    - call_service
include:
  entities:
    - <the entity ids you really need>

I made these copy-paste errors several times myself. :slight_smile: For more details see: Splitting up the configuration - Home Assistant


Hello Laszlo (and other readers),
In-memory MariaDB looks like a great add-on for HA on SD cards!

Overall my question is: as a new homeassistant-addon-mariadb-inmemory user, how can I demonstrate to myself that In-memory MariaDB is in fact working and that the database is using RAM rather than the SD card?

More detail about my question:
I’ve been an HA user since 2017 and run my own HA in a venv, but I’m building some HA systems to send across the country to my sons to get them up to speed with HA quickly. I’m building them on RPi 4s with:

  • Home Assistant 2022.11.1
  • Supervisor 2022.10.2
  • (hassio) Operating System 9.3
  • using SD card

and to maximize the time until the SD card fails, I’m trying out In-memory MariaDB. From the description it seems like your add-on is just what I need. Great!

But as a newbie to both MariaDB and In-memory MariaDB, I wonder how to know that it’s doing what I think I want it for, reducing SD card writes to a bare minimum… how do I know that I haven’t misconfigured it so badly that it’s doing nothing for me?

Testing I’ve done so far is to:

  • installed HAOS on several 1 GB RPi 4s
  • installed In-memory MariaDB per the GitHub docs, with the password changed but all other config at defaults
  • verified in the add-on’s log that lines like “s6-rc: info: service mariadb-core-post successfully started” say successfully started
  • ensured in configuration.yaml that recorder is configured to exclude only sensor.date and that purge_keep_days is 1 day
  • made an automation that increments a counter once a second

and I’m going to let it run for several days.

After 8 hours or so I do see that the history of

  • sensor.disk_use_percent_config is staying constant, and
  • sensor.memory_use is slowly increasing.
    [edit] But does that prove that I’ve correctly set In-memory MariaDB to minimize SD writes?

So, to restate the question: in addition to what I’ve just described, is there a more straightforward way to show that In-memory MariaDB is doing what I think I want it for, reducing SD card writes to a bare minimum?

Joe

Hi Joe!

I’ve configured the add-on to have a tmpfs (i.e. in-memory) filesystem at /tmp, and configured MariaDB to store its database at /tmp/databases. So I’m quite sure it uses memory instead of the SD card. :wink:

If you have HA CLI access, use these commands:

docker exec -it addon_45207088_mariadb /bin/bash
df

I’ve used a separate tmpfs for the /tmp folder to be able to limit the max memory usable by the database. As far as I was able to understand, HA OS doesn’t use the SD card for virtual memory but uses a compressed in-memory filesystem for swap as well (I don’t remember the details now), and I didn’t want to let the database eat up all the RAM and CPU of the system.
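Just to illustrate the idea, the effect is the same as mounting a size-capped tmpfs by hand (this is not the add-on’s actual startup code, only the kind of mount it requests):

# Sketch: an in-memory filesystem capped at 200 MB mounted over /tmp
mount -t tmpfs -o size=200m tmpfs /tmp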

Things to consider:

  • recorder likes to record a lot of stuff, so I prefer an “opt in” approach instead of the default “opt out”; this can lead to complaints from new users about why their graph isn’t shown
  • memory exports to SD
    • by default the add-on exports its database from memory to the SD card when it is backed up by HA or when it is stopped gracefully (add-on restart or update, host reboot); the content is compressed, so in my case 150 MB is the max size of the database area, the database files are 40-60 MB (it needs some extra space in /tmp/databases to repack its files after old data is deleted from the database), and the compressed export is less than 10 MB, so 10 MB for each backup is OK I think
    • exporting the memory content to the SD card periodically is not turned on by default, but I use it; 10 MB/day is nothing, HA writes much more other stuff to the SD card daily
    • the periodic database export to the SD card is not there to preserve the latest sensory data shown on graphs (that will be lost after a power failure) but the long-term statistics; long-term graphs can carry much more info than the latest sensory data, and I like them
  • statistics data grows continuously, and I haven’t had time to read the docs on how to limit its timespan (if that is possible); e.g. I purge my database daily, but I still have 1-year-old statistics in the database, it is on my todo list…
  • the latest 2.5.1.6 version seems to be quite stable, and it survives power outages too (the last exported content will be re-imported); this caused some problems lately, because HA doesn’t wait for the full startup of add-ons and started to use the database before its data was imported, causing errors
  • the only problem I see now (beyond the growing statistics data size) is that updating the add-on is better done manually from the CLI, because it is better to stop HA before we pull the database out from under it; this is not the best experience for new users, but I don’t see a better approach now

Oh, and I use an eMMC-to-SD adapter and have put eMMC memory into the SD card slot on my Pis; AFAIK eMMC is much faster and has much better wear leveling than a plain SD card.

Hi Laszlo, thanks for the well-thought-out, informative reply!

Now that you explain what to look for, it’s very plain that it’s there! Here’s mine as of a few minutes ago, showing your /tmp in a tmpfs; I’m still using your default “size: 200m”, so the size shown seems right:

tmpfs 204800 3980 200820 2% /tmp

and in addition to watching sensor.memory_use slowly increasing within HA, I’ll also monitor it from the CLI.

So I’ll change my settings:

enabled: true
periodic: disabled

from disabled to hourly. What I think I’m expecting is that /tmp usage will increase continuously, with SD usage staying steady, until once an hour the /tmp usage decreases and SD usage increases. (The whole purpose of the test setup that I’m writing about here is to demonstrate to myself that I’ve correctly configured In-memory MariaDB to minimize SD writes.)

Things to consider about your things to consider: :grinning_face_with_smiling_eyes:

Agreed. The purpose of this exercise is to show me that I’ve got your software working. Now that I’m sure I have basic functionality installed and I have only a few recorder exclusions, I will add a lot more sensors and automations for it to record, to really demonstrate the growth of /tmp versus SD.

I’ve changed to hourly in this test setup so as to demonstrate database being stored in RAM with periodic export to SD card.
[edit] I changed it back to daily so I can see a larger change inside memory before it’s exported to SD card.

What you’re saying here is that the latest sensory data in RAM covers, for example, the temperature over the past 10 minutes, while the long-term data exported to SD covers, for example, the temperature over the past 10 months. Right?

I will keep watch for how to do that, although I have “Auto update” toggled ON right now

And as asides, THANK YOU for the method to change to the bash shell and for the info about eMMC-SD adapters. I didn’t realize that bash was easily available in HAOS, and I didn’t know those adapters existed (I use a separate SSD on my own HA instance).

More or less I agree with you; I’ll answer only where I think we misunderstand each other. :slight_smile:

What I think I’m expecting is that /tmp usage will increase continuously with SD usage staying steady until once an hour the /tmp usage will decrease and SD usage will increase.

Nope. The database in the in-memory /tmp location is always there in memory, in its entirety; what is written to SD (the /data folder of the add-on’s docker container) is a mysqldump export. It is like a backup: everything remains in the in-memory /tmp AND you get a “copy” of it in /data on the SD card, a snapshot. (I use mysqldump, which is like a plain old database backup, but I intentionally do not use that word, because backup is also a function in HA; I call it export.)

This /tmp → /data export can happen “on certain occasions” and periodically; the export itself is the same. In the case of an HA backup (which is one of the “certain occasions”), the exported content in /data is part of the HA backup file. In all cases (periodic exports included), the exported database content in /data is re-imported into /tmp when the add-on is started (because there is nothing in /tmp when it starts; when the add-on is stopped, /tmp is gone).

Yeah, under normal circumstances this in-memory database behaves like a real one and you don’t lose data; only a power outage causes data loss, because any change in the in-memory /tmp database after the last export to SD will be lost. But this is a trade-off.

I have only a few recorder exclusions

I prefer only INclusions in the case of entity states. My states tables are huge even though everything is opt-in in my case. I use EXclusion only in the case of events; they are much smaller, not a big problem. In my case states require 100 times more database space. :confused:

I’ve changed to hourly [periodic export]

Yeah, in the case of a small database it is not an issue, but if the in-memory DB add-on’s export size starts to grow to several MBs, it means bigger wear for the SD card. It is a decision, another trade-off.

What you’re saying here is that latest sensory in RAM is intended, for example, the temperature over the past 10 minutes, but long term data exported to SD is intended, for example, the temperature over the past 10 months. Right?

Not completely right. The “latest” AND the “long term” data are both in RAM; that is where the real database is. It is regularly exported to SD altogether. So in the case of an hourly export, after, let’s say, 50 minutes, the “latest” 50 minutes exist only in RAM, and everything as a snapshot from 50 minutes ago is on the SD card as well. So with hourly export you can lose only 0-60 minutes of data (what is only in RAM); anything else is both in RAM and on SD.

“Auto update” toggled ON right now

Spending nearly 4 decades around IT, I still don’t have the courage to turn on anything automatic on “critical” stuff. :smiley: My heating is critical. :smiley:

bash, eMMC

Yeah, bash, the CLI, SSH to HA or even SSH to the host OS are a bit hidden in the docs; they don’t want absolutely inexperienced new users to ruin everything.

The main problem with eMMC (on an SD adapter) is that it is expensive; my 32 GB eMMC, even ordered from a “far, big, eastern” country :stuck_out_tongue:, cost nearly as much as my second-hand RPi 3s. But if I have to travel 200+ km to recover a failed SD card, that costs even more for me. This is another trade-off.

Again, thanks so much for your gracious and well-expressed reply.
I’ve been running test RPi systems here for a few days now, and here’s some results.

THE MAIN THING THAT I’VE LEARNED FROM YOU
To quote myself:

And to gather and summarize how I understand your answers:
On a new install’s (i.e. a test install’s) CLI run the commands:

docker exec -it addon_45207088_mariadb /bin/bash
df

and verify that there is no reference to any tmpfs mounted as /tmp

Then install, configure, and start In-memory MariaDB per github docs and run the command
df

and expect to see an item like:

Filesystem           1K-blocks      Used Available Use% Mounted on
...
tmpfs                   204800      3976    200824   2% /tmp

and maybe see the second number, “Used”, increase at first as history starts being stored.
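For a quick repeatable check you don’t even need an interactive shell; something like this should work from the host CLI too (a sketch, relying only on df being present in the container, which the output above suggests):

# Sketch: print only the /tmp mount's usage, non-interactively
docker exec addon_45207088_mariadb df /tmp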

SOME OTHER THINGS THAT I’VE ALSO LEARNED FROM YOU:

  1. Don’t auto-update software (how quickly you convinced me!)
  2. The sequence from Recorder → History in RAM, then RAM → disk drive, isn’t what I thought it was (so I’m going to study the HA docs about their inner workings; that’s not a subject for this thread)
  3. Notably: the “latest” AND the “long term” data are in RAM, where the real database is, and it is regularly exported to SD altogether
  4. In HAOS it’s possible to use bash and other features that are not widely documented (also not a subject for this thread)
  5. There is such a thing as an eMMC (on an SD adapter) to replace SD cards without requiring a separate SSD
  6. Once my testing is finished and I start building my sons’ actual HAs, I will include a minimum of history

The two HA instances that I’ll end up sending to my sons will be in two corners of the continental USA while I’m in a third corner, so I especially appreciate your help! It would be phone calls for me rather than 200+km travels, but I still want to minimize service calls!!

Only one point to clarify: docker exec -it addon_45207088_mariadb /bin/bash will show the internal filesystem of the already installed and running in-memory add-on’s docker container; with df you see that /tmp is a separate tmpfs with a configurable, limited max capacity, and the HA database is at /tmp/databases/homeassistant. In this add-on, tmpfs is not simply enabled (this is a parameter of the add-on: whether it wants a tmpfs mounted at /tmp), but its size is also limited.

And note that when the database is exported from RAM to SD, it is compressed on the fly to minimize SD wear as much as possible.

Oh, and the recorder integration (the component of HA that stores the states in a DB) has a commit_interval parameter. The only problem with it is that if you increase it to e.g. 1 minute, you won’t be able to read the data still sitting in the commit “cache”, i.e. you won’t be able to see the last 1 minute on the charts. This was the reason I started this add-on; maybe this has changed in the past year(s), but I don’t think so.

And some off-topic advice: add SSH access to the system (use the community add-on). And don’t forget to enable port forwarding on the routers. :wink:

PSA/FYI: New function

Periodically purge old statistics values. Enabled by default (purges values older than 6 months).

Home Assistant never deletes old statistics data, so the database size can outgrow the memory limit.

Optionally it can archive the old statistics values (export them with mysqldump to the /share folder before deleting them; see the sketch below). Because why not.
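Roughly, the archive step does something of this shape, run from a shell where /share is mounted (e.g. the SSH add-on); the column name, cut-off, and file name here are my assumptions for illustration, not the add-on’s actual code:

# Sketch: dump statistics rows older than 6 months to /share, compressed
mysqldump -h 45207088-mariadb -u homeassistant -p homeassistant statistics \
  --where="created < NOW() - INTERVAL 6 MONTH" | gzip > /share/statistics-archive.sql.gz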

So currently it can behave more or less identically to a true database; the only situation where you can lose some data is a power outage. In that case the periodic data export function is useful: you will lose only the data recorded after the last periodic export.

I use daily periodic export and monthly statistics purge with archive. So I can lose the past day’s sensory data, but I can still draw nice half-year-long statistics graphs (even after a power outage).

Hi, I’m running a system where MariaDB is already installed and available. I would like to create a DB for HA and integrate it with this add-on, to have the in-memory functionality and reduce writes to the SD card.
Is it possible to do what I mean?

If you want to import a “legacy” MariaDB into this add-on (see the sketch after the list):

  • stop HA core
  • mysqldump the old homeassistant database
  • log into this add-on’s command line with docker exec -it addon_45207088_mariadb /bin/bash
  • import the dump
  • modify HA’s configuration.yaml (see the add-on’s documentation)
  • start HA
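For illustration, the whole sequence could look roughly like this (the old host name, user, and dump file are placeholders, and the passwordless client inside the container is an assumption; adjust to your setup):

# Sketch: move a legacy MariaDB's content into the in-memory add-on
ha core stop
mysqldump -h <old-db-host> -u homeassistant -p homeassistant > ha_dump.sql
docker exec -i addon_45207088_mariadb mysql homeassistant < ha_dump.sql
ha core start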

If you want to use this add-on next to a HA Container installation (see: Installation - Home Assistant), that is not possible. Even though add-ons are docker images, they require a few things around them to run (the HA and Supervisor APIs, managed configuration options, the backup/restore system, etc.), so add-ons can’t be used if you installed HA Container.
