In-memory MariaDB (MySQL) add-on for recorder/history integration


If you are trying to minimize your SD card's wear by using the built-in SQLite with the db_url: 'sqlite:///:memory:' recorder configuration, but are fed up with the countless [homeassistant.components.recorder.util] Error executing query: (sqlite3.OperationalError) cannot commit - no transaction is active errors in your HA log, give this add-on a try.

It will also protect you from the data loss caused by HA Core restarts when in-memory SQLite is used. It won't, however, protect you from power failures, or from add-on or host restarts and updates.

This version uses tmpfs to store the MariaDB databases in memory. The default InnoDB storage engine is replaced with the Aria storage engine, because InnoDB wastes a great amount of disk space and the Memory storage engine can't handle TEXT columns.

Link & how to install: GitHub - lmagyar/homeassistant-addon-mariadb-inmemory: In-memory fork of the official Home Assistant MariaDB add-on
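
To point the recorder at the add-on instead of in-memory SQLite, the configuration looks roughly like this. This is only a sketch: the hostname, user, password and database name are assumptions and depend on how you configured the add-on, so check the add-on's documentation for the exact values.

```yaml
# configuration.yaml (sketch; adjust credentials and hostname to your add-on config)
recorder:
  db_url: mysql://homeassistant:YOUR_PASSWORD@45207088-mariadb/homeassistant?charset=utf8mb4
```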


In the meantime I have updated the documentation a little; there are better configuration examples now. The add-on binaries are the same, though, so there is no new version. If you have already started using it, I suggest you review some of the settings.

I've been using it for the past month, and it seems to be quite stable.

Does this solution still work better than db_url: 'sqlite:///:memory:'?
Does your forked add-on need to be updated every month with each Home Assistant release?
I'd rather run the DB in memory, but I keep randomly losing the data with (sqlite3.OperationalError).


Yeah, the random sqlite3.OperationalError was the main and only reason I created this add-on. I'm using it on 2 HA installs without any issue; it simply works. So yeah, it works better, much better. :sunglasses:

The only issue I ran into was setting too small a memory limit (the tmpfs/size option in the config) and running out of memory. I prefer to set a reasonable limit; it forces me to select which entities and events I want to store in the DB. (And I don't know how HA OS uses memory; I suppose using too much of it will result in swapping, and the SD card being worn again.)
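
To keep such a memory limit workable, the recorder's include/exclude filters can be used to limit what gets stored. A minimal sketch (the entity, domain and event type names here are hypothetical, pick your own):

```yaml
# configuration.yaml (sketch; substitute your own entities and domains)
recorder:
  include:
    domains:
      - light
    entities:
      - sensor.living_room_temperature  # hypothetical entity
  exclude:
    event_types:
      - call_service  # a very chatty event type
```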


I release an update when the original add-on changes (I merge the changes from it), or when the DB schema changes in HA (I had to add TRANSACTIONAL=0 to each table so it doesn't use too much memory).

But you usually don't need to update. The changes from the original repo usually don't matter for an in-memory DB, and the schema changes from HA are usually not critical: HA will update the schema without any issue (adding or modifying columns, etc.).

The only time you should update the add-on is when a new table is added to the DB, like the statistics table recently; then the in-memory version has to add the TRANSACTIONAL=0 parameter to that table to save memory.
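
What the add-on does to such a new table can be sketched with a statement like this (using the statistics table as the example; the exact DDL the add-on applies may differ):

```sql
-- Switch the new table to Aria without transactional logging (sketch)
ALTER TABLE statistics ENGINE=Aria TRANSACTIONAL=0;
```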

But before updating the add-on, you can connect to it on port 3306 with e.g. HeidiSQL, DBeaver or Beekeeper Studio, dump the content of the tables, and reload them after the restart. How to do it:

  • > ha core stop
  • from the DB UI: save the DB content (no table drop, no table create, only the content, i.e. a lot of INSERT lines in the generated SQL file)
  • > ha addons update 45207088_mariadb
  • from the DB UI: load DB content
  • > ha core start
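
If you prefer the command line over a GUI, the dump/reload steps above can be sketched with mysqldump. The host, user, password and database name are assumptions here; match them to your add-on configuration:

```shell
# Dump only the row data (no CREATE/DROP statements), matching the "save DB content" step
mysqldump --host=<addon-host> --port=3306 --user=homeassistant -p \
  --no-create-info homeassistant > dump.sql

# ...stop core, update the add-on, then reload the content before restarting core
mysql --host=<addon-host> --port=3306 --user=homeassistant -p \
  homeassistant < dump.sql
```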


If somebody helps me figure out how not to use a custom schema, i.e. how to force MariaDB to apply the TRANSACTIONAL=0 parameter to each table that HA/SQLAlchemy creates in it, even these updates would become unnecessary. I'm new to HA and MariaDB, and I couldn't figure out how to do it.