SOLUTION : Hassio HA Database in Memory

OK,

I would like to move the HA database to tmpfs to save writes to the SD card, and I’m not worried about losing the DB on restart. So, as an experiment, I looked to see if there were any partitions mounted as tmpfs on the Hassio install:

Filesystem                Size      Used Available Use% Mounted on
overlay                  28.6G      8.5G     18.6G  31% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                   462.0M         0    462.0M   0% /sys/fs/cgroup
shm                      64.0M     12.0K     64.0M   0% /dev/shm
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /share
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /config
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /ssl
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /addons
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /backup
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /data
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /run/audio
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /etc/asound.conf
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /etc/resolv.conf
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /etc/hostname
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /etc/hosts
/dev/mmcblk0p8           28.6G      8.5G     18.6G  31% /etc/pulse/client.conf
tmpfs                   462.0M         0    462.0M   0% /proc/asound
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                   462.0M         0    462.0M   0% /sys/firmware
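A quicker way to get the same answer (a hypothetical check, assuming a Linux host with procfs available) is to filter the kernel's mount table for tmpfs entries instead of eyeballing the full `df -h` output:

```shell
# Show only the tmpfs mounts from the kernel's mount table.
# Anything listed here (typically /dev, /dev/shm, etc.) is RAM-backed
# and its contents are lost on reboot.
grep tmpfs /proc/mounts
```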

and then in the recorder config set:

recorder:
   purge_keep_days: 1
   db_url: sqlite:////dev/home-assistant_v2.db

A quick restart and everything is up and running, with the Logbook and History pages on the frontend responding more or less instantly, cool…

However, if I cd into /dev there is no home-assistant_v2.db listed… so where is the file?

(Clearly I don’t understand how tmpfs works; all I know is that it’s supposed to reside in volatile memory…)


Found a very neat solution to this, can’t believe it’s not in the docs:

recorder:
  db_url: 'sqlite:///:memory:'

Simple as that: a full in-memory DB, no more writes to the SD card, and instant history/logbook pages. Obviously you have to watch how much gets logged by the recorder, and the history is wiped on each restart.
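What `sqlite:///:memory:` actually does can be sketched with Python's built-in `sqlite3` module (a minimal illustration, not Home Assistant's actual recorder code; the table and entity names are made up): each connection gets its own RAM-only database, which vanishes when the connection closes.

```python
import sqlite3

# The ":memory:" database lives only in RAM and disappears when the
# connection (in HA's case, the recorder) goes away.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (entity_id TEXT, state TEXT)")
conn.execute("INSERT INTO states VALUES ('light.kitchen', 'on')")
rows = conn.execute("SELECT state FROM states").fetchall()
print(rows)  # [('on',)]
conn.close()

# A new connection gets a fresh, empty database -- the "history wiped
# on each restart" behaviour described above.
conn2 = sqlite3.connect(":memory:")
tables = conn2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # []
conn2.close()
```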


Hi there 🙂 I came to this solution because I got scared into worrying about the longevity of my SD card (I didn’t even realise that was something I had to worry about). Is there a way to verify this is now working?

My system:

Rpi 4
Home Assistant 0.117.6
HassOS 4.16

My config:

# keep recorder in memory to prevent frying SD card
recorder:
  purge_keep_days: 2
  db_url: 'sqlite:///:memory:'

Look at two days worth of data on your history page.

Thanks tom_l,

Okay I was able to confirm that the database was now stored in memory simply by restarting Home Assistant and noticing the history logs were reset and now rolling from the moment of restart.

I will check in 2 days to make sure the database purge is successful.

Thanks

I know this is an old thread, but is there a way to check in real time what is the actual size of the DB in memory? It’s working and I can see the general impact of the DB to memory, but not the actual size of it.

Did you manage to find a way to get your db size in real time?

Using the HA SQL Sensor you can run a query to return the database size, by putting this in your configuration.yaml:

sensor:
  - platform: sql
    queries:
      - name: DB Size
        query: 'SELECT ROUND(page_count * page_size / 1024 / 1024, 1) as size FROM pragma_page_count(), pragma_page_size();'
        column: "size"
        unit_of_measurement: "MB"

Unfortunately I can’t get this to work with my sqlite database that is also running in memory, because I have to provide the memory URL so that the sensor connects to the correct database, and the moment I do that the returned value is wrong. I asked a question to the community, but no reply yet.
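The sensor's query can be tried outside Home Assistant with Python's `sqlite3` module (a standalone sketch; the table name and row contents are invented purely to grow the database past 1 MB). One thing worth knowing: SQLite's `/` performs integer division when both operands are integers, so the byte count is truncated to whole MB before `ROUND` ever applies.

```python
import sqlite3

# Run the "DB Size" sensor's query against a throwaway SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x TEXT)")
# Insert ~2 MB of filler data so the size comes out above zero.
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 100,) for _ in range(20000)])

(size_mb,) = conn.execute(
    "SELECT ROUND(page_count * page_size / 1024 / 1024, 1) AS size "
    "FROM pragma_page_count(), pragma_page_size()"
).fetchone()
print(f"DB size: {size_mb} MB")
conn.close()
```

The `pragma_page_count()` / `pragma_page_size()` table-valued functions require SQLite 3.16 or newer, which any recent Python ships with.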


Hey guys.

Also now running my DB in memory with a 10 day purge interval. As stated above, the DB size sensor does not work.

Did you ever get this working?

PS: Is the in-memory DB stable? Don’t you get memory issues, even with purge days set?

Thx

There is also this add-on, which runs MariaDB in memory:

This has a few optimizations for running a database in memory. However, you need to specify how much memory is allocated to it.

I am surprised there is not more information available about this. Running the HA database in memory should be a great improvement for anybody who runs Home Assistant off an SD card and wants a few days of history for graphs and stuff.

FYI: Alongside my in-memory DB, I use the Rock Pi’s eMMC module with an SD card adapter. If the system is critical, I prefer not to get random SD card failures, even if this is more expensive than an SD card. There are a lot of log files, entity states, etc. also written to the SD card. I’ve bought these:

In the newly released HA version 2022.5, using the default recorder with an in-memory database is no longer supported, because it is apparently not thread-safe:

So it seems SD card users have three options:

Example of a minimal recorder setup:

recorder:
  commit_interval: 60
  # Define entities, which are INCLUDED in short-term logging
  include:
    domains:
      - light
    entity_globs:
      - sensor.myawesomeintegration_*
    entities:
      - sensor.myawesomespecificentity
      - sensor.myawesomespecificentity_2

FYI: I’ve added data retention functionality to the above mentioned In-memory MariaDB add-on.

The add-on will automatically export the homeassistant database’s content during a backup, update or restart, and will automatically import that content when the add-on starts again.

After a power failure, when the add-on is restarted, it will import the last known exported database content.

So in the case of normal HA tinkering (restarts, updates, etc.) it will behave like a persistent database; you will hardly notice any difference. In the case of a power failure, at least the long-term statistics will be more or less preserved.

This functionality is turned on by default, but can be disabled in the config.

I plan to add the possibility of some regular data dump, so even without regular backups, there will be regular data saving points.
