How to keep your recorder database size under control

Indeed, I used just one of those kinds of sensors; the rest is excluded.

I did not know that; I thought it would not be able to calculate energy data etc., or trigger automations when using ‘is off for x minutes’…

Do you mean the one from the Blueprint / automation?

Thanks!

  1. Install hdparm
  2. Read the NVMe/SMART statistics for your SSD.
  3. I would be very surprised if your SSD is not about to die. Look for something like wear levelling remaining (effectively lifetime remaining) or reserve blocks used (internal SSD reserve blocks that are used to replace blocks that have died).

Sorry, what I meant is that you might lose functionality, but the core of HA (being able to control your home) will still work. For sure, the Energy Dashboard will not work without recorder, because it relies on long-term statistics data (which in turn relies on recorder).

I mean this one:

Is the ‘logbook’, with a lot of useless info, also kept in the recorder database file?

After lots of trying, I came up with the following settings:

#
# https://www.home-assistant.io/integrations/recorder/
#
recorder:
  purge_keep_days: 3
  auto_purge: false
  db_url: sqlite:////config/home-assistant_v2.db
  exclude:
    domains:
      - device_tracker
      - media_player
      - uptime
      - time_date
      - worldclock
    entity_globs:
      - sensor.clock*
      - sensor.date*
      - sensor.time*
      - sensor.uptime*
      - weather.forecast*
      - sensor.afvalinfo*
    entities:
      - sensor.electricity_meter_tijdstip
      - sensor.home_assistant_v2_db_laatst_bijgewerkt
      - sensor.system_monitor_disk_free
      - sensor.system_monitor_disk_use
      - sensor.system_monitor_last_boot
      - sensor.system_monitor_load_1m
      - sensor.system_monitor_load_5m
      - sensor.system_monitor_load_15m
      - sensor.system_monitor_memory_free
      - sensor.system_monitor_memory_use
      - sensor.system_monitor_network_in_end0
      - sensor.system_monitor_network_out_end0
      - sensor.system_monitor_network_throughput_in_end0
      - sensor.system_monitor_network_throughput_out_end0
      - sensor.system_monitor_packets_in_end0
      - sensor.system_monitor_packets_out_end0
      - sensor.system_monitor_swap_free
      - sensor.system_monitor_swap_use
      - sensor.aeotec_z_stick_gen7_average_background_rssi_channel_1
      - sensor.aeotec_z_stick_gen7_average_background_rssi_channel_2
      - sensor.aeotec_z_stick_gen7_average_background_rssi_channel_3

I tried a lot of things. The first was excluding things from the recorder: my DB decreased a lot in size, but the problem was that it kept growing again; even with the exclusions, the size added up and up until it reached the same size as my starting point.

The biggest states were from the system monitor sensors and the energy sensors.
As suggested by @illuzn, I started to use the automation to keep energy data for just one day. This seems to cause no problems in the graphs and lowers the DB size a lot.

Also, why do we need the default 10 days of data? I set it to 3 days and this lowered the size a lot again.

I added two more automations I found in the old Home Assistant configuration from Frenck:

id: "system_recorder_repack"
alias: "System - Recorder Repack"
description: >-
  Sunday morning, early morning, repack the recorder database.
  This keeps the disk usage low and nice by reclaiming space.

trigger:
  - platform: time
    at: "05:55"

condition:
  - condition: time
    weekday:
      - sun

action:
  - service: recorder.purge
    data:
      repack: true
      apply_filter: true

mode: single

id: "system_recorder_purge"
alias: "System - Recorder Purge"
description: >-
  Purge my recorder database every hour, to keep things nice and tight.
  While Home Assistant does this daily, I like to do it every hour.

trigger:
  - platform: time_pattern
    minutes: 12

action:
  - service: recorder.purge
    data:
      repack: false
      apply_filter: false

mode: single

For now my DB stays at around 65 MB in size; I will have to see what it does a week from now.

PS: Sorry for the bad grammar sometimes.

If anyone has suggestions or improvements, please let me know!
Thanks for the help and info on this page :slight_smile:

Edit: Note the historical graph from the database sensor. Some of my energy sensors have this as well, but a lot do not have any historical data at all, just the last few days. Any idea why some sensors have historical data and some do not?

It just depends on how often you think you will need the to-the-second data. It’s kind of handy for bug checking and the like to know what is causing potential issues. Personally, I keep 1 month because I’m not watching HA every day, and if I spot an issue 2 weeks after it occurs it’s good to know what’s going on.
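For reference, that just comes down to raising purge_keep_days in the recorder config shown earlier, e.g.:

recorder:
  purge_keep_days: 31  # keep roughly one month of detailed history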

Be aware, this causes a lot of unnecessary disk writes and may wear out your SSD/HDD/SD card even quicker. I do this once per day, and realistically, a 200 MB database is not a big deal (mine is around 600 MB with 30 days of data).

Sensors must:

  • be recorded by the recorder
  • provide a valid “state_class”

in order for statistics to be provided. I suspect some of your sensors are not properly providing a state_class. You can use a “customize” setting in your configuration.yaml to fix this.
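As a rough sketch, such a customize entry could look like this (the entity name here is hypothetical; pick the state_class that actually matches your sensor):

homeassistant:
  customize:
    sensor.daily_energy_example:      # hypothetical entity that is missing its state_class
      state_class: total_increasing   # e.g. for a cumulative energy counter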

That’s true; however, if I don’t check HA I don’t update or change anything, so I hope no new bugs appear.
I am still figuring out how long to keep the data…

Thanks, I thought that only the repack would cause write wear on the disk.
Then I’d better set it back to the default, which is once a day.
Also, maybe I should start accepting 100+ MB for a database :stuck_out_tongue:
I thought it would be slow with a big database, and backups do get large.

For energy sensors I asked here:
How to make a sensor keep LTS and last period LTS - Configuration - Home Assistant Community (home-assistant.io)

template:
  - sensor:
      - name: "Daily Energy"
        unit_of_measurement: "kWh"
        device_class: energy
        state_class: total_increasing
        state: "{{ states('sensor.daily_energy_offpeak')|float + states('sensor.daily_energy_peak')|float }}"
        availability: "{{ has_value('sensor.daily_energy_offpeak') and has_value('sensor.daily_energy_peak') }}"

This is working :slight_smile:

I changed your automation to purge entity_globs, but now it’s not doing anything.

id: "system_recorder_purge_entities"
alias: "System - Recorder Purge Entities"
description: >-
  Cleans up sensors that log too much but that I still want some data from.

trigger:
  - platform: time
    at: "04:44"

action:
  - service: recorder.purge_entities
    data:
      domains: []
      entity_globs: >-
        sensor.daily* sensor.yesterday* sensor.week* sensor.weekly*
        sensor.last_week* sensor.month* sensor.monthly* sensor.last_month*
        sensor.year* sensor.yearly* sensor.last_year* sensor.electricity*
        sensor.energy* sensor.system_monitor* sensor.last_week*
      keep_days: 1

mode: single

Do I really need to add all the entities manually, sensor by sensor? :open_mouth:

Edit: it does not purge any of the entity_globs, only what is in

    target:
      entity_id:
        - sensor.daily_energy

I opened a DB file in DB Browser for SQLite.
Is it normal that the “states” table contains NULL in the “entity_id” column?

Yes, this is normal behavior. For a while now the entity_id has been stored in the states_meta table and is referenced by the metadata_id (it should be the last column in a row). This saves a lot of space, since it is stored as a small two-byte number instead of a text string that is roughly a factor of 5-20 larger.
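As a minimal sketch, you can see which entity a state row belongs to by joining against states_meta (column names as discussed above):

-- Show the entity_id for state rows via the states_meta join
SELECT states_meta.entity_id, states.state
FROM states
JOIN states_meta ON states.metadata_id = states_meta.metadata_id
LIMIT 20;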

Thanks a lot for the reply.
Here is why I asked this question:

Due to some issues with processing Jinja templates in a custom card, I observed this chain reaction:
huge CPU load → rapidly increasing DB
The DB size jumped from ~1 GB to ~8 GB within a few hours.

Why these templates and the card caused this huge CPU load is one issue.
Why this huge CPU load caused a rapid, constant flooding of the DB is another issue (I have no idea; maybe some integration, stock or custom, reacted in this abnormal way).

I called the “purge+repack” service; it reduced the DB size to ~7 GB.
This is still abnormal. So I am trying to:
– find out why the DB size increased,
– find out why the DB size was not reduced by the “purge+repack” service (issue on GitHub),
– find out how to reduce the DB myself, if the “standard purge service” failed.

What I did:

  1. Opened the DB file in DB Browser for SQLite.
  2. Ran a query posted here.
  3. It gave me a table, which I pasted into Excel and sorted by the “bytes” column:
    [screenshot of the sorted table]
    where the 1st row is a calculated sum (MB), i.e. all entities occupy ~49 MB in the DB.

I assumed that the “bytes” column stands for the size of the stored data, since the OP did not add any descriptions in the first post.

There is a big difference between 49 MB and 7 GB, so I wonder what could be occupying all this space.

For me this script is very useful, as it focuses on the size of the entities in the states table in the database rather than on the number of entries. However, as it gives me the same error that others have seen, I wonder if anyone has managed to fix it up? I see there are fixes for the scripts that count the number of entries in the states and statistics tables, but my SQL is not good enough to do this for this one :slight_smile:

I am also interested to know how I can get the size of the data stored for events.
When Python goes crazy in my setup, I see tons of error messages in the log.
The DB then grows very fast.
And I think the DB grows due to data stored for events:

[screenshot of the event records]

What are these records for?
How can I get the size of this data?

Fixed:

SELECT
  COUNT(state_id) AS cnt,
  COUNT(state_id) * 100 / (
    SELECT
      COUNT(state_id)
    FROM
      states
  ) AS cnt_pct,
  SUM(
    LENGTH(state_attributes.shared_attrs)
  ) AS bytes,
  SUM(
    LENGTH(state_attributes.shared_attrs)
  ) * 100 / (
    SELECT
      SUM(
        LENGTH(state_attributes.shared_attrs)
      )
    FROM
      states
      JOIN state_attributes ON states.attributes_id = state_attributes.attributes_id
  ) AS bytes_pct,
  states_meta.entity_id
FROM
  states
LEFT JOIN state_attributes ON states.attributes_id = state_attributes.attributes_id
LEFT JOIN states_meta ON states.metadata_id = states_meta.metadata_id
GROUP BY
  states.metadata_id, states_meta.entity_id
ORDER BY
  cnt DESC;

(Tested on Postgres.)
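For the earlier question about the size of event data, here is an untested sketch along the same lines; the table and column names (events, event_data.shared_data, event_types.event_type) are assumed from recent recorder schema versions:

-- Rough per-event-type counts and payload sizes (schema names assumed, not verified here)
SELECT
  COUNT(events.event_id) AS cnt,
  SUM(LENGTH(event_data.shared_data)) AS bytes,
  event_types.event_type
FROM events
LEFT JOIN event_data ON events.data_id = event_data.data_id
LEFT JOIN event_types ON events.event_type_id = event_types.event_type_id
GROUP BY events.event_type_id, event_types.event_type
ORDER BY bytes DESC;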

Thank you for posting that! I’d been meaning to get around to something like that myself.

I learned something very interesting from that SQL statement. There’s one entity which is taking up 36% of this table, according to the output of this statement: binary_sensor.internet_connection.

Huh? I figured a binary sensor really only needs one bit! But no. When I look at it in Developer Tools / States, it shows a bunch of attributes. So “binary sensor” is really a misnomer. It can pack a lot of attribute data.

I also assumed that this sensor state should change only very rarely. My ISP is normally connected, sometimes for weeks on end. But with each ping, a bunch of “round trip time” attributes change. So this one state change is creating 28% of the state attribute change records in my database!

If this is working the way I think it is, I need to look beyond just how many times a state changes, to how many attributes are attached to it and how often they change.

For me, this opens up a whole new dimension of Recorder database management. I figured I’d mention this and see what everyone else is experiencing.

You’re welcome!

I also picked up on the ping sensors. I still need to work through everything else, but I also want to find the sensors with lots of attributes that also cause a lot of updates.

Yeah, the ping integration keeps all round trip time attribute changes in the database, and there’s no configuration option to change that. In my configuration, I create an equivalent template binary_sensor that only mirrors the ping binary_sensor value and does not include the ever changing RTT attributes. Then I exclude the ping binary_sensor from recorder and include the template binary_sensor, so I still have a history of when the ping fails.
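A minimal sketch of that mirror-and-exclude approach, using the binary_sensor.internet_connection entity mentioned above (the name and device_class of the template sensor are illustrative):

template:
  - binary_sensor:
      - name: "Internet Connection (recorded)"
        device_class: connectivity      # illustrative choice
        state: "{{ is_state('binary_sensor.internet_connection', 'on') }}"

recorder:
  exclude:
    entities:
      - binary_sensor.internet_connection   # drop the ping sensor with its ever-changing RTT attributes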

In the past, all media_player attributes were also recorded in the database. This was problematic because it included every media position update and made the database huge if you were, e.g., streaming media all day. There is a mechanism now for integrations to exclude specific attributes from recorder, but this must be implemented in the integration code itself.

It’s a bit annoying that the attribute history can’t be shown via the default history graph card or via the History page even though it’s stored in the database, though luckily mini-graph-card supports showing attributes.
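For example, with mini-graph-card an attribute can be plotted roughly like this (the attribute name is assumed from the ping sensor; check Developer Tools → States for the real one):

type: custom:mini-graph-card
entities:
  - entity: binary_sensor.internet_connection
    attribute: round_trip_time_avg   # assumed attribute name
hours_to_show: 24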

Thanks again, I already had creating a template for just the state, and excluding the source entity, on my schedule for today. You beat me to it! I wish HA had hired a DBA instead of a UI designer.

Update: That worked. After excluding the offending entity and two others from the top-ten list, I did a purge of those entities, then a full purge and repack. Sure enough, the database shrunk by 39.5%, about the amount the SQL results showed those three entities to be using.

It seems crazy to have to create a binary sensor template to mirror a binary sensor, but it works.

There’s one more crazy trick that you may wish to consider. If you did want some RTT history, but not at the same frequency as the ping check, then you can set up a template sensor that is triggered by a time pattern that mirrors the relevant attribute. Obviously this will increase the database size, but not at the same rate that it did before.
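A sketch of that trick, assuming the ping sensor exposes a round_trip_time_avg attribute (verify the actual attribute name in Developer Tools → States):

template:
  - trigger:
      - platform: time_pattern
        minutes: "/15"                 # sample every 15 minutes instead of on every ping
    sensor:
      - name: "Internet RTT (sampled)"
        unit_of_measurement: "ms"
        state: "{{ state_attr('binary_sensor.internet_connection', 'round_trip_time_avg') }}"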

An alternative is to use a device_tracker for ping instead of a binary_sensor.
The device_tracker entity does not contain these round-trip attributes.
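A rough sketch of the ping device_tracker setup (the host name and IP are placeholders):

device_tracker:
  - platform: ping
    hosts:
      internet_gateway: 192.168.1.1   # placeholder name and address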

There are many integrations which create frequently changing attributes, like:

  1. Open Hardware Monitor - creates plenty of useless attributes (issue).
  2. Composite device tracker - creates a “last_seen” attribute inside the “device_tracker” entity (issue).
  3. Similarly:
  • “last_time_reachable” in ASUSWRT - the attribute was excluded, but the DB still stores the same state every 30 seconds (issue)
  • “updated” in starline - here it is even worse: the integration consecutively writes records with the same state & attributes (issue).
  4. iPhoneDetect platform - writes records with the same states & attributes (issue).

Also, here is an issue for ping binary_sensor.

So, useless writing to the DB is not considered to be something bad by many platforms…

Yeah, I was quite surprised when I saw all the garbage in the state_attributes table. For example, the speedtest integration has lots of records, each with a long JSON string containing all kinds of static data like class, server_name, server_country, server_id, unit_of_measurement, device_class, icon and friendly_name.
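If you want to eyeball those JSON strings for a single entity, a quick sketch (the entity_id is just an example; adjust it to your own speedtest sensor):

-- Show a few raw shared_attrs payloads for one entity (entity_id is a placeholder)
SELECT state_attributes.shared_attrs
FROM states
JOIN state_attributes ON states.attributes_id = state_attributes.attributes_id
JOIN states_meta ON states.metadata_id = states_meta.metadata_id
WHERE states_meta.entity_id = 'sensor.speedtest_download'
LIMIT 5;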

Likewise with climate entities, which record every possible thermostat option and value, well beyond anything I’d care about like the actual and set temperatures.

It would seem that lots and lots of static data is recorded, over and over again, with each state change. Doesn’t seem like very good database design to me.