How to reduce your database size and extend the life of your SD card

Yeah, this guide is old and doesn’t quite work anymore. I ran into the issue that the notifier script no longer generates output like the guide shows. It’s just a mess :frowning:

For MariaDB, I had to update the query to:

SELECT
    states_meta.entity_id,
    COUNT(*) AS cnt
FROM states
JOIN states_meta
    ON states.metadata_id = states_meta.metadata_id
GROUP BY
    states_meta.entity_id
ORDER BY
    cnt DESC;
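
And if you also want each entity’s share of the table, the “Updated query Dec 2023” that shows up URL-encoded in a log further down this thread decodes to roughly this (cnt_pct is the entity’s percentage of all rows in states):

SELECT
  COUNT(*) AS cnt,
  COUNT(*) * 100 / (SELECT COUNT(*) FROM states) AS cnt_pct,
  states_meta.entity_id
FROM states
INNER JOIN states_meta ON states.metadata_id = states_meta.metadata_id
GROUP BY states_meta.entity_id
ORDER BY cnt ASC;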

In my case, I was able to find the culprit of my rapid DB size bloat: a media_player device responsible for more than 90% of the table entries… I excluded those entities, took a backup, and then purged their records from the states table:

DELETE QUICK FROM `states` WHERE metadata_id IN (SELECT metadata_id FROM states_meta WHERE entity_id LIKE 'media_player%')
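
If you want to know how many rows a DELETE like that will touch before running it, a count with the same WHERE clause is a cheap sanity check (same example LIKE pattern as above):

SELECT COUNT(*) FROM states
WHERE metadata_id IN (
    SELECT metadata_id FROM states_meta
    WHERE entity_id LIKE 'media_player%'
);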

That works with the default SQLite database, too.
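
One caveat with SQLite: deleting rows doesn’t shrink the file on disk by itself; the freed pages are only reclaimed once the database is vacuumed (the recorder.purge service’s repack option does this for you). After a manual DELETE you can run:

VACUUM;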


I’ve put together a similar guide based on this post, including the correct SQL query for grouping and sorting the entities in the “states” table. You can check it out here: https://localsharespace.com/improve-home-assistants-database-performance/

However, I haven’t been able to figure out the query for the “events” table. If anyone knows what we should use for “events”, please let me know and I’ll update my guide. I’m also open to any other suggestions for my guide. Thanks!
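
For what it’s worth, the same schema change that moved entity IDs into states_meta also moved event types into an event_types table, so the equivalent query for “events” should be along these lines (a sketch, assuming the post-2023.4 schema; untested):

SELECT
    event_types.event_type,
    COUNT(*) AS cnt
FROM events
JOIN event_types
    ON events.event_type_id = event_types.event_type_id
GROUP BY
    event_types.event_type
ORDER BY
    cnt DESC;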


Nice. You should drop the YAML configuration for history, since that’s no longer needed.

I can’t access it. What do I need to do?

Yeah, me too.

Sorry, I think I had some of my security settings set too high. Please try again and see if it works now.


Thanks, it works now.

Any chance you can update the SQL queries in the first post (anyone can edit it)?

I get notified about this post so I just had a re-read of the OP.

I was wondering if there might be a way to compare the list of entities produced by the print_entities_to_file script with those in the notify.entity_log file?

That would make it less onerous if you get ‘slack’ about keeping the list up to date.

Just a thought, that I haven’t had time to think through properly myself yet.

My 2 cents on “extend the life of your SD card”:

  • You can use eMMC with an adapter in place of an SD card: SOLUTION : Hassio HA Database in Memory - #10 by lmagyar1973
  • I’ve made an add-on that runs MariaDB on tmpfs, so it is an in-memory database. See: In-memory MariaDB (MySQL) add-on for recorder/history integration
    • This is a fork of the official add-on!
    • This version uses tmpfs to store MariaDB databases in-memory.
    • Even though this is an in-memory database, it can automatically export (from memory to SD card) the database’s contents during backup, update, restart, or even periodically, and can automatically import (from SD card to memory) the contents when the add-on starts again. The database dump is gzip-ed before being written to storage to minimize SD-card wear.
    • It won’t protect you from power failures completely, though. After a power failure, when the add-on is restarted, it will import the last known exported database content. So when e.g. daily periodic export (from memory to SD card) is enabled, you will lose the latest sensor data within that day, but your long-term statistics will remain mostly intact.

But first you have to really minimize the amount of data you want to store in the in-memory database!

I store only the entities needed for graphs/charts, I turned on the regular daily export (memory → SD) to protect against power failures, and I turned on deleting the old statistics that I don’t want to display. It’s best to keep the DB size to a few tens of MB.

Is there a faster way of removing errant records from the database (MySQL)? I’ve got a series of sensors that are incredibly chatty (>3 million records in a week), and removing them via the purge entities service is glacially slow.

Is there a reason you need to keep those 3 million records per week? If my math is correct, that’s five per second. I don’t think the HA database is optimized for that sort of volume. And if you’re using an SD card (as in the title of this thread) then you’re really asking for trouble.

Best practice would be to exclude them. If you need to keep historical data, create templates which sample them less frequently. Keep those and exclude the source data.


Hell no, this is the result of reconfiguring a Zigbee sensor, after which I missed a number of entities when populating the recorder exclude list.
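
If you do delete directly (with Home Assistant stopped and a backup taken first), batching the DELETE keeps MariaDB from holding one enormous transaction; repeat it until zero rows are affected. This is only a sketch, and the LIKE pattern is a hypothetical placeholder for the chatty sensors:

DELETE FROM states
WHERE metadata_id IN (
    SELECT metadata_id FROM states_meta
    WHERE entity_id LIKE 'sensor.chatty_%'
)
LIMIT 50000;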

Doing so gives me a black page. And an error in the log:

Logger: homeassistant.components.http.security_filter
Source: components/http/security_filter.py:66
Integration: HTTP (documentation, issues)
First occurred: 15:19:15 (5 occurrences)
Last logged: 15:22:58

Filtered a request with unsafe byte query string: /api/hassio_ingress/4na91HttrXMhiO4gszMnRJBZUmR7gruQGT1Dl1UWCws/states/query/?ordering=&export_ordering=&sql=--+Updated+query+Dec+2023%0D%0ASELECT%0D%0A++COUNT%28*%29+AS+cnt%2C%0D%0A++COUNT%28*%29+*+100+%2F+%28SELECT+COUNT%28*%29+FROM+states%29+AS+cnt_pct%2C%0D%0A++states_meta.entity_id%0D%0AFROM+states%0D%0AINNER+JOIN+states_meta+ON+states.metadata_id%3Dstates_meta.metadata_id%0D%0AGROUP+BY+states_meta.entity_id%0D%0AORDER+BY+cnt+ASC
Filtered a request with unsafe byte query string: /api/hassio_ingress/4na91HttrXMhiO4gszMnRJBZUmR7gruQGT1Dl1UWCws/states/query/?ordering=&export_ordering=&sql=SELECT%0D%0A++COUNT%28*%29+AS+cnt%2C%0D%0A++COUNT%28*%29+*+100+%2F+%28SELECT+COUNT%28*%29+FROM+states%29+AS+cnt_pct%2C%0D%0A++states_meta.entity_id%0D%0AFROM+states%0D%0AINNER+JOIN+states_meta+ON+states.metadata_id%3Dstates_meta.metadata_id%0D%0AGROUP+BY+states_meta.entity_id%0D%0AORDER+BY+cnt+ASC
Filtered a request with unsafe byte query string: /api/hassio_ingress/4na91HttrXMhiO4gszMnRJBZUmR7gruQGT1Dl1UWCws/states/query/?ordering=&export_ordering=&sql=SELECT+%0D%0A++++entity_id%2C%0D%0A++++COUNT%28*%29+AS+cnt%0D%0AFROM+states%0D%0AGROUP+BY%0D%0A++++entity_id%0D%0AORDER+BY%0D%0A++++COUNT%28*%29+DESC%3B

What’s wrong?

I think the database has changed since then. Check out this post.
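
Also, judging by that log, the requests are being rejected by Home Assistant’s HTTP security filter because the URL-encoded SQL contains line breaks (%0D%0A), which the filter treats as unsafe bytes. If that’s the cause, submitting the query as a single line may get it through, e.g.:

SELECT states_meta.entity_id, COUNT(*) AS cnt FROM states JOIN states_meta ON states.metadata_id = states_meta.metadata_id GROUP BY states_meta.entity_id ORDER BY cnt DESC;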

Same error. Black screen…

ERROR: column "id" does not exist

I’m testing out compression and wondering where the ID column has gone?

Anyone using compression?

Testing with this:

ALTER TABLE ltss SET (
	timescaledb.compress,
	timescaledb.compress_orderby = 'time DESC',
	timescaledb.compress_segmentby = 'entity_id'
);

Changed chunks to 12 hours.
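
If it helps anyone following along, the chunk interval and automatic compression of older chunks can be set with standard TimescaleDB calls like these (a sketch assuming TimescaleDB 2.x and the ltss hypertable; the 7-day compress-after window is an arbitrary example):

SELECT set_chunk_time_interval('ltss', INTERVAL '12 hours');
SELECT add_compression_policy('ltss', INTERVAL '7 days');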