How large is your database (Database Optimization)?

We are looking to improve database performance. Since Home Assistant does not collect usage data, we do not know what the average database size is. Please complete the poll and post the size of your database below.

For SQLite:

Using a shell:

# ls -lh /config/home-assistant_v2.db 
-rwxr-xr-x    1 root     root      244.7M Mar 25 16:55 /config/home-assistant_v2.db
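If you'd rather measure it programmatically, SQLite can also report its own logical size via the `page_count` and `page_size` pragmas. A minimal Python sketch using the standard library: it builds a small throwaway database, since `/config/home-assistant_v2.db` is only the default Home Assistant path and may not match your install. The same `SELECT` works against the real file.

```python
import os
import sqlite3
import tempfile

# Demo database standing in for /config/home-assistant_v2.db
# (the real path is an assumption about a default HA install).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE states (entity_id TEXT, state TEXT)")
con.executemany(
    "INSERT INTO states VALUES (?, ?)",
    [("sensor.demo", str(i)) for i in range(1000)],
)
con.commit()

# On-disk size, as `ls -l` would report it:
disk_bytes = os.path.getsize(path)

# Logical size according to SQLite itself (page_count * page_size);
# for a freshly committed non-WAL database the two agree.
(logical_bytes,) = con.execute(
    "SELECT page_count * page_size FROM pragma_page_count(), pragma_page_size()"
).fetchone()
con.close()

print(disk_bytes, logical_bytes)
```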

For MySQL/MariaDB:

Using phpMyAdmin: click on the homeassistant database and check the Sum row of the Size column.

Using the CLI:

SELECT table_schema AS "DB Name",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;
  • 0-1 GiB
  • 1-2 GiB
  • 2-4 GiB
  • 4-8 GiB
  • 8-16 GiB
  • 16-32 GiB
  • 32-64 GiB
  • More than 64 GiB
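Note that the poll buckets are in GiB while the query above reports "MB" (really MiB, since it divides by 1024 twice), and BusyBox `ls -h` also uses 1024-based units. A quick conversion sketch to pick the right bucket, using the 244.7M file from the example above:

```python
# Convert a size in bytes to GiB (1 GiB = 1024**3 bytes) to pick a poll bucket.
def to_gib(num_bytes):
    return num_bytes / 1024**3

# e.g. the 244.7M (MiB) SQLite file from the `ls -lh` example:
size_gib = to_gib(244.7 * 1024**2)
print(round(size_gib, 2))  # well inside the "0-1 GiB" bucket
```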



Might want to add a separate poll regarding how many days you store. I store 30, so my database is huge. I would assume many people store less than 30 days.

ATM my DB is only 1.6 GB (with 10 days of data, for some tests). I started a new MariaDB about 2 weeks ago after some issues with the old one. The old DB was ±20 GB with 20 days of data.

Thank you for the feedback. If you have tried the 0.112 beta, please post how long it took to upgrade your database schema.

It took less than a minute. It's a MariaDB on another server where I keep only 2 days. (That is my test HA.)

The MariaDB Hassio addon on the same machine (a Celeron J4005 NUC), with 7 days of recording for 500+ entities (recording is enabled for roughly a quarter of all HA entities), varies between 3.5 and 4 GB. Most of the entities with recording enabled are SNMP and MQTT sensors.

Still on 0.111.4

I keep 30 days of history, worth ~36 GB of data in MariaDB on a dedicated SSD volume of a Synology NAS… a setup not for everyone. Quite a lot of sensors are excluded from the recorder because they generate frequently changing data (every second).
If I can suggest something, it would be the ability to configure recorder history per entity… There are entities I'd like to keep history for an extended period of time, but for some it is worthless; a day or three would be more than sufficient. Currently it is only possible to exclude them entirely or keep their history for as long as everything else's. So some sort of storage 'tiers' would be nice…
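Until per-entity retention exists, the closest available knobs are the recorder's global `purge_keep_days` plus per-entity excludes. A hedged `configuration.yaml` sketch, assuming Home Assistant's standard recorder integration; the entity name below is made up for illustration:

```yaml
# Sketch only: global retention plus an exclude list -- there is no
# per-entity retention tier, so chatty sensors must be excluded outright.
recorder:
  purge_keep_days: 3          # one global retention window for everything
  exclude:
    entities:
      - sensor.power_meter_current   # hypothetical per-second sensor
```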


About 600 MB for about 200 entities with a 7-day keep. I use InfluxDB for long-term storage: about 1.5 GB for 1 year so far (infinite keep), and that includes all entities (probably over 500). I think InfluxDB is better suited for large HA records. Honestly, I only use MariaDB for history charts in Lovelace (1- to 3-day charts) and to handle reboots, so I could purge at 4 days and be fine, but 600 MB isn't much either way for an SSD.

Do beware that a lot of DB size issues are caused by the high reporting rates of many power sensors (like those Z-Wave outlets that also read current). I had to get rid of some of the ones I was using for that reason; the stupid things would report like 10 times per second even when not in use (and I never succeeded in changing that with Z-Wave commands either). I replaced them with ESPHome devices, which make it very easy to control the report frequency.
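For anyone going the same route, the relevant ESPHome knobs are the sensor's `update_interval` and its `filters`. A hedged sketch; the platform and pin are placeholders, not the poster's actual hardware:

```yaml
# ESPHome sketch: sample less often and throttle what gets published to HA.
sensor:
  - platform: adc            # example platform; substitute your power sensor
    pin: A0
    name: "Plug Power"
    update_interval: 10s     # read every 10 s instead of many times a second
    filters:
      - throttle: 10s        # drop extra readings if the device still floods
```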


On my production server (SQLite, still on 0.111.4) the DB is 716,340 KB with 2 days of data. (Stuff I want to keep is in InfluxDB.)

home-assistant_v2.db => 253 MB (265,961,472 bytes), 34 entities with 30 day keep.

If you get a chance, please keep track of your database size before updating to 0.112 and then check again 11 days later to compare.

If you have a large database (more than 16 GiB), be aware that the 0.112 update may take a while to complete the database schema changes.

Does this apply to mariadb, or just sqlite?

All database engines should see a noticeable performance benefit and size reduction in 0.112.


Will it make the logbook cache (https://github.com/amelchio/logbook_cache) unnecessary?

logbook_cache shouldn't be needed anymore; however, it is going to depend on your disk performance and various other system attributes, so YMMV.

There are more logbook performance improvements coming in 0.113 (twice as fast as 0.112 or better if you have a lot of sensor data)


Awesome! :grin:

The database components need to be rewritten to store data relationally, instead of using the database as essentially a flat file storing JSON strings. :slight_smile: That in itself would result in a 100x decrease in database size and about a 10,000x increase in performance.

When that is done, all these issues with the database go away: a user no longer needs to worry about excluding certain entities, history purges, recorder settings, etc. Storing data as native DB data types solves all of them.

As an example, storing power monitoring data for a month (10 integer values every 2 seconds) bloated my HA database by 4 GB and severely reduced history performance. Storing those same values in a MySQL DB using native data types over the same period used only about 25 MB.
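The effect is easy to reproduce in miniature with SQLite from the Python standard library: store the same ten integer readings per sample either as a JSON blob per row (roughly the recorder's approach, though this is a simplified schema, not Home Assistant's real one) or as native integer columns, then compare file sizes.

```python
import json
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
readings = [[i % 100] * 10 for i in range(5000)]  # 5000 samples of 10 ints

# Variant 1: one JSON blob per sample (simplified recorder-style schema).
json_db = os.path.join(tmp, "json.db")
con = sqlite3.connect(json_db)
con.execute("CREATE TABLE events (event_data TEXT)")
con.executemany(
    "INSERT INTO events VALUES (?)",
    [(json.dumps({"entity_id": "sensor.power", "values": r}),) for r in readings],
)
con.commit()
con.close()

# Variant 2: the same data in native integer columns.
native_db = os.path.join(tmp, "native.db")
con = sqlite3.connect(native_db)
cols = ", ".join(f"v{i} INTEGER" for i in range(10))
con.execute(f"CREATE TABLE samples ({cols})")
con.executemany(
    f"INSERT INTO samples VALUES ({', '.join('?' * 10)})",
    [tuple(r) for r in readings],
)
con.commit()
con.close()

json_size = os.path.getsize(json_db)
native_size = os.path.getsize(native_db)
print(json_size, native_size)  # the JSON variant is several times larger
```

The ratio here is much smaller than the 4 GB vs 25 MB reported above, since real state rows also carry repeated attribute JSON, but the direction is the same.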


How do you propose to handle arbitrary attributes?

After updating to 0.112, my DB is 1.1 GB, and logbook and history are lightning fast.

:+1:

As the DB purge happens over time, the size should decrease further, since we are storing significantly less data in the state_changed event rows created with 0.112.