I upgraded to 2022.5.0 today and it looks like the in-memory option is no longer supported. Is there an alternate method to keep the recorder information in memory?
While I appreciate that the Home Assistant team is (finally) giving some attention to the excessive writing it used to do, I do feel like disabling in-memory SQLite is kind of abrupt.
Maybe it wasn’t the best solution, but we haven’t been given an alternative other than letting HA take control over the lifespan of my SSD (or SD card) again. (I’ve sacrificed two SD cards and one SSD in the past, and I didn’t even want all that data.)
Hi all, I also ran into this with the recent 2022.5.0 update, and after playing around a bit trying to mount a ramdisk, I ended up just storing the database in /dev/shm, which seems to be working.
Not sure how it’ll hold up over time, but if you were using 'sqlite:///:memory:', the following recorder config seems to get it back into memory on the 2022.5.0 release:
recorder:
  db_url: 'sqlite:////dev/shm/ha-recorder-v2.db'
  [ other options ]
I’m new to Home Assistant (fantastic software), so I’m not sure how this will work over time. My setup (Pi 4 / 4 GB / SSD dedicated to HA) sends the data to the InfluxDB add-on for longer-term storage.
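One caveat with this approach: /dev/shm is a tmpfs, so the database disappears on every reboot or power loss. If you want to keep anything from it, you have to copy it back to persistent storage yourself. Here is a minimal sketch of such a snapshot using Python’s built-in sqlite3 backup API; both paths are assumptions, not anything Home Assistant provides.

```python
import sqlite3

# Both paths are assumptions; adjust them to your setup.
RAM_DB = "/dev/shm/ha-recorder-v2.db"
DISK_COPY = "/config/ha-recorder-backup.db"

def snapshot(src: str, dst: str) -> None:
    """Copy the live SQLite database to persistent storage.

    sqlite3's backup API takes a consistent snapshot even while
    another process is writing to the source database.
    """
    with sqlite3.connect(src) as source, sqlite3.connect(dst) as target:
        source.backup(target)

# e.g. call snapshot(RAM_DB, DISK_COPY) from a cron job, or from a
# shell_command triggered by an automation before shutdown.
```

This is safer than a plain file copy, which can catch the database mid-write and produce a corrupt snapshot.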
Just to be sure, is any recorder or logger config change required to fully benefit from the newly optimized data writing? I did not see anything suggesting it in your code changes, but I won’t rule out an oversight on my end.
Also, does this change somehow allow purging the events table? I still get an error message when I try to empty the events table via phpMyAdmin.
#1701 - Cannot truncate a table referenced in a foreign key constraint (`homeassistant`.`states`, CONSTRAINT `states_ibfk_1` FOREIGN KEY (`event_id`) REFERENCES `homeassistant`.`events` (`event_id`))
Or would I need to force-empty it once because of the old way of writing, and could then safely empty it in the future?
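For what it’s worth, the error means rows in states still reference events via event_id, and MySQL’s TRUNCATE refuses to run on any table that is the target of a foreign key. The usual workaround is to clear the references first and then use DELETE instead of TRUNCATE (or to disable foreign key checks for the session, at your own risk). I don’t have a MySQL instance handy, so here is the same constraint reproduced with Python’s built-in sqlite3 module on a deliberately simplified version of the schema:

```python
import sqlite3

# Simplified sketch of the HA schema: states.event_id references
# events.event_id, so events cannot be emptied while states still
# points at it.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE events (event_id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE states (
    state_id INTEGER PRIMARY KEY,
    event_id INTEGER REFERENCES events(event_id))""")
con.execute("INSERT INTO events VALUES (1)")
con.execute("INSERT INTO states VALUES (1, 1)")

try:
    con.execute("DELETE FROM events")  # fails: still referenced
except sqlite3.IntegrityError as exc:
    print("blocked:", exc)

# Clearing the references first makes the delete succeed.
con.execute("UPDATE states SET event_id = NULL")
con.execute("DELETE FROM events")
```

The same two-step order (NULL out states.event_id, then delete from events) should apply on MySQL, but test it on a backup first.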
Ciao Denilson, many thanks for your contribution. I’m not only a newbie in the HA world; in general, I also have a very low background in IT topics… but I have a big passion, and I’m curious and willing to learn. So, as soon as I landed on your guide, I began following it. I’d say I almost succeeded in the first part of it, but I’m stuck on adding the scan_interval to the file-size sensor. I installed “File Size” via the integrations user interface; after some tries (the path I was entering was not allowed or invalid), I did it by adding the line
because I input the db path during the integration set-up in the user interface.
Then my reasoning is: Denilson added the filesize sensor via configuration.yaml, in the sensors section, so I’m assuming that my filesize sensor has been “written” somewhere else… so, where do I find it in order to add the scan_interval option?
Thanks for any suggestion anyone in the community can give me!
Hi! The reason is simple… I wrote this guide around version 2021.4, but starting on version 2022.4, the File Size integration is now available to set up from the UI. Anything being configured through the UI is saved inside /config/.storage/, and those files are automatically managed by Home Assistant itself.
As you can see, there have been plenty of changes, and I need to update/adapt the guide to the latest HA version. I’m just lacking the time to do so, as real-life stuff gets priority. To make it worse, any time I do end up dedicating to my HA installation has gone into updating it to the latest version and trying to debug this high CPU usage issue. I still haven’t found a solution for it, and I’ve spent many more hours on it than I wanted.
Thanks for this guide. Here’s an updated version of the “Viewing states usage” query that accounts for the move of attributes to a separate shared table.
SELECT
    COUNT(*) AS cnt,
    COUNT(*) * 100 / (SELECT COUNT(*) FROM states) AS cnt_pct,
    SUM(LENGTH(shared_attrs)) AS bytes,
    SUM(LENGTH(shared_attrs)) * 100 / (SELECT SUM(LENGTH(shared_attrs)) FROM state_attributes) AS bytes_pct,
    entity_id
FROM states
INNER JOIN state_attributes
    ON states.attributes_id = state_attributes.attributes_id
GROUP BY entity_id
ORDER BY cnt DESC
I haven’t dug into it, but the percentages from attributes will total more than 100%, since attributes can be shared across states.
If no one has made a Jupyter notebook available with these queries, I may publish one. It is a lot easier to dig into this data there.
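In the meantime, here is a plain-Python starting point, no notebook required. The database path is an assumption, and you should always run this against a copy of the file, never the live database the recorder is writing to.

```python
import sqlite3

# Path is an assumption: point this at a *copy* of your database.
DB_PATH = "home-assistant_v2.db"

# Same idea as the queries above: which entities produce the most rows?
QUERY = """
SELECT entity_id, COUNT(*) AS cnt
FROM states
GROUP BY entity_id
ORDER BY cnt DESC
LIMIT 10
"""

def top_entities(db_path: str) -> list:
    """Return (entity_id, row_count) pairs for the busiest entities."""
    with sqlite3.connect(db_path) as con:
        return con.execute(QUERY).fetchall()

# In a Jupyter notebook the same query drops straight into pandas:
#   df = pd.read_sql_query(QUERY, sqlite3.connect(DB_PATH))
```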
aqara_plug_fridge is a Zigbee smart plug connected to the fridge
em_channel_* are the sensors exposed
processor use and memory use are self-explanatory
mijia_plug_salamusica_ng is another smart plug (in a less “important” position)
In my case, it would be great to have the option to limit the data saved per entity, since I’m not really interested in 10 days of fridge power consumption; 2-3 days would be enough.
I would still keep 10 days of records for the other entities.
Thanks to everyone in this thread for such an interesting topic!
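As far as I know, the recorder doesn’t support per-entity retention: purge_keep_days is global. The closest built-in options are excluding the noisy entities from the recorder entirely, or letting a longer-term store (InfluxDB, as mentioned earlier in the thread) keep the data you care about. A sketch of the exclude filter; the entity IDs below are hypothetical examples, not the real ones from the post above:

```yaml
recorder:
  purge_keep_days: 10
  exclude:
    entities:
      # hypothetical entity ID -- substitute your own noisy sensor
      - sensor.aqara_plug_fridge_power
    entity_globs:
      - sensor.em_channel_*
```

Excluded entities record nothing at all, so this is coarser than a per-entity retention window, but it is what the integration offers today.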
Thank you @denilsonsa for this guide.
I have read this thread many times but still have problems with my Database.
I had an install of HA with a final database size of 53 GB. I decided to make a fresh install and restored my backup. After the reinstall, my database grew 1.1 GB in the first day. So I followed your guide and identified the main causes of this. Afterwards, I applied a filter to my configuration.yaml and issued a purge + repack.
Now my database is growing more slowly, but still by about 200 MB/day.
After applying purge & repack, my database shows NULL for all values. Is there a way to fix it?
I’m having the same issue: the recorder is not purging, and mine is 70 GB already.
I tried reducing purge_keep_days to 5 and calling the purge service, but it has no effect. There are still months of history.
I’m worried I’ll lose my energy long term stats.
Is there any trick I’m missing?
I’d be OK with clearing the db if I could keep my long-term statistics, the energy stats at least.
Any tips?
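One thing worth double-checking is the exact service call. From Developer Tools → Services (YAML mode), a manual purge looks roughly like this; repack is what actually rewrites the file and returns disk space to the OS, and apply_filter retroactively applies your include/exclude filters to already-recorded data:

```yaml
service: recorder.purge
data:
  keep_days: 5
  repack: true
  apply_filter: true
```

Without repack: true, SQLite keeps the freed pages inside the file, so the file size on disk never shrinks even after a successful purge.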
I see the disk went to 100% full last night, and then back to 90%.
I’m starting to suspect that HA needs more free disk space to complete the purge. Could that be the case?
Is there any standard approach to tackle this without losing all my (precious) long term stats?
I’m very glad to see the database getting some developer attention lately, and I remain hopeful that we’ll be seeing great things in the future. That said, I personally wouldn’t consider the HA database a good repository for anything “precious.” If it were up to me, I’d have put the long-term energy statistics in their own database, or maybe even export them to simple csv files. There are lots of other tools which are good at manipulating and analyzing statistics. Getting the data out of HA would be the first step.
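For the energy data specifically, the long-term statistics live in their own tables (statistics and statistics_meta), separate from states, which makes a one-off export feasible. Here is a sketch using Python’s built-in sqlite3; the table and column names match recent schema versions but do change between releases, so treat them as assumptions and run this against a copy of the database:

```python
import csv
import sqlite3

# Both paths are assumptions; use a copy of the database, not the live file.
DB_PATH = "home-assistant_v2.db"
OUT_CSV = "long_term_stats.csv"

# Column names (start_ts, mean, min, max, sum) are from recent recorder
# schema versions -- verify against your own database first.
EXPORT_QUERY = """
SELECT sm.statistic_id, s.start_ts, s.mean, s.min, s.max, s.sum
FROM statistics AS s
JOIN statistics_meta AS sm ON sm.id = s.metadata_id
ORDER BY sm.statistic_id, s.start_ts
"""

def export_stats(db_path: str, out_csv: str) -> int:
    """Dump long-term statistics to CSV; returns the row count."""
    with sqlite3.connect(db_path) as con:
        rows = con.execute(EXPORT_QUERY).fetchall()
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["statistic_id", "start_ts", "mean", "min", "max", "sum"])
        writer.writerows(rows)
    return len(rows)
```

Once the statistics are in CSV, any spreadsheet or analysis tool can take over, and the HA database stops being the only copy of your “precious” data.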
I updated the query, because I got unexpected results from rct’s query (like percentages over 100%).
SELECT
    entity_id,
    total_count,
    100.0 * total_count / SUM(total_count) OVER () AS perc_count,
    total_bytes,
    100.0 * total_bytes / SUM(total_bytes) OVER () AS perc_bytes,
    (total_count * 60 * 60) / (unixepoch() - least_update) AS count_per_hour,
    (total_bytes * 60 * 60) / (unixepoch() - least_update) AS bytes_per_hour,
    datetime(least_update, 'unixepoch', 'localtime') AS available_since
FROM (
    SELECT
        S.entity_id,
        COUNT(1) AS total_count,
        SUM(LENGTH(shared_attrs)) AS total_bytes,
        MIN(last_updated_ts) AS least_update
    FROM state_attributes AS SA
    INNER JOIN states AS S
        ON (SA.attributes_id = S.attributes_id)
    GROUP BY S.entity_id
) AS A
ORDER BY bytes_per_hour DESC
You should always have 125% of the database size in temporary space available for temp tables, schema changes, rebuilds, repairs, optimize runs, etc. Anything less and you risk running out of space or the system failing in some way.
If you are running MariaDB < 10.6.9, the query can take so long that it doesn’t purge (see this MariaDB bug report: https://jira.mariadb.org/browse/MDEV-25020). There has also been an optimization in 2023.2.3 to reduce the purge time.
Finally, if you are running MariaDB < 10.5.17, there are other known issues with purging/performance that will likely never be fixed in the 10.5-and-below series.
Thanks for the clear explanation!
I am on SQLite, btw. I’ll copy the db over to a proper PC and see if it cleans up overnight (after manually calling a purge run).