How to keep your recorder database size under control

I’m looking forward to all the changes! It’s great to see the Recorder database getting some developer love!

Speaking of FRs, there have been a number of them requesting variations on the theme of making it possible to retain some events and state changes longer than others, rather than having just one retention period for all (or excluding the entity altogether). Here’s the one I submitted, but it’s a subject which comes up a lot, with a number of (good) ideas on how to implement it.

Is something along these lines in consideration?

Thank you!!

Don’t take it personally mate, most of the mods here seem to say that any time you suggest any feedback at all no matter how much you kiss up to them. I just praised the 2022.5 changes on the discord and mentioned one thing I considered unnecessary then got called “massively negative”.

I think you’ve got some very good points personally and I have noticed the sqlite database starting to grow out of control lately. @bdraco thanks for working on it! Much appreciated.

2 Likes

Wow, I had to look this up on discord because I was curious who you talked to and what you said. Do you think this is the best way to start a convo as your first post on discord?

[screenshot of the Discord message]

TBH, it seems like you just said that to pick a fight. What do you expect?

1 Like

We can talk about this in PM if you like, don’t want to derail the thread.

There’s nothing to talk about, I’m just baffled that you think this is positive feedback. Nothing you said in discord was positive, you just started with a sarcastic assumption. Making your previous post a lie just to make the mods look bad. I just don’t get it. Why lie?

Why not share the whole conversation rather than one message taken out of context, and intentionally leaving out the bit where I said I liked everything else in 2022.5?

Anyway, I shall endeavour to not feed the troll, despite the fact it’s a mod. Won’t say anything else about it.

Sure, I’ll gladly highlight everything too. Let’s see:

You start with a sarcastic assumption.

Call beginners plebs…

Have 1 small compliment.

Continue to bash the work…

Yes, the mods here were totally uncalled for asking you to be nicer.

1 Like

Hi, I don’t want to take sides, but the last 6 messages (7, including this one) are completely off-topic for this thread (which is already getting quite long). For the sake of keeping things on-topic, maintaining a good signal-to-noise ratio, and making it easier to read for people in the future, I suggest moving these 7 messages (this one included) to another topic somewhere else. Thanks!

7 Likes

Enjoy in 2022.6.x

10 Likes

My sdcard and I thank you @bdraco ! Would give 2 likes if I could.

Truly looking forward to it :heart_eyes:

Wow! It’s been a long time since I looked forward to a version update. You’re restoring my faith!

1 Like

I’ve been using the following to keep my recorder information in memory to avoid excess writing to my SD card:

recorder:
  purge_keep_days: 1
  db_url: 'sqlite:///:memory:'

I upgraded to 2022.5.0 today and it looks like the in-memory option is no longer supported. Is there an alternate method to keep the recorder information in memory?

Unfortunately, the in-memory SQLite database does not have proper locking between threads, which leads to random failures, so it’s no longer available.

Although it is not technically supported, you can probably work around it by creating a ramdisk instead.

While I appreciate that the Home Assistant team is (finally) giving some attention to the excessive writing it used to do, I do feel like disabling in-memory sqlite is kind of abrupt.
Maybe it isn’t the best solution, but we are not given an alternative other than letting HA take control over the lifespan of my SSD (or SD card) again. (I’ve sacrificed 2 SD cards and 1 SSD in the past and I didn’t even want all that data)

2 Likes

Hi All, I also ran into this with the recent 2022.5.0 update and after playing around a bit trying to mount a ramdisk, I ended up just storing the database in /dev/shm which seems to be working.

Not sure how it’ll work over time, but if you were using 'sqlite:///:memory:', the following recorder config seems to get it back into memory on the 2022.5.0 release:

recorder:
  db_url: 'sqlite:////dev/shm/ha-recorder-v2.db'
  [ other options ]

I’m new to Home Assistant (fantastic software) so not sure how this will work over time. My setup (PI4/4GB/SSD dedicated to HA) has the data going to the InfluxDB add-on for longer term storage.

Hope that helps a little!

4 Likes

Just to be sure, is any recorder or logger config change required to fully benefit from the newly optimized data writing? I did not see anything that would suggest it in your code changes, but I won’t rule out an oversight on my end.

Also, does this change then make it possible to somehow purge the events table? I still get an error message when I try to empty the events table via phpMyAdmin.

#1701 - Cannot truncate a table referenced in a foreign key constraint (`homeassistant`.`states`, CONSTRAINT `states_ibfk_1` FOREIGN KEY (`event_id`) REFERENCES `homeassistant`.`events` (`event_id`))

Or would I need to force-empty it once because of the old way of writing, and can then safely empty it in the future?
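For reference, that error comes from the `states.event_id` foreign key pointing at `events`. A common MySQL/MariaDB workaround (a sketch, not official Home Assistant guidance; stop Home Assistant first and take a backup, since this deletes event history):

```sql
-- Assumption: MySQL/MariaDB, with Home Assistant stopped.
-- Option 1: clear the references, then empty the table with DELETE.
UPDATE states SET event_id = NULL;
DELETE FROM events;

-- Option 2: temporarily bypass the constraint check
-- (TRUNCATE is refused while the foreign key exists,
-- even if no row actually references the table).
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE TABLE events;
SET FOREIGN_KEY_CHECKS = 1;
```

DELETE is slower than TRUNCATE on a large table but respects the constraint; TRUNCATE with checks disabled is faster but leaves any remaining `event_id` values dangling, which is why clearing them first is the safer route.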

For those who want to have their database in memory, I would like to suggest this add-on, it works great without issue.

2 Likes

Ciao Denilson, many thanks for your contribution. I’m not only a newbie in the HA world but also have a very low background in IT topics in general… but I have a big passion and I’m curious and willing to learn. So, as soon as I landed on your guide, I began following it. I’d say I almost succeeded in the first part of it, but I’m stuck on adding the scan_interval to the file size sensor. I installed “file size” via the integrations user interface; after some tries (the path I entered was not allowed or invalid), I got it working by adding the lines

allowlist_external_dirs:
  - /config

I didn’t add

sensor:
  - platform: filesize
    file_paths:
      - /config/home-assistant_v2.db

because I input the db path during the integration set-up in the user interface.
Then, my reasoning is: if Denilson added the filesize sensor via configuration.yaml, in the sensors’ section, I’m assuming that my filesize sensor has been “written” somewhere else… so, where do I find it in order to add the scan_interval option?

Thanks for any suggestion anyone of the community will give to me!

Hi! The reason is simple… I wrote this guide around version 2021.4, but starting on version 2022.4, the File Size integration is now available to set up from the UI. Anything being configured through the UI is saved inside /config/.storage/, and those files are automatically managed by Home Assistant itself.

Additionally, since version 2021.6, you can disable polling updates on any integration through the UI. In such cases, if you want to change the polling interval, you have to disable the built-in automatic polling and manually create an automation that forces one or more entities to update.
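As a sketch, such an automation could look like the following (the entity_id is an assumption; check the actual one for your File Size sensor in Developer Tools, and adjust the interval to taste):

```yaml
# Hypothetical example: refresh the File Size sensor every 5 minutes.
# The entity_id below is an assumption -- use the one from your setup.
automation:
  - alias: "Refresh database size sensor"
    trigger:
      - platform: time_pattern
        minutes: "/5"
    action:
      - service: homeassistant.update_entity
        target:
          entity_id: sensor.home_assistant_v2_db
```

The `homeassistant.update_entity` service requests a state refresh for the listed entities, which effectively replaces the old `scan_interval` option for UI-configured integrations.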


As you can see, there were plenty of changes, and I need to update/adapt the guide to the latest HA version. I’m just lacking the time to do so, as real-life stuff gets priority. To make it worse, any time I’ve managed to dedicate to my HA installation has gone into updating it to the latest version and trying to debug this high CPU usage issue. I still haven’t found a solution for it, and I’ve spent many more hours than I wanted.

1 Like

Thanks for this guide. Here’s an updated version of the Viewing states usage query that addresses the move of attributes to another shared table.

SELECT
  COUNT(*) AS cnt,
  COUNT(*) * 100 / (SELECT COUNT(*) FROM states) AS cnt_pct,
  SUM(LENGTH(shared_attrs)) AS bytes,
  SUM(LENGTH(shared_attrs)) * 100 / (SELECT SUM(LENGTH(shared_attrs)) FROM state_attributes) AS bytes_pct,
  entity_id
FROM states
JOIN state_attributes
  ON states.attributes_id = state_attributes.attributes_id
GROUP BY entity_id
ORDER BY cnt DESC

I haven’t dug into it, but the percentages from attributes will total more than 100%, as attributes can be shared across states.

If no one has made a Jupyter notebook available with these queries, I may publish one. It is a lot easier to dig into this data there.
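In the meantime, here is a minimal sketch of running a query like the one above from Python (e.g. inside a notebook). The database path and the LIMIT are assumptions, and the percentage columns are omitted for brevity:

```python
import sqlite3

# Sketch for exploring the recorder database from a notebook.
# DB_PATH is an assumption -- point it at your own database file.
DB_PATH = "/config/home-assistant_v2.db"

# Per-entity usage: row count and attribute bytes, largest first.
QUERY = """
SELECT
  COUNT(*) AS cnt,
  SUM(LENGTH(shared_attrs)) AS bytes,
  entity_id
FROM states
JOIN state_attributes
  ON states.attributes_id = state_attributes.attributes_id
GROUP BY entity_id
ORDER BY cnt DESC
LIMIT 10
"""

def top_entities(db_path: str = DB_PATH):
    """Return (row count, attribute bytes, entity_id) tuples."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchall()
```

Wrapping the query in a function like this makes it easy to load the rows into a DataFrame or plot them directly in the notebook.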

2 Likes