Long term statistics on a separate database

My current SQLite db has problems and I can’t delete it without losing statistics. Migrating the 16 GB to MySQL is a real PITA. I’m completely lost on how to solve this bug.

“problems” is not going to get you help.

The normal solution to a database “problem” is to delete it and have HA create a new one. That also deletes statistics. That’s why I voted here.
I wasted many days trying to find the cause, and I also tried to migrate the 16 GB to MySQL, which is a real challenge (it would take days to import).
A second database for statistics would solve it. Or an option in HA to delete all tables except statistics. If I try that through sqlite3, Home Assistant recreates a completely new database…
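For anyone who wants to experiment with the row-level variant of this (clearing the high-churn tables while leaving the statistics tables alone): a minimal sketch using Python’s built-in sqlite3 module. The table names (`states`, `events`, `statistics`) mirror the standard recorder schema, but treat them as assumptions and only ever run this on a copy of the file, with Home Assistant stopped:

```python
import os
import sqlite3
import tempfile

def purge_non_statistics(db_path: str) -> None:
    """Delete rows from the high-churn recorder tables, leaving the
    statistics tables untouched, then VACUUM so the file shrinks."""
    con = sqlite3.connect(db_path)
    try:
        for table in ("states", "events"):  # assumed recorder table names
            con.execute(f"DELETE FROM {table}")
        con.commit()
        con.execute("VACUUM")  # reclaim freed pages on disk
    finally:
        con.close()

# Tiny demo on a throwaway database mimicking the table names:
demo = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(demo)
con.execute("CREATE TABLE states (id INTEGER, state TEXT)")
con.execute("CREATE TABLE events (id INTEGER)")
con.execute("CREATE TABLE statistics (id INTEGER, mean REAL)")
con.executemany("INSERT INTO states VALUES (?, ?)",
                [(i, "on") for i in range(1000)])
con.execute("INSERT INTO statistics VALUES (1, 21.5)")
con.commit()
con.close()

purge_non_statistics(demo)

con = sqlite3.connect(demo)
print(con.execute("SELECT COUNT(*) FROM states").fetchone()[0])      # 0
print(con.execute("SELECT COUNT(*) FROM statistics").fetchone()[0])  # 1
con.close()
```

This only works cleanly if HA is not running; if the recorder is still attached it will keep writing (or, as noted above, recreate the database).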

Edit: with trial and error on the new recorder purge options I managed to trim the sqlite3 database down from 16 GB to 1 GB. Hopefully the new purge options keep the database size down and tolerable for restores (mine caused havoc once when I tried to restore it from a backup file). I really hope Home Assistant makes some changes to how statistics are stored; I would prefer an external MySQL or MariaDB. The current setup makes me very unhappy whenever I need to rely on restores from HA backup files, and my SSD will fail sooner or later.
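For reference, the purge behaviour mentioned above is controlled from the recorder section of configuration.yaml. A minimal sketch; the keys are standard recorder options, but the values here are just examples to tune for your own setup:

```yaml
recorder:
  auto_purge: true       # run the nightly purge (default)
  purge_keep_days: 7     # keep only a week of states/events history
  commit_interval: 30    # batch writes to reduce SSD wear
```

Shorter `purge_keep_days` keeps the states/events tables small without touching long-term statistics, which survive the purge.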

Has anyone tried to use the rrd-tool custom integration?

Guess if I hadn’t been a fool and bought myself a gigantic home server, I’d have killed off my Pi by now :slight_smile:

Tracking DB size and having a few extra sensors with LTS, I see myself growing into a problem in the future :wink:
LTS 17 Feb 22: 2.3 million rows
LTS 12 Dec 23: 5.5 million rows

Trust that the HA team has a plan, and this FR might be a solution. I have seen lots of improvement in DB performance, and I’m really glad I stuck with SQLite when everyone screamed “move to MariaDB”!

PS: the peak in states the last few weeks was a Zigbee device starting to report 12 entities about every second, adding 10+ million rows to states!
(Solved the DB issue thanks to @bdraco’s brilliant service, recorder.purge_entities, then corrected the z2m reporting settings to avoid that many data points)
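For others hitting the same thing: recorder.purge_entities can be called from Developer Tools → Services or from a script. A sketch of such a call — the entity ID below is a placeholder for whatever chatty device is flooding your states table:

```yaml
service: recorder.purge_entities
target:
  entity_id:
    - sensor.flaky_zigbee_power   # placeholder, replace with your entity
```

This removes the recorded history for just the listed entities, so you don’t have to purge or delete the whole database.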

@ArveVM - Love those sensors that drill down to the table level; you should include those in your post for others who might want to do the same.

Sorry, thought I had added the link :yum:
Screenshot, card-code and conf-yaml

Couldn’t agree more!!
Lost 2 years of energy consumption data because of a corrupted database.
Nobody wants to lose all long-term statistics.
Splitting the db in two would be the perfect solution.


Did you try leaving the long terms stats and just deleting the non long term stuff?

hello,
thank you for your reply.
What do you mean? Is this possible?

I think my long-term data was corrupted, as after 29/12 it was no longer being stored.
I managed to restore a backup from 28/12 and it is working again. I lost 25 days of consumption data, but that is much better than losing 2 years…

I still have the corrupted database.
I wish someone could tell me whether the corrupted row can be fixed.

Thanks for the reply.