Hopefully someone can help me. My database gets corrupted every night at 4:12 during the auto purge. I did an integrity check using DB Browser for SQLite and it found 3 issues.
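(For anyone reading along: the same check can be run from the command line instead of DB Browser. A rough sketch, assuming the sqlite3 command-line tool and the default database name:)

sqlite3 ./home-assistant_v2.db "PRAGMA integrity_check;"

A healthy database prints a single "ok"; a corrupted one lists the problems it found.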
My primary concern is my long-term energy data, which I need to preserve; the other data points I can live with losing.
Conceptually, I'd say I have two options: either fix this DB or add the energy data to the new DB.
Is there anyone who can help me fix this, ideally with a step-by-step approach? I have some computer/IT/programming skills; however, databases are new to me.
After some more searching/trying/failing, I found a solution that worked for me. This is largely based on the solutions provided in other threads; however, for me some intermediate steps were unclear, so I'll share what I did in a way that makes sense to me (for others like me, or future-me when this happens again).
First things first: HA runs as a VM on a Windows NUC without a monitor/keyboard, so I use Remote Desktop if I need to access it. It's also important to have Samba or something similar set up to get files on/off HA.
I could have done this on the NUC via remote desktop, but chose to use my laptop.
What I did
Locate and copy your corrupted DB from HA to your laptop/PC
Place the sqlite3.exe command-line tool from SQLite and the corrupted DB in the same folder
Open up Command Prompt (cmd) and change directory (cd) to the folder the DB is in
Run this command (on one line, don't split it; the "|" is correct), then WAIT and, in my case, ignore the error messages you see (important, I'll explain below why this threw me off): sqlite3 ./home-assistant_v2.db ".recover" | sqlite3 ./home-assistant_v2.db.fix (see the annotated sketch after these steps)
Here, ./home-assistant_v2.db is your corrupted DB and ./home-assistant_v2.db.fix is the name of your new, fixed DB
Once this is done (you'll know because you get a new command prompt), you should have a new DB in your folder
Before you delete your corrupted DB in HA and replace it with the fixed one, you should stop HA Core. I did this by going to Developer Tools > Services, searching for Home Assistant Core Integration: Stop, and running that
Now, using Samba, I copied the fixed DB to the HA config folder, deleted the corrupted one, and renamed the fixed DB to home-assistant_v2.db
I then restarted my HA instance via the VM (the web interface won't work because you stopped Core). This could probably also be achieved via SSH, but I didn't have that on my laptop, so I went for the easy solution.
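To make that easier to copy, here is the command-line part again as an annotated sketch (the filenames are just the ones I used; the quotes around .recover must be plain straight double quotes, not the curly ones a word processor likes to paste):

REM run inside the folder that holds sqlite3.exe and the copied database
REM dump everything sqlite3 can still read from the corrupted file
REM and pipe it straight into a brand-new database
sqlite3 ./home-assistant_v2.db ".recover" | sqlite3 ./home-assistant_v2.db.fix
REM optional sanity check on the freshly built database (should print "ok")
sqlite3 ./home-assistant_v2.db.fix "PRAGMA integrity_check;"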
For a couple of days now, the nightly purge has happened without corrupting the DB, so fingers crossed this fixed it for me, for now.
Lastly, why are the WAIT and "ignore errors" parts important? Very quickly after executing the command, I got an error/warning about a purge error on line XXX and then nothing else. I thought that meant it had stopped and frozen, and I clicked away the CMD window. After going through this a couple of times and not understanding why it worked for others and not for me, I got distracted by something and didn't immediately close the CMD window. A while later, I got back to my laptop and saw that the command had finished with a handful of similar error messages; that is when I understood that while it looked frozen, it had actually continued to run.
Now I was afraid the errors meant I still had a bad DB, but I tried putting it back anyway (a nothing-to-lose kind of situation), and it turned out it worked.
Let me know if you have any questions or comments about the write-up.
All the best!
Yours just fixed my issue and I am very grateful for this! (Not losing 12 months of historical energy data.)
EDIT: Just for the record…
I just did one thing differently: I did not use sqlite3.exe, but logged into my HA host with PuTTY and used the sqlite3 on the HA host.
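In case it helps someone else, the same pipeline on the host looked roughly like this for me; a sketch that assumes sqlite3 and the ha CLI are available in the shell you land in (they are in the SSH add-on on HA OS) and that the database sits in /config:

# stop Core first so nothing writes to the database while you work
ha core stop
cd /config
# rebuild a clean copy from whatever is still readable
sqlite3 ./home-assistant_v2.db ".recover" | sqlite3 ./home-assistant_v2.db.fix
# keep the broken file around just in case, then swap in the fixed one
mv ./home-assistant_v2.db ./home-assistant_v2.db.corrupt
mv ./home-assistant_v2.db.fix ./home-assistant_v2.db
ha core start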
You are my hero too!
Did exactly the same. I copied the sqlite command from your post and pasted it into my shell, and got an error regarding the double quotes; they were incorrectly pasted, hahaha.
Hi.
I've noticed that for around 3 days, the recording of values was failing only between roughly 04:00 and 14:00, until yesterday, when it stopped working completely…
Then I did exactly the same as peterevers described. I got errors like this during execution:
Parse error near line 123: object name reserved for internal use: sqlite_stat1
Parse error near line 215454045: no such table: sqlite_stat1
Parse error near line 215454046: no such table: sqlite_stat1
Parse error near line 215454047: no such table: sqlite_stat1
After about 2 hours and some 20 of those iterations, it was finished. I copied it back to HA via WinSCP, and after rebooting the HA VM, everything is now working fine. Let's see if it stays like this.
Brilliant! Thank you.
Just recovered over 18 months' worth of history.
I used to get the DB corruption issue quite regularly with a Pi and too large a history retention time.
Moved to a NUC and everything was hunky-dory until 4:12 this morning. I only just found your post, so I lost most of today's data, but it was well worth it.
I saw some other posts about swapping to MariaDB, so I might look into that, but this was a lifesaver.
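For anyone else considering that route, the switch is basically pointing the recorder at the external database in configuration.yaml. A sketch only, with placeholder credentials and host; the database and user must already exist on the MariaDB server:

recorder:
  # placeholder user/password/IP/database name
  db_url: mysql://homeassistant:YOUR_PASSWORD@192.168.1.10/homeassistant?charset=utf8mb4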
Thanks once again
I am trying to fix my database.
Every time a repack is performed (scheduled monthly or manually), the database gets corrupted and starts from scratch, forcing me to restore a pre-repack backup.
I am trying the procedure described here, and indeed the database gets repaired, as shown in the screenshot below. However, once I replace the file in HA and restart the system, the database is detected as corrupted again.
Any ideas?
Hi guys, this saved me some months ago, and now I've run into the same problem again. I have since learned my lesson and now do a daily backup to SMB, so I could recover (it seems) with only the current day of data lost.
Did you stop the Core integration before copying? This is crucial. I tested it without doing that, and the database got corrupted instantly.
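For reference, the stop can also be fired as an action from Developer Tools (the same thing as the Services step in the write-up, just as YAML):

action: homeassistant.stop

The frontend stays down until Core is running again, so restart afterwards from the VM console or host.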
Hello everyone, this week my database got corrupted again. Last time, with the method described here, I saved the mess and everything went back to working without problems, but now the .recover does not complete, so I am stuck. I also downloaded DB Browser and I can see the data in the corrupted database. As far as I understand, the energy dashboard data (I am interested in saving at least that) is contained in the statistics table; is it possible to insert the saved statistics table into the new database with sqlite3? For now, I have exported the table to .csv (something like the sketch below is what I have in mind).
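A rough sketch of the idea, assuming the corrupted file is still readable enough to query, the new database was freshly created by HA (so there are no conflicting statistics rows yet), and both files are on the same recorder schema version:

-- open the NEW database first: sqlite3 ./home-assistant_v2.db
ATTACH DATABASE './home-assistant_v2.db.corrupt' AS old;
-- copy the long-term statistics tables across
INSERT OR IGNORE INTO statistics_meta SELECT * FROM old.statistics_meta;
INSERT OR IGNORE INTO statistics SELECT * FROM old.statistics;
INSERT OR IGNORE INTO statistics_short_term SELECT * FROM old.statistics_short_term;
DETACH DATABASE old;

No idea yet whether HA will accept the result, so I would only try this on copies of both files.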
No way. The repack destroys my database.
In September, I decided to start fresh with a clean database, setting the purge to 90 days.
Until today, everything was fine, and the database was only 4GB. Even the repack, scheduled every second Sunday of the month, worked smoothly, and I noticed the database size decreasing significantly.
However, last night (the second Sunday of December), it got corrupted again.
I tried downloading the file, repairing it with SQLite, and re-uploading it after stopping the core. Upon restarting, the freshly uploaded database is immediately marked as corrupted, and HA starts with a new database.
Even if I manually start the purge action with only:
action: recorder.purge
data:
  repack: true
the database gets corrupted.
For now, I've restored yesterday's backup, but I'm sure I'll face the same issue again in a month.
Could the problem be related to the 90-day data retention setting? I could try reducing it, but even so, the next purge will likely delete the entire database.
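If I do try a shorter retention, it is just the recorder option in configuration.yaml; a sketch with an example value, not a recommendation:

recorder:
  # keep 30 days instead of 90 (the recorder default is 10)
  purge_keep_days: 30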