Same here, just an empty file as output.
Lost all the data, but that was fine for me! Thanks!
Tried that too - empty file as output.
Tried to copy the DB manually from the 2nd install (I’m migrating right now), but that ended up with a completely messed-up HA. It entered a kind of safe mode… Had to restore from a snapshot again…
Same issue, and the fix produced an empty file. Had to toss the history and start from scratch. Is there a way to recover data from the corrupted database using JupyterLab? I haven’t tried it, but maybe I’ll give it a shot.
As I’m in the process of migrating from Hassio installed directly on an external SSD (which, as I found out, is not recommended) to a Raspbian installation with Hassio on Docker, I tried the following.
So now I have the old setup on the SSD and I’m setting up the new one on Raspbian on an SD card. What I did:
- booted old SSD setup
- created another snapshot and downloaded it to my PC
- booted new sd card setup
- restored it with the “restore selected” option (no wipe)
Now my history is there. I think I recently did the “wipe and restore” option… I think. But honestly, I don’t know why it’s here now.
@eddriesen thanks for this! Unfortunately, like some others, I was getting a 0-byte fixed database with this method. Using the newer sqlite3 (v3.29.0+) .recover command, though, I managed to retrieve a sizable amount. Leaving this here for others (and inevitably myself) in the future:
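# rebuild everything sqlite3 can salvage from the corrupted DB into a fresh file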
sqlite3 ./home-assistant_v2.db ".recover" | sqlite3 ./home-assistant_v2.db.fix
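# keep the broken original around, then swap the recovered copy into place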
mv ./home-assistant_v2.db ./home-assistant_v2.db.broken
mv ./home-assistant_v2.db.fix ./home-assistant_v2.db
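If you want to confirm the rebuilt file is actually healthy before pointing HA at it, a quick sanity check (assuming the sqlite3 CLI is on your PATH):
sqlite3 ./home-assistant_v2.db "PRAGMA integrity_check;"
It should print just "ok" on a good database.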
Hello,
Try this command, which worked well for me:
sqlite3 ./home-assistant_v2_old.db ".dump" | sed -e 's|^ROLLBACK;\( -- due to errors\)*$|COMMIT;|g' | sqlite3 ./home-assistant_v2.db
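For anyone wondering what the sed is for: on a corrupted database, ".dump" ends its output with ROLLBACK; -- due to errors instead of COMMIT;, so without the substitution the whole import into the new file gets rolled back and you end up with an empty result. You can see it yourself:
sqlite3 ./home-assistant_v2_old.db ".dump" | tail -n 1
On a broken DB that last line reads ROLLBACK; -- due to errors, which is exactly what the sed rewrites to COMMIT;.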
This worked for me too when the dump method gave a zero-size result.
Unfortunately, in my case, the database is always corrupted after restoring a snapshot.
This gives me a “line xy: database or filesystem full” error.
However, the filesystem has 10 GB free and the DB is 1.2 GB, so there should be plenty of space.
Any idea?
I ran into this error after moving from a supervised installation to HassOS and restoring from a snapshot. It’s a bit sad if I need to expect this after every restore.
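Just a guess, but "database or disk is full" despite plenty of free space can also mean SQLite's temporary directory filled up (often /tmp, which may be a small RAM-backed tmpfs) rather than the target filesystem. SQLite honors the SQLITE_TMPDIR environment variable, so pointing it at the big disk before re-running might help (the directory path here is just an example):
export SQLITE_TMPDIR=/home/pi/sqlite-tmp
mkdir -p "$SQLITE_TMPDIR"
sqlite3 ./home-assistant_v2_old.db ".dump" | sed -e 's|^ROLLBACK;\( -- due to errors\)*$|COMMIT;|g' | sqlite3 ./home-assistant_v2.db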
@Filoni Thanks for the info. Working like a charm.
I had to compile sqlite 3.30 from source (since I’m on Buster on my Raspberry Pi 3), but after that initial effort I could go from a 1.2 GB (broken) to a 500 MB (working fine) DB.
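In case it helps anyone else on Buster, the build was the standard autoconf dance; the tarball below is 3.30.1, check sqlite.org/download.html for the current one:
wget https://www.sqlite.org/2019/sqlite-autoconf-3300100.tar.gz
tar xzf sqlite-autoconf-3300100.tar.gz
cd sqlite-autoconf-3300100
./configure
make
sudo make install
# new binary lands in /usr/local/bin; verify with:
sqlite3 --version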
Great!! Working like a charm!!
".recover" worked well for me! However, I had to copy the DB down to my Windows machine and do it there, because on my RPi:
".dump" threw these errors (despite having 6.2 GB free):
Error: near line 1877906: database or disk is full
and “.recover” said this:
SQL error: no such table: sqlite_dbpage
Thx again!!
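By the way, "no such table: sqlite_dbpage" usually means the sqlite3 binary was built without the sqlite_dbpage virtual table that ".recover" relies on (the SQLITE_ENABLE_DBPAGE_VTAB compile option). You can check what your binary was compiled with:
sqlite3 :memory: "PRAGMA compile_options;" | grep -i dbpage
If nothing comes back, grab a build that includes it; the precompiled binaries from sqlite.org should.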
Hi Filoni, thanks for the hint. Worked for me after a snapshot restore left tons of errors in the logs.
Hi! Just saw this in my logs too…
Is it safe to just delete the DB?
The DB is just for storing values, right? No settings or other stuff get deleted?
Same here! After restoring a snapshot it works initially until you reboot.
If your history is sooooooo important, why don’t you write what you need to keep to an InfluxDB?? I have lost count of the number of times I have had to delete my database for one reason or another… It’s just history anyway…
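For what it’s worth, a minimal sketch of what I mean, using the influxdb integration in configuration.yaml (host and entity names are placeholders, adjust to your setup):
influxdb:
  host: localhost
  port: 8086
  database: home_assistant
  include:
    entities:
      - sensor.living_room_temperature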
Some care, some don’t.
Hey now, we can all play nice here! It’s only Wednesday… Looks like if people are bothering to try to keep their data, they’re probably doing something with it. I’ve learned the hard way that capturing data with the default SQLite setup unfortunately almost always fails eventually… This sucks because the data science integrations use it.
In my career I have analyzed a lot of data, occasionally including large SQLite databases of cleaned-up and/or transformed data subsets ready for modeling, and I have rarely had problems with them.
Out of curiosity, do you happen to know why this happens in HA (excluding things like power failures causing corruption, etc.)?
Well, I think people are playing nice! I did say that if you want to keep data you need to use something like InfluxDB. The standard database does seem to need to be deleted whenever you, for example, restore a snapshot. I chose to delete mine this week because I missed that MariaDB had changed the encoding… Last week I found that lots of entities were not updating, probably because of that… Anyway, my database has nothing I NEED; otherwise I’d be using InfluxDB to read in the stuff I care about.
I tend to think it’s because recorder isn’t shut down when you make snapshots, so you get a corrupted DB, but that’s just a guess. They also update and change the database schema a fair bit, so maybe that can lead to corruption as well…