DB image malformed. How to fix it?

Yes, but then you lost the history.

Is there no way to install sqlite on the Hass.io Raspberry Pi image?!

I’m also stuck on this

Faced the same issue.
But after deleting home-assistant_v2.db (renamed it), I’m unable to connect to my HA over SSH and HTTP, even though it’s pingable.
Any suggestions?

UPD: First issue shown on a connected monitor: “Failed to mount HassOS overlay partition”

UPD2: Fixed by running a filesystem check tool (Paragon extFS for Mac)

If you are using Hass.io, then sqlite3 can be installed from the CLI of the Docker instance itself (install the SSH add-on from the HA web interface) using

apk add sqlite

then you can continue and follow the instructions given by @eddriesen

5 Likes

Little update:
After another corrupt db and bad overall performance (I have a LOT of sensors) I just went with MySQL, which was already installed and used on the host system. Much faster at rendering history now!

Thanks @redlegoman!
meanwhile I figured out another way (inspired by the posts of @eddriesen and @antoweb76):

  1. Stop HA via the SSH CLI using hassio ha stop
  2. Move the database file to my Windows computer using Samba
  3. Download the sqlite CLI for Windows from https://www.sqlite.org/download.html
  4. Perform sqlite3 ./home-assistant_v2.db_old ".dump" | sqlite3 ./home-assistant_v2.db_fix
  5. Move the new db file back using Samba
  6. Rename the db file (remove the _fix part)
  7. Restart HA via the SSH CLI using hassio ha start
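Side note for anyone who can’t get the sqlite3 CLI installed at all: Home Assistant itself runs on Python, and Python’s standard library always includes the sqlite3 module, so step 4 can also be done without the CLI. A rough sketch (the function name is my own; file names match the steps above):

```python
import sqlite3

def dump_copy(src_path, dst_path):
    """Rebuild a SQLite database by replaying its SQL dump.

    Rough equivalent of: sqlite3 src ".dump" | sqlite3 dst
    Like the CLI approach, rows past the first badly corrupted
    page may still be lost.
    """
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # iterdump() yields the same CREATE/INSERT statements as ".dump"
        dst.executescript("\n".join(src.iterdump()))
    finally:
        src.close()
        dst.close()
```

Run dump_copy("home-assistant_v2.db_old", "home-assistant_v2.db_fix") from the config directory, then continue with steps 5–7 as above.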

Your solution seems even simpler though, and I managed to install sqlite the way you mentioned. I’ll try it next time :smirk:

3 Likes

@eddriesen - could you expand, please? It looks to me like the engine is installed within the Docker container, as I had to install it to fix the database.

1 Like

I ended up with a zero-size ‘fix’ file, so I’m guessing the whole file is goosed!

I went with an external MySQL database instead of the SQLite one. It’s a bit snappier (history, …) and not so prone to errors.

1 Like

Thanks, this saved my logs after a restore as well. It took about 3-5 minutes on a Raspberry Pi 4 with a 1 GB SQLite DB. The resulting DB was 22 MB, so I guess it threw a lot of corrupted data out.

For those struggling with the “install sqlite3 in the homeassistant container” step - no need on Hass.io (or whatever it’s being called now).

If you enable the root login (don’t bother with the SSH add-on, it doesn’t get you access to the low-level OS the way the real root login does), you can do the fixes from the main shell of the system.

The config directory is available in /mnt/data/supervisor/homeassistant

See https://developers.home-assistant.io/docs/operating-system/debugging#ssh-access-to-the-host for instructions on how to enable SSH root logins.

On a recent release, here’s how I did the recovery.

Prerequisites:

Enable SSH access to the host per procedure linked above.

Step 1: Login to the host

ssh -l root -p 22222 hassio

It will bring up the hassio-cli interpreter.

Step 2: Stop Hassio

Type core stop and wait for it to complete.

Step 3: Change directories to the data dir

  • Type login to get the root prompt.
  • Then, run
    cd /mnt/data/supervisor/homeassistant/

Step 4: Recover the SQLite database

Run the following, then grab a cup of coffee/tea/beer.

sqlite3 ./home-assistant_v2.db ".dump" | sqlite3 ./home-assistant_v2.db.fix
mv ./home-assistant_v2.db ./home-assistant_v2.db.broken
mv ./home-assistant_v2.db.fix ./home-assistant_v2.db

Step 5: Get back into the hassio-cli

Type hassio-cli to re-enter the interpreter.

Step 6: Restart Home Assistant

core start

Validation:

Exit out of the hassio-cli by typing exit, then watch the homeassistant container’s logs:

docker logs --follow homeassistant
9 Likes

Same here, just an empty file as an output :frowning_face:

Lost all the data, but that was fine for me! Thanks!

Tried that too - empty file as an output :frowning:
I tried to copy the DB manually from a 2nd install - I’m migrating now - but that ended up with a completely messed-up HA. It entered a kind of safe mode… Had to restore from a snapshot again…

Same issue, and the fix was an empty file. Had to toss the history and start from scratch. Is there a way to recover data from the corrupted database using JupyterLab? I have not tried it, but maybe I will give it a shot.
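On the JupyterLab idea: you don’t need the sqlite3 CLI for a first look, since the Python stdlib sqlite3 module (already available in a JupyterLab kernel) can at least tell you how bad the corruption is. A minimal sketch (the function name and error limit are my own choices):

```python
import sqlite3

def integrity_report(db_path, max_errors=10):
    """List up to max_errors problems found by SQLite's integrity check.

    A healthy database yields exactly ['ok'].
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "PRAGMA integrity_check(%d)" % max_errors
        ).fetchall()
    finally:
        con.close()
    return [row[0] for row in rows]
```

If only some tables are flagged, you may still be able to SELECT from the intact ones and export what survives; a badly damaged file may instead raise sqlite3.DatabaseError the moment you query it.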

As I’m in the process of migrating from Hass.io installed directly on an external SSD (which is not recommended, as I found out) to a Raspbian installation with Hass.io on Docker, I tried the following thing.
So now I have the old setup on the SSD and I’m setting up the new one on Raspbian on an SD card. I did the following:

  • booted old SSD setup
  • created another snapshot and downloaded it to my PC
  • booted new sd card setup
  • restored it with “restore selected” option (no wipe)

Now my history is there. I think recently I did the “wipe and restore” option… I think. But honestly - I don’t know why it’s here now :slight_smile:

@eddriesen thanks for this! Unfortunately like some others, I was getting a 0 byte fixed database with this method. Using the newer sqlite3 (v3.29.0+) .recover command though, I managed to retrieve a sizable amount. Leaving this here for others (and myself inevitably) in the future:

sqlite3 ./home-assistant_v2.db ".recover" | sqlite3 ./home-assistant_v2.db.fix
mv ./home-assistant_v2.db ./home-assistant_v2.db.broken
mv ./home-assistant_v2.db.fix ./home-assistant_v2.db
14 Likes

Hello,
try this command, which worked well for me (it rewrites the trailing “ROLLBACK; -- due to errors” that a corrupt dump ends with into a COMMIT, so the rows recovered before the error are kept):

sqlite3 ./home-assistant_v2_old.db ".dump" | sed -e 's|^ROLLBACK;\( -- due to errors\)*$|COMMIT;|g' | sqlite3 ./home-assistant_v2.db

This works for me too, when getting a zero size result with the dump method.

Unfortunately, in my case, the database is always corrupted after restoring a snapshot.

This gives me a “line xy database or filesystem full” error.
However, the filesystem has 10 GB of free space and the db size is 1.2 GB, so there should be plenty of space.
Any idea?

I came to this error after moving from a supervised installation to HassOS and restoring from a snapshot. It’s a bit sad if I need to expect this after each restore.

@Filoni Thanks for the info. Working like a charm.
I had to compile sqlite 3.30 from source (since I’m on Buster on my Raspberry Pi 3), but after that initial effort I could go from a 1.2 GB (broken) to a 500 MB (working fine) DB.