Because the optimisation of SQLite by the HA team was huge, I wanted to switch back from MariaDB to SQLite…
I finally succeeded in switching (back) from MariaDB to SQLite while keeping all states/events and statistics. (I’m on 2023.4.1; the database schema version is 41.)
I used HeidiSQL, connected to the MariaDB database, and:

1. Export all tables separately (with data: ‘replace existing data’; DATA only, no table structure).
2. Change configuration.yaml:
   recorder:
     #db_url  (with no db_url, SQLite is the default DB)
3. Delete the file ‘home-assistant_v2.db’ if it exists in the config folder.
4. Using SSH: ha core stop.
5. (SSH): ha core start. A new ‘home-assistant_v2.db’ is now created. Directly after HA has started:
6. ha core stop. Now open ‘home-assistant_v2.db’ in HeidiSQL.
7. Clear all DATA rows from all the TABLES.
8. In table EVENTS create a key named ‘created’ (without quotes) with datatype ‘datetime’; in table STATES create a key named ‘created’ with datatype ‘datetime’ and a key named ‘domain’ with datatype ‘varchar’ and length 64.
9. In Notepad++ (which I used) edit the file states.sql: find & replace every _binary 0x018725fccf354f12f37776356bc7c8b5 (a MariaDB binary blob; this hex number is just an example) with a SQLite blob literal: X'018725fccf354f12f37776356bc7c8b5'. In Notepad++ set search mode = regular expression, put (_binary\s0x)(\w*) in the Find field and X'\2' in the Replace field, and choose Replace All.
10. Repeat the steps from 9 on the file events.sql.
11. Import all previously exported and (in NP++) converted files in HeidiSQL with ‘File… Run SQL file’.
12. If everything was successful (REMEMBER that you started with all data removed from all the tables) you will now see all the tables filled again with the same data as MariaDB contained.
13. The keys created in step 8 can now be deleted (SQLite doesn’t use them), but I didn’t and everything went fine.
14. Close HeidiSQL.
15. Make sure the ‘home-assistant_v2.db’ in the Home Assistant config folder is the newly filled one.
16. In SSH: ha core start.

All statistics/history/states are back and SQLite is the database again.
NB: If the exported files ‘states.sql’ or ‘events.sql’ are very big, NP++ may take a long time to do the find & replace.
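For very large dump files, the Notepad++ find & replace from step 9 can alternatively be done with a small script. A sketch in Python (the function name is mine; the regex is the same one used in step 9):

```python
import re

# MariaDB dumps binary columns as literals like: _binary 0x018725fc...
BINARY_LITERAL = re.compile(r"(_binary\s0x)(\w*)")

def mariadb_blobs_to_sqlite(sql_text: str) -> str:
    """Rewrite MariaDB _binary 0x... literals as SQLite blob literals X'...'."""
    return BINARY_LITERAL.sub(r"X'\2'", sql_text)

line = "INSERT INTO states VALUES (1, _binary 0x018725fccf354f12f37776356bc7c8b5);"
print(mariadb_blobs_to_sqlite(line))
# INSERT INTO states VALUES (1, X'018725fccf354f12f37776356bc7c8b5');
```

You would read states.sql and events.sql, run them through this function, and write the result back, which avoids loading a huge file into an editor.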
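The extra keys from step 8 can also be added with plain SQL instead of through the HeidiSQL UI. A sketch using Python’s built-in sqlite3 module (the helper function is mine; column names and types are the ones from step 8, and the demo tables are deliberately minimal):

```python
import sqlite3

def add_legacy_columns(conn: sqlite3.Connection) -> None:
    """Add the legacy columns the MariaDB dump still references (step 8)."""
    conn.execute("ALTER TABLE events ADD COLUMN created DATETIME")
    conn.execute("ALTER TABLE states ADD COLUMN created DATETIME")
    conn.execute("ALTER TABLE states ADD COLUMN domain VARCHAR(64)")
    conn.commit()

# For the real migration you would connect to the file instead:
#   conn = sqlite3.connect("home-assistant_v2.db")
# Here, a throwaway in-memory database with minimal tables as a demo:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE states (state_id INTEGER PRIMARY KEY)")
add_legacy_columns(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(states)")]
print(cols)  # ['state_id', 'created', 'domain']
```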
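Step 11’s import (HeidiSQL’s ‘Run SQL file’) can likewise be scripted if you prefer to stay on the command line. A hedged sketch: the helper is mine, and the demo writes a throwaway .sql file rather than touching the real dumps:

```python
import os
import sqlite3
import tempfile

def run_sql_file(conn: sqlite3.Connection, path: str) -> None:
    """Execute a whole .sql dump file against the connection."""
    with open(path, encoding="utf-8") as f:
        conn.executescript(f.read())

# Demo with a tiny generated dump; for the migration you would run
# states.sql and events.sql against home-assistant_v2.db instead.
demo = os.path.join(tempfile.mkdtemp(), "demo.sql")
with open(demo, "w", encoding="utf-8") as f:
    f.write("CREATE TABLE t (b BLOB); INSERT INTO t VALUES (X'01ab');")
conn = sqlite3.connect(":memory:")
run_sql_file(conn, demo)
print(conn.execute("SELECT hex(b) FROM t").fetchone()[0])  # 01AB
```

Note that executescript expects the blob literals to already be in SQLite form, i.e. after the conversion from step 9.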


