How to migrate to a faster database without losing data?

Hi everyone, I have a fairly big Home Assistant instance (in terms of number of sensors) and I just discovered that SQLite is not that performant, so it's suggested to switch to an external database (like MariaDB). But I also understood that migrating the database would mean losing all the data collected until now.
As I would like to keep at least the energy dashboard's historical data, and I gather that keeping 2 databases is not encouraged… how can I migrate my database to an external one and keep working on the new one?

Just an FYI: if you search a bit further in this forum, the apparent lack of performance of SQLite has received quite a few comments.
Now, I do not know your setup, and maybe MariaDB is ‘faster’, but it all depends on what you do with it. Maybe you should detail what you find lacking in your SQLite performance?

I myself installed MariaDB as I wanted to be more at ease in an environment that I know, and I have not noticed any difference in speed… possibly it is measurable, but on-screen… nah.

Well, I do not notice that “lack of performance”, so it would just be an extra for the future… but if the difference is not noticeable, I think I won't even try migrating to MariaDB.

The Home Assistant database structure has become a bit more complex in recent updates to HA. If you are somewhat skilled with SQL, you can probably find a tool that will copy both the structure and the data from a SQLite database to a MariaDB or Postgres database (what I use), with minimal data loss (limited to the time HA is down for the conversion). I think the steps would be:

1. Copy the HA SQLite db somewhere safe.
2. Set up your MariaDB server.
3. Do the Home Assistant yaml steps to aim your HA system at your MariaDB server (see the sketch after this list).
4. Let HA start and create its empty db structure on your MariaDB server.
5. Shut down HA and copy the data from the copy of the SQLite database tables into the MariaDB tables.
6. Restart HA and cross your fingers.

When I moved to PostgreSQL, I just accepted the loss of my data history up to that point. Not ideal, but that was two years ago and I now have almost 1 TB of HA history (note: I copy my HA history to separate PostgreSQL tables for long-term history and just let HA trim its own history in PostgreSQL at 60 days).
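
For step 3, that means pointing the recorder integration at the new server via its db_url option. A minimal sketch, assuming a MariaDB server at 192.168.1.10 with a database and user both named homeassistant (address and credentials are placeholders for your own setup):

```yaml
# configuration.yaml — aim HA's recorder at an external MariaDB server
recorder:
  # the mysql:// scheme works for MariaDB; utf8mb4 avoids character-set issues
  db_url: mysql://homeassistant:YOUR_PASSWORD@192.168.1.10/homeassistant?charset=utf8mb4
  # optional: days of history HA keeps before purging
  purge_keep_days: 60
```

On the first start with this in place, HA creates its empty schema on the MariaDB side, which is exactly what step 4 relies on.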

Your alternative would be to use an InfluxDB database or a similar integration for history. Many folks have success with this; however, it becomes a parallel database to maintain and learn.
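
For reference, a minimal sketch of what that parallel setup can look like with HA's influxdb integration, assuming an InfluxDB 1.x server at 192.168.1.20 (host, database name, and credentials are placeholders):

```yaml
# configuration.yaml — mirror state changes to an external InfluxDB 1.x server
influxdb:
  host: 192.168.1.20
  port: 8086
  database: home_assistant
  username: homeassistant
  password: YOUR_PASSWORD
  # limit what gets mirrored to keep the parallel database manageable
  include:
    domains:
      - sensor
```

The recorder keeps feeding your main database as usual; InfluxDB just receives a copy for long-term history.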

With the expanding functions of HA's energy and other historical statistics, working with your data outside HA requires more careful work, both due to the increasing complexity of HA's tables and to SQLite's limits. Back to your original question about SQLite speed: SQLite is lightweight (as its name indicates) but not really a good performer for large datasets, and it is somewhat fragile if you try to access it from outside HA while HA is running.

I have been very happy with my ‘Rube Goldberg’ setup of using triggers to copy the inserts HA makes into the core HA tables over to my parallel history tables in PostgreSQL (sketched below). As HA grows its native history and stats functions, combined with a strong db server like MariaDB or PostgreSQL, hopefully ‘hacks’ like my route will become unnecessary.
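
For anyone curious, the trigger approach looks roughly like this. A sketch only, with an illustrative long_term_states table: the column names on HA's states table have changed across HA versions, so check your actual schema before copying anything:

```sql
-- Parallel long-term history table (illustrative columns; match your HA schema)
CREATE TABLE IF NOT EXISTS long_term_states (
    state_id     BIGINT,
    entity_id    TEXT,
    state        TEXT,
    last_updated TIMESTAMPTZ
);

-- Trigger function: copy each row HA inserts into states to the parallel table
CREATE OR REPLACE FUNCTION copy_state_to_history() RETURNS trigger AS $$
BEGIN
    INSERT INTO long_term_states (state_id, entity_id, state, last_updated)
    VALUES (NEW.state_id, NEW.entity_id, NEW.state, NEW.last_updated);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire after every insert HA makes into its states table
CREATE TRIGGER states_history_copy
    AFTER INSERT ON states
    FOR EACH ROW
    EXECUTE FUNCTION copy_state_to_history();
```

HA's purge then never touches the parallel table. The catch is that an HA update that changes the states schema can break the trigger (and, since a failing trigger aborts the insert, HA's own writes with it), so it is worth re-checking after every upgrade.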

Good hunting!

I run Postgres on my NAS, so the db is accessed over the network. It has been almost a year now and I have never noticed any lack of performance. Because the database is on my NAS, I also run a tool to back up the databases, so when HA crashes the database is still there running.
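
For what it's worth, one way to do that kind of external backup is a plain pg_dump in a scheduled job. A sketch, assuming the NAS answers at nas.local and the database and user are both named homeassistant:

```bash
# nightly logical backup of the HA database using PostgreSQL's pg_dump
pg_dump -h nas.local -U homeassistant -Fc homeassistant > ha_$(date +%F).dump
```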

That is another benefit I forgot. Yet another is that you (well, I) can more easily use the one MariaDB server to run other (non-HA) processes against, which is much less obvious with the SQLite one.