Migrating Home Assistant database from SQLite to MariaDB.. ONLY if you are VERY familiar with database administration

I bet this is not normal. Data is not compressed during migration, so in terms of storage size MySQL should end up roughly equal to SQLite.

Maybe not helpful, but: did you think about shrinking your database? There must be either plenty of data, perhaps because of a huge number of devices (or chatty ones), or a very large purge_keep_days setting (see Recorder - Home Assistant).

I monitored all the migrations done here and I can’t recall one user having such a huge database. Not sure where the limitations are, maybe “size of database equals required amount of RAM” or something.

Hi,

My SQLite3 DB keeps growing: 1.2 GB per week!!!

I don’t know what is responsible for this increase. I don’t have that many devices. Below are my settings for the Recorder. But even with auto_purge and auto_repack the DB didn’t shrink. I don’t know if I’m missing something.

The strange thing is that sqlite3mysql does not return any error, so I don’t know what is going wrong in order to correct it or work around it.

What I wanted was to move to MariaDB, keep records of most data for about 30 days, but keep some sensors for a year or even more. It would be great to be able to compare some data or get annual minimum and maximum values for some sensors.
I didn’t want to use InfluxDB; I preferred to have the data in a single DB.
Is there any way to achieve this?

recorder:
  purge_keep_days: 365
  auto_repack: true
  auto_purge: true
  commit_interval: 2

  exclude:
    domains:
      - device_tracker
      - media_player
      - uptime
      - time_date
    entity_globs:
      - sensor.date*
      - sensor.time*
      - sensor.uptime*
      - sensor.fibergateway*
      - sensor.tasmota_wifi_connect_count*
      - sensor.tasmota_mqtt_connect_count*
      - sensor.tasmota_last_restart_time*
      - sensor.tasmota_restart_reason*
      - sensor.tasmota_ssid*
    entities:
      - sensor.last_boot
      - sensor.load_15m
      - sensor.load_1m
      - sensor.load_5m
      - sensor.memory_free
      - sensor.memory_use
      - sensor.memory_use_percent
      - sensor.network_in_eth0
      - sensor.network_out_eth0
      - sensor.processor_use
      - sensor.gpu_temperature
      - sensor.pocophone_1_detected_activity
      - sensor.battery_level
      - sensor.pocophone_2_detected_activity
      - sensor.pocophone_2_battery_level

Given that your problem has to do with the growth of your SQLite database, and much less with exporting it, you might want to open a new topic in the setup section of the Home Assistant forum. The Configuration or Installation sub-forums may give you better results for an answer to your issue.

Regards

Hi, I encountered the same error message. Which SQL command did you use to fix this? Thanks a lot!

EDIT: Nevermind - found it:

  1. Connect to mariadb on the terminal within HA (command “mysql”)
  2. Then type “ALTER TABLE events MODIFY event_type varchar(64)”
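
For reference, the same fix as a copy-pasteable snippet (a sketch; homeassistant is the database name used elsewhere in this thread, and you should back up before altering):

USE homeassistant;
-- widen event_type so longer event type names fit
ALTER TABLE events MODIFY event_type VARCHAR(64);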

Hi,

I truncated the DB and now have 24 GB.

I tested one more time, with DEBUG, and got a “MemoryError”:

sqlite3mysql --sqlite-file home-assistant_v2.db --mysql-database homeassistant --mysql-host localhost  --mysql-port 3307 --mysql-user root  --ignore-duplicate-keys  -t states --debug


2023-01-10 16:55:11 INFO     Transferring table states
Traceback (most recent call last):
  File "c:\users\tiser\appdata\local\activestate\cache\6da97e55\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\tiser\appdata\local\activestate\cache\6da97e55\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\Scripts\sqlite3mysql.exe\__main__.py", line 7, in <module>
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\sqlite3_to_mysql\cli.py", line 204, in cli
    converter.transfer()
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\sqlite3_to_mysql\transporter.py", line 772, in transfer
    self._transfer_table_data(sql=sql, total_records=total_records)
  File "C:\Users\tiser\AppData\Roaming\Python\Python39\site-packages\sqlite3_to_mysql\transporter.py", line 662, in _transfer_table_data
    self._sqlite_cur.fetchall(),
MemoryError

I don’t know what I can do to transfer the states table to MariaDB…

The post with the instructions you are using includes the following statement regarding memory errors:

(optional, include --chunk INTEGER to run the migration in chunks of records. This may help large datasets with lower resources.)

I don’t know what value you need to use for this option, though. Experimentation will certainly find a value suitable to your needs.
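
For example, reusing the command from above with an arbitrary starting chunk size (a sketch; tune the number to your available RAM):

sqlite3mysql --sqlite-file home-assistant_v2.db --mysql-database homeassistant --mysql-host localhost --mysql-port 3307 --mysql-user root --ignore-duplicate-keys -t states --chunk 100000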

Regards

I used --chunk 1000 just in case.
My SQLite size was 7 GB (30 days).
I ran the conversion in VirtualBox on a Windows 10 laptop with an i5-8350U and it took around 2.5 h.

Afterwards I changed to purge_keep_days: 190.
It has been a month since then.
Initially the DB size was around 6.5 GB for around 2 weeks.
Then it jumped to 10 GB and has stayed at 10 GB for the last 10 days…

Check the recorder repack option next to recorder purge (see the docs).
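
For a one-off cleanup you can also call the purge service with repack enabled. A minimal sketch of the service call (keep_days here is just an example value):

service: recorder.purge
data:
  keep_days: 190
  repack: true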

Just FYI, I had to run this to install sqlite3-to-mysql. This is a more reliable command anyway.

python3 -m pip install sqlite3-to-mysql

@eddiewebb, can you update the instructions, please?

HA OS 9.4, Core 2023.1.6, Terminal & SSH 9.6.1, Python 3.10.9-r0

Hi guys. I’ve just found this thread as yesterday my SQLite DB corrupted, and upon investigation I found it was a whopping 24 GB. Luckily I have backups of my entire HassOS VM (I use Proxmox + Proxmox Backup Server), so I was able to get back online and only lost a few hours of data.

I’ve got a quick question: to migrate from the existing SQLite to MariaDB with no data loss, can I not just:

  1. Create a Home Assistant backup
  2. Delete my HassOS VM + set up a new instance
  3. Set up the new instance to use MariaDB and not SQLite
  4. Import the backup; the data should automatically go into the MariaDB instance.

Right?

Isn’t all this far easier than the migration instructions given in this thread?

Disclaimer: this could totally be off base as I’m not at all familiar with how all this works under the hood yet, but it seems wise to use the built-in tools to achieve the goal.

Is there a simple way to monitor the size of the SQLite DB and get some kind of notification if it goes over a certain size? I could write a small script, I suppose, but I’m wondering what others are doing to avoid these footguns.

No. Restoring a backup brings back the default SQLite database in the default setup; it will not import the data into MariaDB. Migration is needed.

Of course. For SQLite (the default database) you can simply use the File Size - Home Assistant integration. Just create a sensor and use it in an automation to notify yourself, as sketched below.
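
A minimal sketch of such an automation (the entity id is hypothetical and depends on what the File Size integration names your sensor; the threshold and notify service are arbitrary choices):

automation:
  - alias: "Recorder DB size warning"
    trigger:
      - platform: numeric_state
        entity_id: sensor.home_assistant_v2_db_size  # hypothetical File Size sensor
        above: 2000  # MB; pick your own threshold
    action:
      - service: notify.notify
        data:
          message: "home-assistant_v2.db is over 2 GB, consider a purge/repack."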

Worked a treat. Thank you!

With the new updates in HAOS, is there a good reason to change to MariaDB, or are they now nearly “the same”?

Since converting to MariaDB last week I’ve noticed that the times in my log are not correct. Events regularly get reported in the future and a lot of events happen at the same time. I can’t see anything in the log files. Any idea what the problem may be?
[screenshot at 19:17…]

Wrong time settings?
Check How to set time?

Otherwise, check whether the ntpd service is running and what time it reports.

Tx. Will give that a try.

I had the exact same issue: weird timestamps in the future for all events, a lot at the same time. Time settings for Home Assistant were just fine.

I managed to fix this by changing the column type for last_changed_ts and last_updated_ts in the states table from float to double. (A 32-bit float does not have enough precision for Unix epoch timestamps, so the values get rounded, which is why so many events appeared at the same moment or in the future.)
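
For anyone else hitting this, the statements were along these lines (a sketch; back up the database first):

ALTER TABLE states MODIFY last_changed_ts DOUBLE;
ALTER TABLE states MODIFY last_updated_ts DOUBLE;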

That seems to have fixed it. Thanks!