HASS Container to HASS OS: MariaDB migration

Hello,

First of all, sorry if this topic has been solved previously; I have searched various forums but have not found a definitive solution.
I would like to migrate from Container to OS, but my problem is the migration of my MariaDB database. I have tried using the phpMyAdmin import/export options and creating a dump file of the database… Nothing works.
Could somebody suggest a solution, please? My level with databases is zero.

Thanks.

Assuming you are using Windows to access the HA instance: in phpMyAdmin, highlight the homeassistant database, click on the Export tab and then click on the Export button. This will create a file named homeassistant.sql on your local PC, in the Downloads folder.

You then need to import this file into the new HA instance using phpMyAdmin; import instructions follow below.

The SQL file is a text file containing all the SQL commands and data needed to recreate the entire database structure and its contents.

See example below:

-- phpMyAdmin SQL Dump
-- version 5.2.0
-- https://www.phpmyadmin.net/
--
-- Host: core-mariadb:3306
-- Generation Time: Mar 12, 2023 at 05:43 PM
-- Server version: 10.6.10-MariaDB
-- PHP Version: 8.1.13

SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;

-- Database: homeassistant


-- Table structure for table events

CREATE TABLE events (
  event_id int(11) NOT NULL,
  event_type varchar(64) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  event_data longtext COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  origin varchar(32) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  origin_idx smallint(6) DEFAULT NULL,
  time_fired datetime(6) DEFAULT NULL,
  time_fired_ts double DEFAULT NULL,
  context_id varchar(36) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  context_user_id varchar(36) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  context_parent_id varchar(36) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  data_id int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Dumping data for table events

INSERT INTO events (event_id, event_type, event_data, origin, origin_idx, time_fired, time_fired_ts, context_id, context_user_id, context_parent_id, data_id) VALUES
(1, 'service_registered', NULL, NULL, 0, NULL, 1678636038.993366, '01GVB82ETH0V0BW4D4VPXWZF9J', NULL, NULL, 1),
(2, 'service_registered', NULL, NULL, 0, NULL, 1678636038.993396, '01GVB82ETHYBEWJ167BD65DRC7', NULL, NULL, 2),
(3, 'service_registered', NULL, NULL, 0, NULL, 1678636038.99341, '01GVB82ETHHF97F72VPJFZBT32', NULL, NULL, 3),
(4, 'service_registered', NULL, NULL, 0, NULL, 1678636038.99342, '01GVB82ETH1J8Y9755HV6PQE5G', NULL, NULL, 4),
(5, 'component_loaded', NULL, NULL, 0, NULL, 1678636040.798116, '01GVB82GJYRQ2FHFTQGMK1K3YQ', NULL, NULL, 5),
(6, 'component_loaded', NULL, NULL, 0, NULL, 1678636040.801571, '01GVB82GK1JZ1YKAQ8NQG12E1J', NULL, NULL, 6),
(7, 'component_loaded', NULL, NULL, 0, NULL, 1678636040.803127, '01GVB82GK3ZSPP3MHCHK14B211', NULL, NULL, 7),
(8, 'component_loaded', NULL, NULL, 0, NULL, 1678636040.821159, '01GVB82GKNNBK3Y529G0QGDRQE', NULL, NULL, 8),
(9, 'component_loaded', NULL, NULL, 0, NULL, 1678636040.821419, '01GVB82GKNZF9VQYFX80JVR0FF', NULL, NULL, 9),
(10, 'panels_updated', NULL, NULL, 0, NULL, 1678636040.824269, '01GVB82GKRN9JY1848JKNGNAWP', NULL, NULL, NULL),
(11, 'component_loaded', NULL, NULL, 0, NULL, 1678636040.894998, '01GVB82GNYVSKKHGNS8T7GGTHS', NULL, NULL, 10),
(12, 'service_registered', NULL, NULL, 0, NULL, 1678636040.988953, '01GVB82GRWQ3N0VVB6DX6H81S5', NULL, NULL, 11),
(13, 'service_registered', NULL, NULL, 0, NULL, 1678636040.989025, '01GVB82GRX4W29TKZWKPW9Z3JX', NULL, NULL, 12),
(14, 'service_registered', NULL, NULL, 0, NULL, 1678636040.98905, '01GVB82GRXEWQ42JD6WXQF7JDN', NULL, NULL, 13),
(15, 'service_registered', NULL, NULL, 0, NULL, 1678636040.989072, '01GVB82GRX0AAQV11K1WEXTWQH', NULL, NULL, 14),
(16, 'service_registered', NULL, NULL, 0, NULL, 1678636040.989092, '01GVB82GRXN1ZPYX8SCQYR8M2J', NULL, NULL, 15),

In the new HA instance, to import, first select the homeassistant database in the left pane, then:

  • delete the database by selecting Drop database
  • create a new database and name it homeassistant
  • go to Import, select the file you just exported, disable partial import, and click on the Import button.

Done; please let us know if this worked for you, as it might help other people in the future.


Hi! Thank you for your quick response. The details of my domestic server are:

  • Raspberry Pi 4
  • Raspbian
  • Hass Docker
  • MariaDB 10.11.2

I’m using SSH to access the server and my personal computer is a Mac.

So, could the MariaDB version be a problem?

No problem. You just need to use phpMyAdmin to move the database from one instance to another. The SQL file will be saved on your Mac.

Try the official phpMyAdmin Docker image.

Yes, I have the official phpMyAdmin docker image installed. This afternoon, I will try to follow your “manual” :smiley:
As soon as I have a result, I will write it here.

Ok, this is the result.
phpMyAdmin:

Add-on: phpMyAdmin
A web interface for the official MariaDB add-on

Add-on version: 0.8.3
You are running the latest version of this add-on.
System: Home Assistant OS 9.5 (aarch64 / raspberrypi4-64)
Home Assistant Core: 2023.3.2
Home Assistant Supervisor: 2023.03.1

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-phpmyadmin: starting
s6-rc: info: service init-nginx: starting
s6-rc: info: service init-nginx successfully started
s6-rc: info: service init-phpmyadmin successfully started
s6-rc: info: service php-fpm: starting
s6-rc: info: service php-fpm successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service nginx successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
[16:21:28] INFO: Starting PHP-FPM…
[16:21:29] INFO: Starting NGinx…
2023/03/13 16:28:46 [error] 249#249: *33 FastCGI sent in stderr: "PHP message: PHP Fatal error: Maximum execution time of 300 seconds exceeded in /var/www/phpmyadmin/vendor/phpmyadmin/sql-parser/src/Lexer.php on line 236" while reading response header from upstream, client: 172.30.32.2, server: a0d7b954-phpmyadmin, request: "POST /index.php?route=/import HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "192.168.0.100:8123"

Invalid config

The following integrations and platforms could not be set up:

Please check your config and logs.

:frowning:

What is the size of the export file homeassistant.sql?

I found the solution for your issue; it is in the documentation:

Option: upload_limit

By default, the size limit for uploads (for operations such as imports) is set to 64MB. This can be increased with this option, for example, 100 would be 100MB.

Go to the add-on and click on Configuration; change upload_limit to the size of your homeassistant.sql file plus, say, 10 MB.
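For example, to allow a dump of up to 200 MB, the add-on configuration would look like this (a sketch; pick a value above your own file size):

```yaml
# phpMyAdmin add-on → Configuration tab (value is in MB)
upload_limit: 200
```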

It also seems this is a performance issue: the request is taking too long to execute.

You need to change a value in a config file to overcome this limit. However, the system might still take a very long time to respond (possibly days, or it may never succeed).

This is a tricky one to resolve, as which file needs changing depends on the implementation. You need to find the PHP and/or NGINX files containing a setting named mysql.connect_timeout and/or max_execution_time. You will see a value of 300, which you need to change to e.g. 6000, or 0 for no limit.
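For reference, if you do locate the right file, the directives in question would look something like this (a sketch; the exact file path inside the add-on container is an assumption and varies by build):

```ini
; e.g. php.ini or a PHP-FPM pool override inside the container
max_execution_time = 6000      ; shipped default is 300; 0 removes the limit
mysql.connect_timeout = 6000
```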

That said, 300 seconds is already a long time for SQL operations (e.g. 1,000,000 operations should take less than a handful of seconds to complete); either your system is extremely slow or your database contains hundreds of millions of records.

Attempt the import by enabling Partial import. You then need to check how much data you are missing.

Another way is to split the SQL file into chunks: say, divide it into 4 roughly equal parts and import them one after the other in sequence.
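Since every statement in a phpMyAdmin dump ends with a `;` at the end of a line, a minimal statement-boundary splitter can be sketched in shell (the `chunk_N.sql` output names and the 16 MB default are assumptions):

```shell
# split_dump FILE [MAX_BYTES] — split a SQL dump into chunk_N.sql files,
# cutting only after complete statements (lines ending in ";") so each
# chunk can be imported on its own. Default chunk size is ~16 MB.
split_dump() {
  awk -v max="${2:-16000000}" '
    BEGIN { part = 1; out = "chunk_" part ".sql" }
    { print > out; size += length($0) + 1 }
    /;[ \t]*$/ && size >= max { close(out); part++; out = "chunk_" part ".sql"; size = 0 }
  ' "$1"
}
# Usage (filename is an assumption — use whatever your export is called):
#   split_dump homeassistant.sql
```

Each chunk can then be fed to the phpMyAdmin Import page in sequence, oldest first.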

These are all the tables and their relationships you need to populate. You can split your SQL file into 3 chunks, linked tables on the left, linked tables on the right and unlinked tables on the right (these are the smallest).

My SQL file is 160 MB. Is that too big a file for phpMyAdmin?
I'm only interested in preserving the data from the energy dashboard. Maybe I could delete all other data from the db?

In case you missed my earlier message you need to do the following:

Go to add-ons and open phpMyAdmin. Then go to configuration and enable Show unused optional configuration options.

Type 200 under upload_limit and click SAVE.

Restart phpMyAdmin under Info. Your upload should not fail.

Yes, that's the first thing I did: increase the upload limit. :blush:
I'm going to try it again following your recommendations. Thanks.

Okay, I confirm that the problem is related to either the file size or the performance issue, as you suggested. I need more than 300 seconds to complete the task. I don’t think the split option will work since 90% of the total file size is for a single table. How can I increase the time limit beyond 300 seconds? Is it possible to locate and edit the PHP file as you suggested, in order to increase the max_execution_time in HASSOS?
I am going to switch my dedicated HASS hardware from a Raspberry Pi to an Intel Celeron NUC in approximately one month, so I think that I will probably be able to resolve this issue by following your instructions. However, for other users, it would be interesting to explore the solution further to find a way to resolve this on “low-performance” hardware such as a Raspberry Pi.

I need to dedicate a bit of time to investigating where these files are, so I need to set up a Docker environment where I can go inside the container. This will take a bit of time; I will try to do it this weekend.

You can split the large table into chunks and import the chunks in sequence; however, unless you have to, the effort might not be worth the benefit, given you will soon have hardware that addresses the issue.

Ok, I used the split option like you suggested and it worked! :partying_face:
The program I used to split the SQL file exported by phpMyAdmin is SQLDumpSplitter3, which is available for Linux, Windows, and macOS. I split the database into 8 separate files, each 20 MB in size, but I recommend using a lower value, such as 16 MB.
At the end of each import, a confirmation message should be displayed indicating that the import operation was successful.

By the way, I apologize for my English. :grimacing:
I hope this has been useful to someone.

Glad it worked 🙂. A very useful utility I knew nothing about. Well done! I suggest you do an export from both systems and use a file-compare utility to make certain the datasets match, or that any differences make sense.
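A quick way to compare the two exports without a full diff is to count statements in each dump; a minimal sketch (filenames are assumptions):

```shell
# summarize DUMP — print quick counts from a SQL dump so the exports from
# the old and new system can be compared. Note that one multi-row INSERT
# counts as a single statement, so compare like with like.
summarize() {
  printf 'tables:  %s\n' "$(grep -c '^CREATE TABLE' "$1")"
  printf 'inserts: %s\n' "$(grep -c '^INSERT INTO' "$1")"
}
# Usage (filenames are assumptions):
#   summarize old_export.sql
#   summarize new_export.sql
```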

Great suggestion, but unfortunately it’s too late for me. :slight_smile:
At the moment, everything is working perfectly, even after migrating the MariaDB from the Hassio RPi to the Hassio NUC system by restoring either a complete backup or a partial backup of only the MariaDB add-on. Both options worked perfectly.

Thanks for this helpful post. I just converted from HAOS to HA Docker and needed to move my data over. This worked out great!

How did you manage to do this, given that Docker doesn't support add-ons?
I used HA in Docker on my Synology NAS, but because of missing features I'm switching to HA OS on Virtual Machine Manager. Everything works; the only issue is moving the historical energy data.