I pulled the new docker image some time this afternoon and recreated and restarted my instance. Since then my HA docker container is stuck at
2021-04-08 23:31:48 WARNING (MainThread) [homeassistant.bootstrap] Waiting on integrations to complete setup: recorder
The blog warned that the migration could take multiple minutes depending on DB size and platform CPU. Now, I retain a year of recorder data in a MariaDB instance running on my NAS, independent of the Raspberry Pi 4 running my HA instance. However, while I expected the migration to take dozens of minutes, maybe even a couple of hours, at this point it has been running for almost half a day.
Is this still a reasonable timeframe, or did something go wrong and should I try something else instead?
It took about 2 hours for me, and I'm running on a NUC with an i5 and 8 GB of RAM. So I would give it some time, but maybe some others know more about it.
I am also half a day in. There should at least be some feedback on progress.
HA is prone to getting stuck in loops, and the database is very prone to getting corrupted. How are we meant to tell what's going on with no feedback and (as I understand it) no way of interrogating the process?
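One crude way to interrogate it from outside: the recorder keeps a `schema_changes` table (that table name is my reading of the recorder schema, so treat it as an assumption) and appends a row each time it upgrades to a new schema version. A minimal sketch using Python's built-in `sqlite3` against the default `home-assistant_v2.db`; the same query can be run from the `mysql` client if you're on MariaDB:

```python
import sqlite3

def current_schema_version(db_path: str) -> int:
    """Return the latest recorder schema version, or 0 if none recorded.

    Assumes the recorder's schema_changes table with columns
    (change_id, schema_version, changed) -- an assumption based on
    the recorder schema, not official documentation.
    """
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT schema_version FROM schema_changes "
            "ORDER BY change_id DESC LIMIT 1"
        ).fetchone()
    return row[0] if row else 0
```

Polling this before and during the migration at least tells you whether the version number is still advancing or has stalled.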
Looks like it took about 24 hours.
It wasn't constrained by CPU or memory. It was using very few resources, just slow to do whatever it is doing, with no feedback.
Took like 36 hours or so, but it appears to have completed successfully. I didn't see any CPU or memory constraints on my DB host, but I didn't check IO. It is using spinning NAS drives, so it's possible that that was the bottleneck.
Regardless, it would be great to have some progress indicator somewhere for the next migration. Or even to be able to run the migration with the system online, so I don't have to sit in the dark for multiple days while none of my automations are working.
Same for me. I have my history sitting in MariaDB on a Synology NAS (RS2418+) and my HA is a VM on an ESXi server. I've seen virtually no CPU consumption on the HA side or on the Synology side. There was no network traffic congestion (up to 5% between the ESXi server and the NAS). No excessively high disk transfer rates (very low reads up to 400 KBps, somewhat higher writes up to 100 MBps, but within 20% of what the volume is able to handle). HOWEVER, there was one parameter that was saturated: volume usage. During conversion it was constantly in the 95%+ range. The volume is SSD RAID 0, so it is quite capable of handling high transfer rates and IOPS… I don't understand how it could be saturated.
This all sounds very problematic for my config. I'm hosting my VM on ESXi with a Docker-based MariaDB add-on that has a 32 GB database. The upgrade has been running for 7 hours now. At this rate it sounds like a week-long upgrade. I'll stop the upgrade and restore from backup tomorrow, and wait until the migration process is more verbose or quicker.
Using the default SQLite config, my 1 GB DB took less than 60 s.
This is on a fast quad-core system with an SSD; there was 100% core saturation during that time, with some corresponding disk IO.
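That tracks with migration time scaling with row count rather than raw CPU speed alone: schema changes like adding a column or rebuilding an index have to touch every row in the table. A toy illustration with Python's built-in `sqlite3` (synthetic table and a hypothetical index name, not the recorder's actual migration steps):

```python
import sqlite3
import time

# Toy illustration: an index build must scan every row, so its cost grows
# with table size -- a stand-in for the kind of work a schema migration
# does on the real states/events tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (state_id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany(
    "INSERT INTO states (state) VALUES (?)",
    (("on",) for _ in range(200_000)),
)
conn.commit()

start = time.perf_counter()
conn.execute("CREATE INDEX ix_states_state ON states (state)")
elapsed = time.perf_counter() - start
print(f"indexed 200k rows in {elapsed:.2f}s")
```

Scale the row count up to a year of recorder data on spinning disks and the hours people report above stop looking surprising.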