2023.2: How can I Assist?

This is super easy.
I too migrated from the outdated Synology solution to a Docker container with MariaDB 10.10.2.

I set up the new DB using the same names and credentials as on my Synology NAS before.
Then I did the following steps:

  • shut down Home Assistant
  • connect to old DB via HeidiSQL (freeware)
  • do a SQL export of the “homeassistant” DB (including database and table creation scripts)
  • connect to new DB via HeidiSQL
  • execute the previously exported SQL script via HeidiSQL
  • change the connection string in Home Assistant config
  • start Home Assistant
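
For anyone without HeidiSQL, the same export/import can be sketched with the stock MariaDB command-line tools (the hostnames, user, and database name below are placeholders for your own setup):

```shell
# Dump the old database, including the CREATE DATABASE / CREATE TABLE
# statements; --single-transaction takes a consistent snapshot without
# locking the tables for the whole dump.
mysqldump --host=old-nas.local --user=homeassistant -p \
  --single-transaction --databases homeassistant > homeassistant.sql

# Replay the dump into the new MariaDB container.
mysql --host=new-docker-host.local --user=homeassistant -p < homeassistant.sql
```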

I’m not saying it’s discouraged, I’m saying it’s a “use at your own risk, because you’re accessing things that aren’t meant to be touched” situation. It’s like the unofficial CSS attributes for the front end. Use them all you want, but they may change. I’d argue the database is in the same position. Use it at your own risk, but it may change. However, it is documented. So it probably should have been listed as a breaking change that could have affected SQL.


I started the upgrade to 2023.2 around 6pm yesterday and it completed sometime overnight. Some upgrades go smoothly, but I’m afraid this wasn’t one of those. I am using the MariaDB addon (on a Home Assistant Blue). The HA core update was quick, but it resulted in the “critical” MariaDB update warning being prominently displayed in the settings screen. I had been on MariaDB addon v2.5.1 and it wanted me to update to patch version v2.5.2. I’m in the habit of responding promptly to critical warnings, so I started the update. Perhaps I should have spent 30 mins reading this thread first.

Here’s a summary of my bumpy ride, which ultimately ended well. Maybe it will help another user to be patient and not to nuke their DB. I’m not posting this to be negative. I just want to leave it here as reassurance to anyone who is in the same situation and is considering drastic action against their database.

The addon update process took about two hours, during which I had a spinner icon over the MariaDB entry on the updates page. During these two hours HA core seemed to restart itself twice. The third time it restarted, the update seemed to be complete, with the addon showing the new version, so I continued to update any other addons that needed it, and then did an HA core restart.

When HA came back up, I had the notification about a database update being in progress and that system performance might be degraded. Any attempt to restart HA was met with an error that restarts are not permitted while a database update is in progress. This was naturally confusing, because I could verify that the MariaDB addon update was complete. After some hunting around the forums, I found that if I restarted the MariaDB addon it would allow me to restart the system, but that after each reboot the database update notification was back. I was pretty convinced that I was looking at a bug, and that something was setting this DB update flag erroneously. Some old posts on the forums reported that nuking the database and starting over was the only way forward.

I don’t care much about historical sensor data, but I do care about long-term statistics and energy stats, so I decided to look at exporting the DB before attempting to destroy and recreate it. I then noticed that the DB was about 18GB. This size seems to be the product of lots of sensors and a recorder history of 100 days. I have various entities excluded from the recorder history, but DB size has never really been an issue on the Home Assistant Blue with MariaDB, so I don’t maintain the list of excluded entities much. I decided that doing a mysqldump on 18GB would probably take many hours, so I instead just reconfigured my recorder history to 30 days and went to bed, hoping the DB size would be a bit more manageable in the morning. Thankfully, this morning the whole problem seems to have gone away.
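
The retention change itself is a one-line tweak in configuration.yaml; this reflects what I set (the recorder default is 10 days):

```yaml
recorder:
  # keep 30 days of detailed history; older rows go on the next nightly purge
  purge_keep_days: 30
```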

On reflection, I think that HA core must have been running a data migration over the database, independently and completely unconnected to the MariaDB critical update, but I couldn’t find anything concrete online to support this idea (with a better idea of what to look for, I now see a post in the developer blog about a schema update). There was also nothing you might expect in the UI, like a progress bar, to indicate a long-running data migration was happening. That it was timed alongside a major critical DBMS upgrade made the notification message very confusing, and led me to conclude that the MariaDB update had corrupted itself somehow. Probably the only thing that prevented me from doing something drastic like destroying my database and starting over was the fact that it was 11pm and I wanted to go to bed.

So my takeaway from this is that if I ever see the “database update in progress” notification I should know that it means there is a background data migration running deep down in HA Core and that it might take hours or a few days to complete, and to just be patient and wait for it to finish before attempting to proceed with any other updates that might require a restart.

I wonder if a root cause of this is that development and testing instances of HA are likely to have smaller databases, and that databases on the order of 18 GB are likely only to be found in long-running production instances. If the database were one or two orders of magnitude smaller, the data migration would be very quick, and no expectation of UI feedback on progress would arise.


Just upgraded from 2023.2.0 to 2023.2.1 and none of my custom lovelace cards (installed via HACS) are loading anymore…

Anywhere that I have a custom card on the dashboard it shows “Custom element doesn’t exist:” and then card name…

Can’t find anything pertinent in the logs…

EDIT: Appears to be chrome related (running Version 109.0.5414.119); and yes closing and restarting chrome fixes it.

So I’m getting two database-related errors; see below.

Logger: homeassistant.components.recorder.core
Source: components/recorder/migration.py:993
Integration: Recorder (documentation, issues)
First occurred: 12:35:43 (1 occurrences)
Last logged: 12:35:43

Database error during schema migration
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
    res = self._query(query)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
    db.query(q)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
    _mysql.connection.query(self, query)
MySQLdb.OperationalError: (1205, 'Lock wait timeout exceeded; try restarting transaction')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/recorder/core.py", line 742, in _migrate_schema_and_setup_run
    migration.migrate_schema(
  File "/usr/src/homeassistant/homeassistant/components/recorder/migration.py", line 154, in migrate_schema
    _apply_update(hass, engine, session_maker, new_version, current_version)
  File "/usr/src/homeassistant/homeassistant/components/recorder/migration.py", line 856, in _apply_update
    _migrate_columns_to_timestamp(session_maker, engine)
  File "/usr/src/homeassistant/homeassistant/components/recorder/migration.py", line 993, in _migrate_columns_to_timestamp
    result = session.connection().execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/future/engine.py", line 280, in execute
    return self._execute_20(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20
    return meth(self, args_10style, kwargs_10style, execution_options)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
    res = self._query(query)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
    db.query(q)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
    _mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction')
[SQL: UPDATE states set last_updated_ts=IF(last_updated is NULL,0,UNIX_TIMESTAMP(CONVERT_TZ(last_updated,'+00:00',@@global.time_zone)) ), last_changed_ts=UNIX_TIMESTAMP(CONVERT_TZ(last_changed,'+00:00',@@global.time_zone)) where last_updated_ts is NULL LIMIT 250000;]
(Background on this error at: https://sqlalche.me/e/14/e3q8)



Logger: homeassistant.components.recorder.util
Source: components/recorder/util.py:130
Integration: Recorder (documentation, issues)
First occurred: 12:35:23 (1 occurrences)
Last logged: 12:35:23

Error executing query: (MySQLdb.OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction') [SQL: UPDATE states set last_updated_ts=IF(last_updated is NULL,0,UNIX_TIMESTAMP(CONVERT_TZ(last_updated,'+00:00',@@global.time_zone)) ), last_changed_ts=UNIX_TIMESTAMP(CONVERT_TZ(last_changed,'+00:00',@@global.time_zone)) where last_updated_ts is NULL LIMIT 250000;] (Background on this error at: https://sqlalche.me/e/14/e3q8)
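
For what it’s worth, error 1205 means another transaction held the row locks longer than InnoDB’s innodb_lock_wait_timeout (50 seconds by default). One possible mitigation, not an official fix, is to raise that timeout while the migration runs; these are stock MariaDB statements, run from any client with sufficient privileges:

```sql
-- Check the current setting (MariaDB default is 50 seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Temporarily raise it for new connections while the migration runs
-- (requires SUPER privilege; revert it afterwards)
SET GLOBAL innodb_lock_wait_timeout = 600;
```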

Here is the PR

Bump pymodbus library to V3.1.0 by janiversen · Pull Request #85961 · home-assistant/core (github.com)

That means you restarted HA without waiting for the database migration to finish.

So the migration may still be working, or you caused an issue where the lock hasn’t been released. If you restarted your hardware, you most likely corrupted the database. It would probably be worthwhile to restore from a backup and try again. But be patient and wait for the migration to complete.
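
Before restoring from backup, it can be worth confirming whether the migration is actually still running. A couple of stock MariaDB statements will show that:

```sql
-- A long-running UPDATE ... last_updated_ts row with a growing Time
-- column means the migration is still making progress
SHOW FULL PROCESSLIST;

-- Lock waits and the transactions holding them show up here
SHOW ENGINE INNODB STATUS;
```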

Of all the times it has failed, I can only imagine the update to 2023.2.1 caused a reboot; the other times, not to my knowledge. Also not sure how long it should take; it has been running for hours and hours :frowning:

Yes, it’s a database migration; if you have a large database, it can take hours.

Anyone else having issues with Synology DSM?
I get “TypeError: ClientSession._request() got an unexpected keyword argument ‘max_version’”


Hi
Have there been any changes in relation to the Synology integration?
Since the last update, I have trouble with this integration.
I haven’t made any changes on the Synology side, which runs DSM 7.1.1-42962 Update 3.
Appreciate any help


Logger: homeassistant.config_entries
Source: components/synology_dsm/common.py:94 
First occurred: 13:36:28 (1 occurrences) 
Last logged: 13:36:28

Error setting up entry Ds216Garbsen for synology_dsm
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 382, in async_setup
    result = await component.async_setup_entry(hass, self)
  File "/usr/src/homeassistant/homeassistant/components/synology_dsm/__init__.py", line 68, in async_setup_entry
    await api.async_setup()
  File "/usr/src/homeassistant/homeassistant/components/synology_dsm/common.py", line 94, in async_setup
    await self.dsm.surveillance_station.update()
  File "/usr/local/lib/python3.10/site-packages/synology_dsm/api/surveillance_station/__init__.py", line 25, in update
    raw_data = await self._dsm.get(self.CAMERA_API_KEY, "List", max_version=7)
  File "/usr/local/lib/python3.10/site-packages/synology_dsm/synology_dsm.py", line 207, in get
    return await self._request("GET", api, method, params, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/synology_dsm/synology_dsm.py", line 282, in _request
    response = await self._execute_request(request_method, url, params, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/synology_dsm/synology_dsm.py", line 308, in _execute_request
    response = await self._session.get(url, params=params, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 919, in get
    self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
TypeError: ClientSession._request() got an unexpected keyword argument 'max_version'

Ok, it’s definitely related to 2023.2.
I just rolled back and restored from my backup, and the Synology integration is back to work.
So I’ll wait with the upgrade till the next release. Let’s hope the cause will be discovered soon.
Anyway, no big deal, thanks to the backup.


That’s nothing compared to mine, and there is no way to upgrade it.
It is on my Synology, for RAID reasons, and it is version 10.3.32-1040.
That is the latest version available according to them.

I couldn’t do the migration; the index can’t be created because there are too many rows in the states table.

I was also sad when my Android tablet stopped working properly after the last HA update :frowning:

Hm…
I’m not sure what I risk by creating a SELECT in SQL (other than non-working sensors when the devs decide to change the DB without warning…)? I’m not changing anything in the DB, I’m only reading.
Some things are either hard to get in HA or not possible at all. I have searched, but I haven’t yet found an alternative HA solution for my SQL needs: getting the MIN and MAX temperature from midnight till the current time. Min/max for the last xy hours is not usable for me.
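
To illustrate the “since midnight” idea, here is a self-contained Python/SQLite sketch against a mock states table (the table and column names are simplified assumptions, not the exact recorder schema):

```python
import sqlite3
from datetime import datetime, time

# Throwaway table shaped loosely like the recorder's states table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (entity_id TEXT, state REAL, last_updated_ts REAL)")

midnight_ts = datetime.combine(datetime(2023, 2, 5), time.min).timestamp()
conn.executemany(
    "INSERT INTO states VALUES (?, ?, ?)",
    [
        ("sensor.outdoor_temp", 1.5, midnight_ts - 3600),   # yesterday: ignored
        ("sensor.outdoor_temp", -2.0, midnight_ts + 3600),  # today 01:00
        ("sensor.outdoor_temp", 4.5, midnight_ts + 43200),  # today 12:00
    ],
)

# MIN and MAX from midnight until now -- the shape of query an SQL
# sensor would run.
lo, hi = conn.execute(
    "SELECT MIN(state), MAX(state) FROM states "
    "WHERE entity_id = ? AND last_updated_ts >= ?",
    ("sensor.outdoor_temp", midnight_ts),
).fetchone()
print(lo, hi)  # -2.0 4.5
```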


Hm. How many do you actually have? Mine is ~1.5M rows for 2 weeks of “detailed” history…

For actual long-term detailed history, I use InfluxDB. I was keeping 2 weeks for dashboard graphs, but now that statistics are mostly used, I’m not sure it’s even necessary to keep so many days.

The risk is that the tables change and you have to update your query.

I think - and that’s only my personal opinion:
The HA devs do listen to what people want.

Just take into account the response to the state-icon colors from the December release and the huge number of “Why not make the icons configurable?” requests.

→ That has now been implemented. And honestly: way faster than I could have imagined.

In addition to that:
HA is open source. Meaning, in theory everyone can participate and start implementing what they think HA should provide.

But: there’s also a company behind this project, with people working full time - and they want to be paid for doing the job.
So NabuCasa needs to take into account what technologies could be useful to people.
People who might not want to write YAML configs, who don’t want to spend time in front of their smartphone to trigger something.
There is a huge user base that wants voice control - even those who haven’t found their way to Home Assistant yet.
And if only a certain share of those people signed up for the NabuCasa Cloud, that’s what pays the devs.

So… the team needs to weigh all of these considerations.
And while we might not agree with some decisions, others do.

Personally, I’m really looking forward to the voice control - and time will tell whether it will replace my current solution with Echo / Google etc.


Clear your browser cache.


Way too much: 24M rows.
I know, it is stupid; I’m not doing any purges. I wanted to see a full year.
But those 24M rows are just the last 8 months.