Yep, tried to force a check and nothing; the front end also says no updates.
I've been running MariaDB on my Synology NAS for a considerable time now with no issues. Looking at the package on the NAS, it's version 10.3.29, the latest version available for the Synology NAS I'm using. It's not one of the versions listed. Should it still work?
These things take time. Stop hammering the server and it will probably happen faster.
Many of us like updates and trying new stuff, but hammering at it is counter-productive. Relax, have a cup of tea instead of those strong coffees.
Haha, you got me spot on with the coffee, but I'm not hammering. I tried a force update once only.
Yep, but you and many others probably.
Such is distributed rollout. It'll happen.
For those using Xiaomi BLE sensors, this release also adds support for the Xiaomi BLE door, opening, and motion sensors.
Older versions of MariaDB will still work but may hit scaling problems once you have > 1 GiB of data or low RAM (YMMV; it can happen at smaller or larger numbers depending on how fast your disk is). For best performance you want to be running MariaDB 10.6.9 or later (add-on version 2.5.2 or later).
If you have a lot of data in the database the older versions may take multiple minutes or hours to answer history queries or purge the database.
The big risk is that you have a long running history query or purge still running in the background when doing the update to 2023.2.0 since there is a database migration that can fail if the long running query has the database locked.
If you are running an older version, you can check SHOW PROCESSLIST (https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html) before updating to make sure there are no long-running queries, so you won't hit the issue during the update. That will not prevent it from happening in production later, but the consequences are much lower if it happens after the migration.
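To make the check above concrete, here's a minimal, hypothetical Python sketch (the function name and 60-second threshold are my own choices, not anything from the recorder) that flags long-running queries given rows in SHOW FULL PROCESSLIST column order; fetching the rows from a live server is left out:

```python
# Sketch: flag long-running queries from SHOW FULL PROCESSLIST-style rows.
# Each row follows the MySQL/MariaDB processlist column order:
#   (Id, User, Host, db, Command, Time, State, Info)

def long_running_queries(rows, threshold_s=60):
    """Return rows whose query has been running longer than threshold_s seconds."""
    flagged = []
    for row in rows:
        command, runtime, info = row[4], row[5], row[7]
        # 'Sleep' entries are idle connections, and rows with no Info
        # text are not executing a statement, so skip both.
        if command != "Sleep" and info and runtime > threshold_s:
            flagged.append(row)
    return flagged

# Example rows (illustrative values): an idle connection and a purge
# that has been running for 742 seconds.
rows = [
    (3, "hass", "core:51012", "homeassistant", "Sleep", 5, "", None),
    (7, "hass", "core:51013", "homeassistant", "Query", 742, "Sending data",
     "DELETE FROM states WHERE last_updated < ..."),
]
print(long_running_queries(rows))  # only the 742-second purge is flagged
```

If this returns anything before you update, the safest move is to let those queries finish (or stop Home Assistant cleanly) before starting the upgrade.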
For the IoTaWatt breaking change, I'm using the Accumulated sensor for each circuit in the Energy dashboard. When I change the entities in the Energy dashboard to the newly created integrators, will it break/delete historical data?
I got an error in the log during startup; should I be concerned?
I'm running mariadb…
mariadb --version
mariadb Ver 15.1 Distrib 10.10.2-MariaDB, for debian-linux-gnu (x86_64) using EditLine wrapper
And here is the beginning of the error…
2023-02-01 22:02:03.496 ERROR (DbWorker_0) [homeassistant.components.recorder.util] Error executing query: (MySQLdb.OperationalError) (1054, "Unknown column 'events.time_fired_ts' in 'field list'")
[SQL: WITH anon_1 AS
(SELECT anon_2.context_id AS context_id
FROM (SELECT events.context_id AS context_id
FROM events LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id
WHERE events.time_fired_ts > %s AND events.time_fired_ts < %s AND events.event_type IN (%s, %s, %s) AND (JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s)) UNION ALL SELECT states.context_id AS context_id
FROM states FORCE INDEX (ix_states_entity_id_last_updated_ts)
WHERE states.last_updated_ts > %s AND states.last_updated_ts < %s AND states.entity_id IN (%s)) AS anon_2 GROUP BY anon_2.context_id)
SELECT events.event_id AS event_id, events.event_type AS event_type, events.event_data AS event_data, events.time_fired_ts AS time_fired_ts, events.context_id AS context_id, events.context_user_id AS context_user_id, events.context_parent_id AS context_parent_id, event_data.shared_data AS shared_data, %s AS state_id, %s AS state, %s AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only
FROM events LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id
WHERE events.time_fired_ts > %s AND events.time_fired_ts < %s AND events.event_type IN (%s, %s, %s) AND (JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s)) UNION ALL SELECT %s AS event_id, %s AS event_type, %s AS event_data, states.last_updated_ts AS time_fired_ts, states.context_id AS context_id, states.context_user_id AS context_user_id, states.context_parent_id AS context_parent_id, %s AS shared_data, states.state_id AS state_id, states.state AS state, states.entity_id AS entity_id, CASE JSON_EXTRACT(CAST(state_attributes.shared_attrs AS CHAR), %s) WHEN 'null' THEN NULL ELSE JSON_UNQUOTE(JSON_EXTRACT(CAST(state_attributes.shared_attrs AS CHAR), %s)) END AS icon, CASE JSON_EXTRACT(CAST(states.attributes AS CHAR), %s) WHEN 'null' THEN NULL ELSE JSON_UNQUOTE(JSON_EXTRACT(CAST(states.attributes AS CHAR), %s)) END AS old_format_icon, %s AS context_only
FROM states FORCE INDEX (ix_states_entity_id_last_updated_ts) LEFT OUTER JOIN states AS old_state ON states.old_state_id = old_state.state_id LEFT OUTER JOIN state_attributes ON states.attributes_id = state_attributes.attributes_id
WHERE states.last_updated_ts > %s AND states.last_updated_ts < %s AND old_state.state_id IS NOT NULL AND states.state != old_state.state AND states.state IS NOT NULL AND ((states.entity_id NOT LIKE %s AND states.entity_id NOT LIKE %s AND states.entity_id NOT LIKE %s) OR ((states.entity_id LIKE %s) AND (state_attributes.shared_attrs NOT LIKE %s OR states.attributes NOT LIKE %s))) AND (states.last_updated_ts = states.last_changed_ts OR states.last_changed_ts IS NULL) AND states.entity_id IN (%s) UNION ALL SELECT events.event_id AS event_id, events.event_type AS event_type, events.event_data AS event_data, events.time_fired_ts AS time_fired_ts, events.context_id AS context_id, events.context_user_id AS context_user_id, events.context_parent_id AS context_parent_id, event_data.shared_data AS shared_data, %s AS state_id, %s AS state, %s AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only
FROM anon_1 LEFT OUTER JOIN events FORCE INDEX (ix_events_context_id) ON anon_1.context_id = events.context_id LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id UNION ALL SELECT %s AS event_id, %s AS event_type, %s AS event_data, states.last_updated_ts AS time_fired_ts, states.context_id AS context_id, states.context_user_id AS context_user_id, states.context_parent_id AS context_parent_id, %s AS shared_data, states.state_id AS state_id, states.state AS state, states.entity_id AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only
FROM anon_1 LEFT OUTER JOIN states FORCE INDEX (ix_states_context_id) ON anon_1.context_id = states.context_id ORDER BY time_fired_ts]
[parameters: (1675198923.423, 1675285323.466992, 'script_started', 'automation_triggered', 'logbook_entry', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."device_id"', '$."device_id"', '"5f1ea538772ee4762fd1943f09aa925e"', 1675198923.423, 1675285323.466992, 'sensor.gmail_lokanx_important', 0, None, None, None, None, None, 1675198923.423, 1675285323.466992, 'script_started', 'automation_triggered', 'logbook_entry', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."device_id"', '$."device_id"', '"5f1ea538772ee4762fd1943f09aa925e"', None, None, None, None, '$."icon"', '$."icon"', '$."icon"', '$."icon"', None, 1675198923.423, 1675285323.466992, 'counter.%', 'proximity.%', 'sensor.%', 'sensor.%', '%"unit_of_measurement":%', '%"unit_of_measurement":%', 'sensor.gmail_lokanx_important', 0, None, None, None, None, '1', None, None, None, None, None, None, '1')]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
2023-02-01 22:02:03.499 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection] [139842420118368] Error handling message: Unknown error (unknown_error) from 192.168.1.92 (Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36)
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb.OperationalError: (1054, "Unknown column 'events.time_fired_ts' in 'field list'")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/websocket_api/decorators.py", line 27, in _handle_async_response
await func(hass, connection, msg)
File "/usr/src/homeassistant/homeassistant/components/logbook/websocket_api.py", line 379, in ws_event_stream
last_event_time = await _async_send_historical_events(
File "/usr/src/homeassistant/homeassistant/components/logbook/websocket_api.py", line 106, in _async_send_historical_events
message, last_event_time = await _async_get_ws_stream_events(
File "/usr/src/homeassistant/homeassistant/components/logbook/websocket_api.py", line 171, in _async_get_ws_stream_events
return await get_instance(hass).async_add_executor_job(
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/logbook/websocket_api.py", line 202, in _ws_stream_get_events
events = event_processor.get_events(start_day, end_day)
File "/usr/src/homeassistant/homeassistant/components/logbook/processor.py", line 164, in get_events
return self.humanify(yield_rows(session.execute(stmt)))
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1714, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/lambdas.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1054, "Unknown column 'events.time_fired_ts' in 'field list'")
[SQL: WITH anon_1 AS
(SELECT anon_2.context_id AS context_id
FROM (SELECT events.context_id AS context_id
FROM events LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id
WHERE events.time_fired_ts > %s AND events.time_fired_ts < %s AND events.event_type IN (%s, %s, %s) AND (JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s)) UNION ALL SELECT states.context_id AS context_id
FROM states FORCE INDEX (ix_states_entity_id_last_updated_ts)
WHERE states.last_updated_ts > %s AND states.last_updated_ts < %s AND states.entity_id IN (%s)) AS anon_2 GROUP BY anon_2.context_id)
SELECT events.event_id AS event_id, events.event_type AS event_type, events.event_data AS event_data, events.time_fired_ts AS time_fired_ts, events.context_id AS context_id, events.context_user_id AS context_user_id, events.context_parent_id AS context_parent_id, event_data.shared_data AS shared_data, %s AS state_id, %s AS state, %s AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only
FROM events LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id
WHERE events.time_fired_ts > %s AND events.time_fired_ts < %s AND events.event_type IN (%s, %s, %s) AND (JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) AS CHAR) IN (%s) OR JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IS NOT NULL AND CAST(JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) AS CHAR) IN (%s)) UNION ALL SELECT %s AS event_id, %s AS event_type, %s AS event_data, states.last_updated_ts AS time_fired_ts, states.context_id AS context_id, states.context_user_id AS context_user_id, states.context_parent_id AS context_parent_id, %s AS shared_data, states.state_id AS state_id, states.state AS state, states.entity_id AS entity_id, CASE JSON_EXTRACT(CAST(state_attributes.shared_attrs AS CHAR), %s) WHEN 'null' THEN NULL ELSE JSON_UNQUOTE(JSON_EXTRACT(CAST(state_attributes.shared_attrs AS CHAR), %s)) END AS icon, CASE JSON_EXTRACT(CAST(states.attributes AS CHAR), %s) WHEN 'null' THEN NULL ELSE JSON_UNQUOTE(JSON_EXTRACT(CAST(states.attributes AS CHAR), %s)) END AS old_format_icon, %s AS context_only
FROM states FORCE INDEX (ix_states_entity_id_last_updated_ts) LEFT OUTER JOIN states AS old_state ON states.old_state_id = old_state.state_id LEFT OUTER JOIN state_attributes ON states.attributes_id = state_attributes.attributes_id
WHERE states.last_updated_ts > %s AND states.last_updated_ts < %s AND old_state.state_id IS NOT NULL AND states.state != old_state.state AND states.state IS NOT NULL AND ((states.entity_id NOT LIKE %s AND states.entity_id NOT LIKE %s AND states.entity_id NOT LIKE %s) OR ((states.entity_id LIKE %s) AND (state_attributes.shared_attrs NOT LIKE %s OR states.attributes NOT LIKE %s))) AND (states.last_updated_ts = states.last_changed_ts OR states.last_changed_ts IS NULL) AND states.entity_id IN (%s) UNION ALL SELECT events.event_id AS event_id, events.event_type AS event_type, events.event_data AS event_data, events.time_fired_ts AS time_fired_ts, events.context_id AS context_id, events.context_user_id AS context_user_id, events.context_parent_id AS context_parent_id, event_data.shared_data AS shared_data, %s AS state_id, %s AS state, %s AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only
FROM anon_1 LEFT OUTER JOIN events FORCE INDEX (ix_events_context_id) ON anon_1.context_id = events.context_id LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id UNION ALL SELECT %s AS event_id, %s AS event_type, %s AS event_data, states.last_updated_ts AS time_fired_ts, states.context_id AS context_id, states.context_user_id AS context_user_id, states.context_parent_id AS context_parent_id, %s AS shared_data, states.state_id AS state_id, states.state AS state, states.entity_id AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only
FROM anon_1 LEFT OUTER JOIN states FORCE INDEX (ix_states_context_id) ON anon_1.context_id = states.context_id ORDER BY time_fired_ts]
[parameters: (1675198923.423, 1675285323.466992, 'script_started', 'automation_triggered', 'logbook_entry', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."device_id"', '$."device_id"', '"5f1ea538772ee4762fd1943f09aa925e"', 1675198923.423, 1675285323.466992, 'sensor.gmail_lokanx_important', 0, None, None, None, None, None, 1675198923.423, 1675285323.466992, 'script_started', 'automation_triggered', 'logbook_entry', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."entity_id"', '$."entity_id"', '"sensor.gmail_lokanx_important"', '$."device_id"', '$."device_id"', '"5f1ea538772ee4762fd1943f09aa925e"', None, None, None, None, '$."icon"', '$."icon"', '$."icon"', '$."icon"', None, 1675198923.423, 1675285323.466992, 'counter.%', 'proximity.%', 'sensor.%', 'sensor.%', '%"unit_of_measurement":%', '%"unit_of_measurement":%', 'sensor.gmail_lokanx_important', 0, None, None, None, None, '1', None, None, None, None, None, None, '1')]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
You'll need to post the entire query and traceback for anyone to give you a good answer (or open a GitHub issue).
Edit: Thanks for posting the full query. The logbook isn't expected to be available until the migration is done. The error should not occur again once the migration is done. If it does happen after the migration is completed, please open a GitHub issue.
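Since the error is precisely "Unknown column 'events.time_fired_ts'", one rough way to confirm the migration has landed is to check whether that column now exists on the events table. A minimal sketch, assuming you can list the table's column names (e.g. via SHOW COLUMNS FROM events; the helper name here is made up):

```python
# Sketch: given the column names of the recorder's events table, check
# whether the 2023.2 timestamp migration has added the new column that
# the failing logbook query selects.

def migration_added_ts_column(column_names):
    """True once events.time_fired_ts exists, i.e. the migration has run."""
    return "time_fired_ts" in column_names

# Pre-migration schema lacks the column; post-migration schema has it:
print(migration_added_ts_column(["event_id", "event_type", "time_fired"]))     # False
print(migration_added_ts_column(["event_id", "event_type", "time_fired_ts"]))  # True
```

Until that returns True for your database, logbook errors like the one above are expected.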
Any query to Assist only results in "Unexpected error". A restart didn't fix the issue.
I updated the post above with the whole shebang from the log; worth mentioning is that the database migration is still ongoing.
Makes sense
Thanks for the answer
Update seems to have broken the Honeywell integration. I deleted and tried to re-install and got:
Config flow could not be loaded: {"message":"Invalid handler specified"}
It doesn't look like the "assist" icon is showing up, even after updating. Any idea how I can make sure it's enabled? Thanks
I really appreciate this release
Great work - and setting some expectations
Glad that the team decided to implement state-color variables… even if it was just mentioned in a side note, I think that's an important UI improvement for many who suffered from the changes at the end of last year… (even if most of them might now have to redo their UI changes…)
Also great work with Assist… I still need to figure out some of the issues I had during testing, but that's probably just because I haven't really set anything up so far.
Another big improvement in my case is the ability to convert Wh into other units without the need for template sensors.
So - yes, for me, that was a release worth waiting for
Have you tried reloading your UI?
I saw there was some work done on the Honeywell integration on GitHub, and it was merged, but I did not see any mention in Breaking Changes or Other Noteworthy Changes. So I don't know if your issue is related; you might want to open an issue on GitHub.
P.S. Thanks for catching this. I'm going to wait to upgrade since I don't want my Honeywell integration to break.
I opened an issue on GitHub. I just wanted to "spam" it here to make sure I got someone's attention.
yup, I did