Can't filter logbook after upgrading to 2022.6

Hi All,

I’m hoping this is an easy fix. I have my recorder logging to MariaDB, so everything is in a SQL database. After upgrading to 2022.6 it appears I can’t filter the logbook by entity. When I open the logbook I can see all entries, but I can’t filter them down, and the error below appears in the logs. I have upgraded to 2022.6.4 and still have the same issue.

Logger: homeassistant.components.recorder.util
Source: components/recorder/util.py:95
Integration: Recorder (documentation, issues)
First occurred: 09:21:38 (1 occurrences)
Last logged: 09:21:38

Error executing query: (MySQLdb._exceptions.OperationalError) (1176, "Key 'ix_states_entity_id_last_updated' doesn't exist in table 'states'") [SQL: WITH anon_1 AS (SELECT anon_2.context_id AS context_id FROM (SELECT events.context_id AS context_id FROM events LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id WHERE events.time_fired > %s AND events.time_fired < %s AND events.event_type IN (%s, %s, %s) AND (JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IN (%s) OR JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) IN (%s)) UNION ALL SELECT states.context_id AS context_id FROM states FORCE INDEX (ix_states_entity_id_last_updated) WHERE states.last_updated > %s AND states.last_updated < %s AND states.entity_id IN (%s)) AS anon_2 GROUP BY anon_2.context_id) SELECT events.event_id AS event_id, events.event_type AS event_type, events.event_data AS event_data, events.time_fired AS time_fired, events.context_id AS context_id, events.context_user_id AS context_user_id, events.context_parent_id AS context_parent_id, event_data.shared_data AS shared_data, %s AS state_id, %s AS state, %s AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only FROM events LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id WHERE events.time_fired > %s AND events.time_fired < %s AND events.event_type IN (%s, %s, %s) AND (JSON_EXTRACT(CAST(event_data.shared_data AS CHAR), %s) IN (%s) OR JSON_EXTRACT(CAST(events.event_data AS CHAR), %s) IN (%s)) UNION ALL SELECT %s AS event_id, %s AS event_type, %s AS event_data, states.last_updated AS time_fired, states.context_id AS context_id, states.context_user_id AS context_user_id, states.context_parent_id AS context_parent_id, %s AS shared_data, states.state_id AS state_id, states.state AS state, states.entity_id AS entity_id, CASE JSON_EXTRACT(CAST(state_attributes.shared_attrs AS CHAR), %s) WHEN 'null' THEN NULL ELSE JSON_UNQUOTE(JSON_EXTRACT(CAST(state_attributes.shared_attrs AS CHAR), %s)) END AS icon, CASE JSON_EXTRACT(CAST(states.attributes AS CHAR), %s) WHEN 'null' THEN NULL ELSE JSON_UNQUOTE(JSON_EXTRACT(CAST(states.attributes AS CHAR), %s)) END AS old_format_icon, %s AS context_only FROM states FORCE INDEX (ix_states_entity_id_last_updated) LEFT OUTER JOIN states AS old_state ON states.old_state_id = old_state.state_id LEFT OUTER JOIN state_attributes ON states.attributes_id = state_attributes.attributes_id WHERE states.last_updated > %s AND states.last_updated < %s AND old_state.state_id IS NOT NULL AND states.state != old_state.state AND states.state IS NOT NULL AND ((states.entity_id NOT LIKE %s AND states.entity_id NOT LIKE %s AND states.entity_id NOT LIKE %s) OR ((states.entity_id LIKE %s) AND (state_attributes.shared_attrs NOT LIKE %s OR states.attributes NOT LIKE %s))) AND (states.last_updated = states.last_changed OR states.last_changed IS NULL) AND states.entity_id IN (%s) UNION ALL SELECT events.event_id AS event_id, events.event_type AS event_type, events.event_data AS event_data, events.time_fired AS time_fired, events.context_id AS context_id, events.context_user_id AS context_user_id, events.context_parent_id AS context_parent_id, event_data.shared_data AS shared_data, %s AS state_id, %s AS state, %s AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only FROM anon_1 LEFT OUTER JOIN events FORCE INDEX (ix_events_context_id) ON anon_1.context_id = events.context_id LEFT OUTER JOIN event_data ON events.data_id = event_data.data_id UNION ALL SELECT %s AS event_id, %s AS event_type, %s AS event_data, 
states.last_updated AS time_fired, states.context_id AS context_id, states.context_user_id AS context_user_id, states.context_parent_id AS context_parent_id, %s AS shared_data, states.state_id AS state_id, states.state AS state, states.entity_id AS entity_id, %s AS icon, %s AS old_format_icon, %s AS context_only FROM anon_1 LEFT OUTER JOIN states FORCE INDEX (ix_states_context_id) ON anon_1.context_id = states.context_id ORDER BY time_fired] [parameters: (datetime.datetime(2022, 6, 8, 7, 0, tzinfo=datetime.timezone.utc), datetime.datetime(2022, 6, 8, 8, 21, 38, 969347, tzinfo=datetime.timezone.utc), 'logbook_entry', 'script_started', 'automation_triggered', '$."entity_id"', '"group.adults"', '$."entity_id"', '"group.adults"', datetime.datetime(2022, 6, 8, 7, 0, tzinfo=datetime.timezone.utc), datetime.datetime(2022, 6, 8, 8, 21, 38, 969347, tzinfo=datetime.timezone.utc), 'group.adults', None, None, None, None, None, None, datetime.datetime(2022, 6, 8, 7, 0, tzinfo=datetime.timezone.utc), datetime.datetime(2022, 6, 8, 8, 21, 38, 969347, tzinfo=datetime.timezone.utc), 'logbook_entry', 'script_started', 'automation_triggered', '$."entity_id"', '"group.adults"', '$."entity_id"', '"group.adults"', None, None, None, None, '$."icon"', '$."icon"', '$."icon"', '$."icon"', None, datetime.datetime(2022, 6, 8, 7, 0, tzinfo=datetime.timezone.utc), datetime.datetime(2022, 6, 8, 8, 21, 38, 969347, tzinfo=datetime.timezone.utc), 'counter.%', 'proximity.%', 'sensor.%', 'sensor.%', '%"unit_of_measurement":%', '%"unit_of_measurement":%', 'group.adults', None, None, None, None, None, '1', None, None, None, None, None, None, '1')] (Background on this error at: https://sqlalche.me/e/14/e3q8)

You have an index missing on your states table. It should have been created by the v4 schema migration: core/migration.py at 95e9bd106eb2a7077e92aa7c8653d668ebe09533 · home-assistant/core · GitHub

You can manually add it with:

CREATE INDEX ix_states_entity_id_last_updated ON states (entity_id, last_updated);
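If you want to check what the table actually has before (and after) creating it, something along these lines should work; the index name is taken from the error message above:

-- List all indexes currently defined on the states table
SHOW INDEX FROM states;
-- Or look only for the one the failing query expects
SHOW INDEX FROM states WHERE Key_name = 'ix_states_entity_id_last_updated';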

Thanks for that, looks like something went AWOL in the upgrade :frowning:

Error

SQL query:

CREATE INDEX ix_states_entity_id_last_updated ON states (entity_id, last_updated);

MySQL said:

#1034 - Index for table 'states' is corrupt; try to repair it

Looks like your MySQL server encountered a disk problem or was not shut down cleanly at some point. If you have a backup you should restore from the last good backup. If not:

https://dev.mysql.com/doc/refman/8.0/en/rebuilding-tables.html
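In short, the rebuild options on that page come down to something like the following; which one applies depends on the table’s storage engine, so treat this as a sketch, not a guaranteed fix:

-- For MyISAM tables, REPAIR TABLE can often fix a corrupt index
REPAIR TABLE states;
-- Rebuilding the table in place also recreates all of its indexes
ALTER TABLE states ENGINE=InnoDB;
-- OPTIMIZE TABLE is another way to trigger a full rebuild
OPTIMIZE TABLE states;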

Thanks for your help. I just ended up dropping the states table and letting it recreate on startup.
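For anyone considering the same route, the rough sequence would be something like the sketch below; note this throws away all recorded state history:

-- Stop Home Assistant first, then drop the table; all state history is lost
DROP TABLE states;
-- On the next start, the recorder recreates the table and its indexes from the current schema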

I have the same issue, but my DB is running MyISAM instead of InnoDB as it runs on a QNAP that does not support InnoDB.

As MyISAM hits its key length limit

Specified key was too long; max key length is 1000 bytes

I can’t create the index that way.

However, according to Fix migration of MySQL data when InnoDB is not being used by bdraco · Pull Request #72893 · home-assistant/core · GitHub, this should not be an issue anymore.

Any ideas?

MyISAM is not going to work. You’ll hit all kinds of random failure states, as the current table design will exceed its limits.

You will need to switch to InnoDB, and if that’s not possible, you are better off using SQLite.
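If your server does support InnoDB, the existing recorder tables can be converted in place; a rough sketch (expect the states conversion to take a while on a large database):

-- Convert the big recorder tables from MyISAM to InnoDB
ALTER TABLE states ENGINE=InnoDB;
ALTER TABLE events ENGINE=InnoDB;
-- List the remaining tables and convert them the same way
SHOW TABLES;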

I had the same issue using MariaDB on QNAP and found this solution/workaround:

The root cause is the limit of 1,000 bytes for index keys (in this specific environment).

You can bypass the issue by shortening both columns ‘entity_id’ and ‘state’ of table ‘states’ from varchar(255) to varchar(248), as sketched below.
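Presumably that works because with utf8mb4 a character can take up to 4 bytes, so 248 × 4 = 992 bytes leaves just enough room for the timestamp column within the 1,000-byte key limit. A sketch of the column change:

-- Shorten both columns so the composite indexes fit under the 1,000-byte key limit
ALTER TABLE states
  MODIFY entity_id VARCHAR(248),
  MODIFY state VARCHAR(248);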

You then need to create the corresponding indexes manually:
CREATE INDEX ix_states_entity_id_last_updated ON states (entity_id, last_updated);
CREATE INDEX ix_states_last_updated ON states (state, last_updated);

If some index is still missing, check this list: Database - Home Assistant

These are easy tasks with phpMyAdmin; if you need help, let us know.

With that change, everything, including the logbook, works fine again.

Reducing the column size from 255 to 248 carries some theoretical risk (entity IDs or states longer than 248 characters would no longer fit), but I haven’t run into it yet.

Following @bdraco’s suggestion I enabled InnoDB on the QNAP. It’s a “newer” model with 4 GB RAM and a Celeron J1900. I did not follow @unsmart’s suggestion because future schema updates would most probably force me to redo it over and over; I wanted a sustainable config.

  1. Either use the Text Editor (QNAP app), SSH or SCP into the NAS
  2. Navigate to /mnt/ext/opt/mariadb10/etc
  3. Open mariadb.conf in your preferred editor
  4. Underneath the port setting, or at least in the [mysqld] section, add:
default-storage-engine = INNODB
#default-storage-engine = MyISAM

and

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /usr/local/mysql/data
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /usr/local/mysql/data
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 512M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
#innodb_lock_wait_timeout = 50
  5. Save it and restart the NAS (or restart MariaDB):
    /etc/init.d/mariadb10.sh stop && /etc/init.d/mariadb10.sh start
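After the restart you can check that InnoDB is actually available and now the default, e.g.:

-- InnoDB should be listed with Support = DEFAULT
SHOW ENGINES;
SELECT @@default_storage_engine;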

Your mariadb.conf may now look something like this (screenshot omitted).

  1. With phpMyAdmin add a new DB called homeassistant
  2. In that DB create the user homeassistant and grant “him/her/they/them/…” all rights (the SQL equivalent is sketched below)
  3. In HA configure the recorder to use db_url: mysql://[USER]:[PWD]@nas.yourdomain.local:3306/homeassistant?charset=utf8mb4
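For reference, the phpMyAdmin steps above correspond to roughly the following SQL; the password is a placeholder, and the ‘%’ host pattern and the collation are assumptions you may want to tighten:

-- Create the recorder database with the utf8mb4 charset the db_url above expects
CREATE DATABASE homeassistant CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- Create the user; '%' allows connections from any host, narrow it if you can
CREATE USER 'homeassistant'@'%' IDENTIFIED BY 'your-password-here';
GRANT ALL PRIVILEGES ON homeassistant.* TO 'homeassistant'@'%';
FLUSH PRIVILEGES;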

At the beginning I was worried about the performance impact, because several threads (e.g. How to enable INNODB as the default engine for MySQL - QNAP NAS Community Forum) mention “good reasons” why InnoDB is not enabled by default. My experience after 2 weeks: WOW.
Fast, just fast. Slightly higher CPU (~5%) and RAM (~35%) usage, but still absolutely within the specs of that NAS.

My suggestion: try to find fitting values for innodb_buffer_pool_size, innodb_log_file_size and innodb_log_buffer_size; if these settings are oversized, your MariaDB is prone to crash. In my case I could most probably go up to innodb_buffer_pool_size = 2048M, but currently I am fine with the very fast access, and I do not have SSDs in the NAS (but Hitachi enterprise-class disks with 12 TB in a RAID-6).
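To judge whether the buffer pool is sized sensibly, the InnoDB status counters are a reasonable starting point, e.g.:

-- Compare pages free vs. pages total to see how full the buffer pool is
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';
-- Reads served from disk (Innodb_buffer_pool_reads) should stay tiny compared to read requests
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';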


For backups there is a script: [HOWTO] Make automatic backup of ALL MySQL / MariaSQL databases on QNAP V3.8 - Page 26 - QNAP NAS Community Forum
The end of the thread is the helpful part here, because the OP no longer maintains the script and people have started adapting it for QTS5.

Anyone out there who also has 2 NAS and wants to dig into redundant DBs with proper hot-standby and failover DNS config? Not that we need it, but we could :nerd_face:
And anyone out there with other experiences regarding MariaDB system variables on a QNAP, e.g. innodb_additional_mem_pool_size?

Maybe this should become its own thread, “How to configure MariaDB on QTS5 (QNAP) as the recorder DB”. What do you think?

This post is provided as-is, no warranty/liability. If your NAS becomes a brick, starts a BBQ in your rack or kills all WAF with the fans, you hopefully took a good backup and won’t even dare to blame me
