2023.4: Custom template macros, and many more new entity dialogs!

Thanks. Do you know if there is a regex_replace-like filter that can be applied to all elements of a list?

Example: I am looking for a filter to change:
['Light One', 'Light Two', 'Light Three']
to:
['One', 'Two', 'Three']

with a filter like {{ list | regex_index_replace('^Light ','') }}

In the template editor I get
TemplateAssertionError: No filter named 'regex_index_replace'.
when I do this:

{% set list = ['Light One', 'Light Two', 'Light Three'] %}

{{ list | regex_index_replace('^Light ','') }}

I know. I just made the filter name regex_index_replace up.
I am looking for a filter like that, but I don't know what it is called.
I just wish there were something like that.

Two ways, both covered in the regex section:

{% set data = ['Light One', 'Light Two', 'Light Three'] %}
{{ data | map('regex_findall', '^Light (.*)') | map('first') | list }}
{{ data | map('regex_replace', '^Light ', '') | list }}

Keep in mind that the HA documentation assumes you know Jinja. `map` (a native Jinja filter) accepts a filter name as its first argument; the remaining positional or keyword arguments are supplied to that filter.
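In plain Python terms, that `map` pipeline is roughly equivalent to calling the filter function once per element and passing the extra arguments along (a stdlib sketch of the behavior, not Home Assistant's actual implementation):

```python
import re

data = ['Light One', 'Light Two', 'Light Three']

# What `data | map('regex_replace', '^Light ', '') | list` effectively does:
# apply the filter to each element, forwarding the remaining arguments.
result = [re.sub('^Light ', '', item) for item in data]
print(result)  # -> ['One', 'Two', 'Three']
```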


Is this what you are searching for?

{% set my_list = ['Light One', 'Light Two', 'Light Three'] %}
{{ my_list | map('regex_replace', find='^Light ', replace='') | list }}

Where has binary_sensor.updater_addons gone? It seems there is now one update entity per add-on. It was so helpful to have one updater for ALL add-ons at the same time.

With this release the database connection seems to be broken somehow. Any tip is welcome. The log says:

2023-04-07 17:02:17.436 ERROR (Recorder) [homeassistant.components.recorder.util] Error executing query: (MySQLdb.OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction')
[SQL: INSERT INTO states (entity_id, state, attributes, event_id, last_changed, last_changed_ts, last_updated, last_updated_ts, old_state_id, attributes_id, context_id, context_user_id, context_parent_id, origin_idx, context_id_bin, context_user_id_bin, context_parent_id_bin, metadata_id) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)]
[parameters: (None, '��', None, None, None, None, None, 1680879686.0760186, None, None, None, None, None, None, None, None, None, None)]
(Background on this error at: https://sqlalche.me/e/20/e3q8)

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1964, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 748, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
    res = self._query(query)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
    db.query(q)
  File "/usr/local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
    _mysql.connection.query(self, query)
MySQLdb.OperationalError: (1205, 'Lock wait timeout exceeded; try restarting transaction')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/recorder/util.py", line 129, in session_scope
    yield session
  File "/usr/src/homeassistant/homeassistant/components/recorder/auto_repairs/schema.py", line 78, in _validate_table_schema_supports_utf8
    session.flush()
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4155, in flush
    self._flush(objects)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4291, in _flush
    with util.safe_reraise():
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4252, in _flush
    flush_context.execute()
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 467, in execute
    rec.execute(self)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 644, in execute
    util.preloaded.orm_persistence.save_obj(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj
    _emit_insert_statements(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1184, in _emit_insert_statements


And this repeats multiple times. Any help is highly welcome.
I am using a MySQL database outside of the Home Assistant Docker container.

The card-mod card needs to be updated; I'm not sure if it has been. I rolled back and am waiting.


Please raise a ticket in the frontend repository with your concrete points/findings. I fear that your posts here regarding screen readers will get lost.

Has anyone else got chevrons/arrows showing up for no reason in the navigation bar after this release?


There are many tools that convert JSON to YAML and back. I do this all the time with Oxygen Editor. I tend to do many things in XML and XSL and then output the results to YAML.

Yes, the issue has been raised in the frontend repository.


It’s likely something is wrong with the database itself.

You can check it with

It also looks like that query doesn’t handle MySQL/MariaDB deadlocking (although it’s unexpected that it’s happening at that point). We probably need to add some retry logic there. Please open a GitHub issue for that.
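The retry logic being suggested could look roughly like the sketch below (a minimal illustration with a stand-in exception class, not Home Assistant's actual recorder code; MySQL error codes 1205 "lock wait timeout" and 1213 "deadlock" are the classic retryable lock errors):

```python
import time


class OperationalError(Exception):
    """Stand-in for MySQLdb.OperationalError; first argument is the MySQL error code."""

    def __init__(self, code, msg):
        super().__init__(code, msg)
        self.code = code


RETRYABLE = {1205, 1213}  # lock wait timeout, deadlock


def run_with_retry(fn, attempts=3, wait=0.0):
    """Re-run fn when MySQL reports a retryable lock error; re-raise anything else."""
    for attempt in range(attempts):
        try:
            return fn()
        except OperationalError as err:
            if err.code not in RETRYABLE or attempt == attempts - 1:
                raise
            time.sleep(wait)


# Demo: fails twice with error 1205, then succeeds on the third attempt.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OperationalError(1205, "Lock wait timeout exceeded")
    return "ok"

print(run_with_retry(flaky))  # -> ok
```

In the real recorder code the retry would wrap the session flush rather than an arbitrary callable, but the shape is the same: catch the lock error, back off, and retry a bounded number of times.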

The command "show innodb status" does not seem to work. What I can execute is "show status", but that produces a huge output which tells me very little.
Database performance is also really bad now; every history view or graph takes a very long time to display.
It's nearly unusable now, and I had quite good performance before :frowning:
Even after 10 minutes a normal, simple graph has not loaded.
Restarting things does not help either.

The Alexa/cloud integration is also no longer working; I can no longer manage my devices by voice.

Your database is being reorganized in the background in small transactions but you have something wrong with your system that is preventing it from working correctly. It’s likely your disk is overloaded and can’t even complete a small transaction before it hits the timeout.

You need to figure out what’s going on with your database or disk and why it’s taking so long. Please carefully read the innodb status using the article I posted above and find the transaction that is blocking your database.

If you have custom SQL sensors, disable them all and run each one through the query analyzer to make sure they aren’t doing full table scans before turning them back on.

If you have other apps or containers doing heavy disk I/O, turn them off until the migration finishes.
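Checking a query in the analyzer amounts to prefixing it with EXPLAIN and reading the plan. A generic sketch (the WHERE clause and column names are illustrative and depend on your schema version):

```sql
-- Prefix the sensor's query with EXPLAIN to see the execution plan.
-- If the "type" column shows ALL on a large table, the query is doing
-- a full table scan and needs an index or a rewrite.
EXPLAIN SELECT state
FROM states
WHERE entity_id = 'sensor.example'
ORDER BY last_updated_ts DESC
LIMIT 1;
```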

I really appreciate your help. I was at least able to execute a similar command:

SHOW ENGINE INNODB STATUS;

The output is this:

=====================================
2023-04-07 17:56:22 140178410653440 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 36 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 1103906 srv_active, 0 srv_shutdown, 3514927 srv_idle
srv_master_thread log flush and writes: 0
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 3387484
OS WAIT ARRAY INFO: signal count 4379655
RW-shared spins 0, rounds 0, OS waits 0
RW-excl spins 0, rounds 0, OS waits 0
RW-sx spins 0, rounds 0, OS waits 0
Spin rounds per wait: 0.00 RW-shared, 0.00 RW-excl, 0.00 RW-sx
------------
TRANSACTIONS
------------
Trx id counter 17807838
Purge done for trx's n:o < 17807796 undo n:o < 0 state: running but idle
History list length 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 421653933931680, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 421653933929256, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 421653933930064, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 421653933928448, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 421653933927640, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 421653933926832, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 421653933926024, not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION 17807837, ACTIVE 203 sec starting index read
mysql tables in use 3, locked 3
9838 lock struct(s), heap size 1204344, 15146 row lock(s), undo log entries 3670
MySQL thread id 206, OS thread handle 140178409596672, query id 57820938 172.21.0.1 hassio updating reference tables
UPDATE states SET entity_id=NULL WHERE states.state_id IN (SELECT states_with_entity_ids.state_id 
FROM (SELECT states.state_id AS state_id 
FROM states INNER JOIN (SELECT states.state_id AS state_id_with_entity_id 
FROM states 
WHERE states.entity_id IS NOT NULL 
 LIMIT 5000) AS anon_1 ON states.state_id = anon_1.state_id_with_entity_id) AS states_with_entity_ids)
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: complete io for buf page (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
 ibuf aio reads:, log i/o's:
Pending flushes (fsync) log: 0; buffer pool: 3255
12004929 OS file reads, 111153731 OS file writes, 34063246 OS fsyncs
15.51 reads/s, 16384 avg bytes/read, 25.31 writes/s, 13.96 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 2937, seg size 2939, 1164090 merges
merged operations:
 insert 1313764, delete mark 20197701, delete 29178
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 34679, node heap has 1 buffer(s)
Hash table size 34679, node heap has 3 buffer(s)
Hash table size 34679, node heap has 12 buffer(s)
Hash table size 34679, node heap has 1 buffer(s)
Hash table size 34679, node heap has 2 buffer(s)
Hash table size 34679, node heap has 4 buffer(s)
Hash table size 34679, node heap has 4 buffer(s)
Hash table size 34679, node heap has 109 buffer(s)
0.00 hash searches/s, 17.86 non-hash searches/s
---
LOG
---
Log sequence number          290433120995
Log buffer assigned up to    290433120995
Log buffer completed up to   290433120995
Log written up to            290433120995
Log flushed up to            290433120995
Added dirty pages up to      290433120995
Pages flushed up to          290423115067
Last checkpoint at           290423115067
Log minimum file id is       13934
Log maximum file id is       13934
49683591 log i/o's done, 3.84 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 0
Dictionary memory allocated 821996
Buffer pool size   8192
Free buffers       3
Database pages     7980
Old database pages 2931
Modified db pages  5206
Pending reads      0
Pending writes: LRU 5, flush list 0, single page 1
Pages made young 42539571, not young 534152088
0.11 youngs/s, 25.07 non-youngs/s
Pages read 11773830, created 1278388, written 47238644
15.51 reads/s, 0.05 creates/s, 15.35 writes/s
Buffer pool hit rate 832 / 1000, young-making rate 1 / 1000 not 272 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 7980, unzip_LRU len: 0
I/O sum[1378]:cur[4], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Process ID=1, Main thread ID=140178421159680 , state=sleeping
Number of rows inserted 20771716, updated 57778967, deleted 13151073, read 2779358685
0.00 inserts/s, 4.64 updates/s, 0.00 deletes/s, 4.61 reads/s
Number of system rows inserted 27501, updated 9712, deleted 23462, read 115148
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

I am not able to understand the article you posted, but there are no other processes putting load on the database or the system. Only the DB is creating some load on my Synology DSM (around 5% CPU and RAM). Disk I/O really should not be the issue, as it is running on a RAID with 4 disks.

Looks like it’s almost done, as it’s on the last part of the migration.

You could increase db_max_retries to better handle the slow I/O (see Recorder - Home Assistant), but since it’s already on the last step it might be better to wait it out, as it will get faster once it’s done.
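If you do want to raise the retry settings, the recorder integration accepts them in configuration.yaml. A hedged sketch (the values here are illustrative, and the db_url is a placeholder; as of recent docs the defaults are db_max_retries: 10 and db_retry_wait: 3):

```yaml
# configuration.yaml -- illustrative values only
recorder:
  db_url: mysql://user:password@SERVER_IP/homeassistant?charset=utf8mb4
  db_max_retries: 20   # default 10
  db_retry_wait: 10    # seconds between retries, default 3
```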

Side note: hopefully you aren’t using the old MariaDB 10.3 series that Synology ships, as it’s known to have performance problems. You likely would have seen a repairs issue if you were. 10.3 reaches EOL in 1 month and 2 weeks (25 May 2023).

Ah, perfect, thanks for the response! :slight_smile:

So I just have to wait; that sounds great.
I'm using MySQL 8.0.32 in a Docker container.
Any further performance tips for the database?