Great! Some of your instructions I already follow whenever I make a major configuration change, but this is an extensive step-by-step guide.
Currently my automations do not start with an “_”.
As soon as I perform the upgrade I will probably try the migration and see how it goes. Thank you.
OK, I don’t know how I would have figured this out from the error message, but I tried it and everything works fine now.
Thank you so much!
Looks like HA doesn’t fully support that character set, like probably many others. Why were you expecting it to be fixed in this version?
Watch out. He said -, not _.
One is a list, the other isn’t.
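A minimal YAML sketch of the difference (the key and entity names are made up for illustration):

# A value starting with "-" is parsed as a list item:
group_members:
  - light.example_lamp   # one-item list
# A "_" is just an ordinary character in a plain scalar:
group_name: _example     # single string, not a list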
Good catch! Thank you.
You’ve given me plenty of reasons to avoid the feature, since I’d like to keep my config in packages. Thanks for the write-up.
How did you get that display?
It’s the standard Gauge Card
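For reference, a minimal sketch of that card in YAML (the entity and severity thresholds here are placeholders, not the poster’s actual config):

type: gauge
entity: sensor.example_power
min: 0
max: 100
severity:
  green: 0
  yellow: 60
  red: 85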
Try sending Bram lots of to get that feature implemented!
Anyone else seeing the energy dashboard glitching during a restart?
Awesome filtering options! Thank you
As a feature request I would like to suggest adding a way to edit the default “Group by” filter. Every time I enter the Automations page I have to change the Group by to Areas instead of Categories.
It should be, it was missing a set of credentials for some (not all) users, I think depending on the app they created an account with. The missing credentials were added, plus if you have any shared appliances you should be able to see those. Let me know if you have any issues, we don’t have a ton of devices and logins to test with, but with what we were able to test with it was working fine.
Replying to myself in case anyone was following. (I doubt it, but I don’t like leaving the report hanging…)
A second upgrade to .1 went fine.
Digging thru the logs on the snapshot that blew up MQTT, it was likely the age old problem of supervisor not waiting long enough for things to respond — and deciding to “repair” all the Dockers… which is well documented and generally ignored. Ha.
Cause: Shared storage was slow from other activities going on on it during the HA upgrade. Supervisor had a hissy fit as best as I can tell, and made it worse by reconfiguring every docker in the pile. Ha.
Just a known risk of running supervised I guess. It uses hard coded timeouts and has no real indication from the Docker that it’s busy or what the load average or iowait are on the hardware.
Well known issue. Throw faster hardware at it.
(In my case, avoid upgrades when shared storage is busy. Supervisor is brittle in that scenario. No big deal… my choice to use it…)
After updating to 2024.4 my voice commands on Echo keep responding with “the device is not responding”. Any ideas?
Yes, I have the same problem.
Now I get this error, and I don’t have any Homematic devices:
2024-04-07 08:15:28.193 ERROR (MainThread) [homeassistant.config] Unknown error calling homematic CONFIG_SCHEMA - '<' not supported between instances of 'bool' and 'str'
2024-04-07 08:15:28.224 ERROR (MainThread) [homeassistant.setup] Setup failed for 'homematic': Invalid config.
Logger: homeassistant.config
Source: config.py:1564
First occurred: 07:39:23 (1 occurrences)
Last logged: 07:39:23
Unknown error calling homematic CONFIG_SCHEMA - '<' not supported between instances of 'bool' and 'str'
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config.py", line 1564, in async_process_component_config
return IntegrationConfigInfo(component.CONFIG_SCHEMA(config), [])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 272, in __call__
return self._compiled([], data)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 595, in validate_dict
return base_validate(path, iteritems(data), out)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 387, in validate_mapping
cval = cvalue(key_path, value)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 818, in validate_callable
return schema(data)
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 272, in __call__
return self._compiled([], data)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 595, in validate_dict
return base_validate(path, iteritems(data), out)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 387, in validate_mapping
cval = cvalue(key_path, value)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 595, in validate_dict
return base_validate(path, iteritems(data), out)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 387, in validate_mapping
cval = cvalue(key_path, value)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 595, in validate_dict
return base_validate(path, iteritems(data), out)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 387, in validate_mapping
cval = cvalue(key_path, value)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/schema_builder.py", line 818, in validate_callable
return schema(data)
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/voluptuous/validators.py", line 755, in __call__
or 'value must be one of {}'.format(sorted(self.container)))
^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<' not supported between instances of 'bool' and 'str'
This is my config, which has been working fine:
# Homematic CCU
homematic:
  interfaces:
    hmip_gf:
      host: 192.168.88.73
      port: 2010
      username: !secret hm_usr_admin
      password: !secret hm_pwd_admin
      resolvenames: json
    hmip_gf_groups:
      host: 192.168.88.73
      port: 9292
      username: !secret hm_usr_admin
      password: !secret hm_pwd_admin
      path: /groups
      resolvenames: json
    rf:
      host: 192.168.88.73
      resolvenames: json
      username: !secret hm_usr_admin
      password: !secret hm_pwd_admin
    ip:
      host: 192.168.88.73  # was 127.0.0.1
      port: 2010
    groups:
      host: 192.168.88.73  # was 127.0.0.1
      port: 9292
      resolvenames: "json"
      #username: !secret hm_usr_admin
      #password: !secret hm_pwd_admin
      path: /groups
    wired:
      host: 192.168.88.73
      port: 2000
      resolvenames: jsonp
      username: !secret hm_usr_admin
      password: !secret hm_pwd_admin
  hosts:
    gf_ccu3:
      host: 192.168.88.73
      username: !secret hm_usr_admin
      password: !secret hm_pwd_admin
Was there an intentional change in the behavior of the timer integration? Previously the state of the timer would always change to idle before the timer.finished event fired; now it always changes after the event fires. This breaks some conditional logic I was using in some automations.
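For context, a sketch of the kind of automation this breaks (the alias, entity, and action are placeholders): it triggers on timer.finished and then checks that the timer is already idle, a check that now fails because the state still reads active when the event fires.

alias: "Example - act when kitchen timer finishes"
trigger:
  - platform: event
    event_type: timer.finished
    event_data:
      entity_id: timer.kitchen
condition:
  # Previously the timer was already "idle" here; now the state changes
  # only after the event, so this condition no longer passes.
  - condition: state
    entity_id: timer.kitchen
    state: "idle"
action:
  - service: notify.notify
    data:
      message: "Kitchen timer finished"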
I have no idea whether my issue is related to .4 or not.
I hit a critical issue while attempting to test another issue.
A whole storage-mode dashboard was replaced by another storage-mode dashboard, i.e. I ended up with two (almost) identical dashboards:
- Assume dashboard_1 has 20 views and dashboard_2 has 10 views.
- You move card_X from some view on dashboard_1 to some view of dashboard_2.
- Result: dashboard_2 becomes the SAME as dashboard_1; the only difference is this card_X.
And the json file for this lost dashboard is overwritten with the content of the other dashboard.
What I tried to do:
- Opened the json file of the overwritten dashboard.
- Pasted the content of the same file from a backup.
- Cleared the cache just in case.
- But the dashboard was still shown as a copy of the other dashboard.
- It was fixed only after rebooting HA.
Consider this a quiet alarm for those who prefer to keep everything in the UI and convince others to do the same.
Same here! How do you fix it?
After a front-page update the page was working again.