Honeywell CH/DHW via RF - evohome, sundial, hometronics, chronotherm

I’m using a NUC and not a Pi.

System Health

version: core-2021.12.9
installation_type: Home Assistant OS
dev: false
hassio: true
docker: true
user: root
virtualenv: false
python_version: 3.9.7
os_name: Linux
os_version: 5.10.88
arch: x86_64
timezone: Europe/London


GitHub API: ok
Github API Calls Remaining: 4798
Installed Version: 1.19.3
Stage: running
Available Repositories: 1001
Downloaded Repositories: 15


logged_in: true
subscription_expiration: 11 February 2022, 00:00
relayer_connected: true
remote_enabled: true
remote_connected: true
alexa_enabled: true
google_enabled: false
remote_server: eu-west-2-1.ui.nabu.casa
can_reach_cert_server: ok
can_reach_cloud_auth: ok
can_reach_cloud: failed to load: timeout


host_os: Home Assistant OS 7.1
update_channel: stable
supervisor_version: supervisor-2021.12.2
docker_version: 20.10.9
disk_total: 55.2 GB
disk_used: 5.5 GB
healthy: true
supported: true
board: generic-x86-64
supervisor_api: ok
version_api: ok
installed_addons: Hass.io Google Drive Backup (0.99.0), File editor (5.3.3), Terminal & SSH (9.3.0), SQLite Web (3.2.0), Z-Wave JS (0.1.52), deCONZ (6.11.1)


dashboards: 2
resources: 5
views: 8
mode: storage


api_endpoint_reachable: ok


logged_in: false
added_devices: 2

It can do - the MoBo manufacturer could have implemented it that way - whether that’s the case on a Pi, I don’t know.

@hamba Your system froze when doing storage IO - I have not changed that code for a long while.

I’ve done as you asked, and it froze around 17:42

I have these in the supervisor logs from the same time as it froze.

22-01-16 17:43:23 ERROR (MainThread) [supervisor.homeassistant.api] Error on call http://172.30.32.1:8123/api/config: 
22-01-16 17:44:59 ERROR (MainThread) [supervisor.homeassistant.api] Error on call http://172.30.32.1:8123/api/config: 
22-01-16 17:45:09 ERROR (MainThread) [supervisor.homeassistant.api] Error on call http://172.30.32.1:8123/api/config: 
22-01-16 17:45:09 WARNING (MainThread) [supervisor.misc.tasks] Watchdog miss API response from Home Assistant
22-01-16 17:47:35 ERROR (MainThread) [supervisor.homeassistant.api] Error on call http://172.30.32.1:8123/api/config: 
22-01-16 17:47:40 ERROR (MainThread) [supervisor.homeassistant.api] Error on call http://172.30.32.1:8123/api/config: 
22-01-16 17:47:40 ERROR (MainThread) [supervisor.misc.tasks] Watchdog found a problem with Home Assistant API!
22-01-16 17:47:40 INFO (SyncWorker_3) [supervisor.docker.interface] Restarting ghcr.io/home-assistant/generic-x86-64-homeassistant

Can you please check your log when it’s set to info?
I saw this line as it crashed:

2022-01-16 17:39:50 INFO (MainThread) [ramses_rf] ENGINE: Saving schema/state...

Anyone able to run 0.17.10 with Hass OS 7.0?

I’m running 0.17.10 on a NUC on the latest version of the supervisor (HA OS 7.1) and all seems to be running smoothly on my side.

System Health

version: core-2021.12.9
installation_type: Home Assistant OS
dev: false
hassio: true
docker: true
user: root
virtualenv: false
python_version: 3.9.7
os_name: Linux
os_version: 5.10.88
arch: x86_64
timezone: Europe/Amsterdam


GitHub API: ok
Github API Calls Remaining: 4992
Installed Version: 1.19.3
Stage: running
Available Repositories: 994
Downloaded Repositories: 3


logged_in: false
can_reach_cert_server: ok
can_reach_cloud_auth: ok
can_reach_cloud: ok


host_os: Home Assistant OS 7.1
update_channel: stable
supervisor_version: supervisor-2021.12.2
docker_version: 20.10.9
disk_total: 228.5 GB
disk_used: 63.6 GB
healthy: true
supported: true
board: generic-x86-64
supervisor_api: ok
version_api: ok
installed_addons: Samba share (9.5.1), Log Viewer (0.12.1), Zigbee2mqtt (1.18.1-1), Terminal & SSH (9.3.0), UniFi Network Application (1.1.4), Grafana (7.4.0), InfluxDB (4.3.0), Mosquitto broker (6.0.1), ESPHome (2021.12.3), AppDaemon 4 (0.8.0), zigbee2mqttassistant (0.3.157), Z-Wave JS (0.1.52)


dashboards: 1
resources: 9
views: 14
mode: storage


api_endpoint_reachable: ok

Please use the following logging:

logger:
  default: warn  # or: info
  logs:
    homeassistant.core: debug
    homeassistant.helpers.entity: info

    ramses_rf: info
    ramses_rf.message: info

The ramses_rf: info setting will show up lines like this:

2022-01-16 17:22:42 INFO (MainThread) [ramses_rf] ENGINE: Saving schema/state...

I’ve managed to downgrade to HA OS 7.0 and upgrade to v0.17.10.

Let’s see how that goes.

Nope, same problem on HA OS 7.0.
I’m switching back to 0.17.6 for tonight and will try again tomorrow.

So,

  • 0.17.x is considered ‘stable’ and any changes will be bugfixes only (0.17.11 includes all known fixes to date)
  • 0.18.x is pre-release - it seeks to make UFH a first class citizen (WIP), and provide other enhancements, like window_open sensors for zones

@cinnamon (and others with UFH), 0.18.x is for you.

Upgraded from 0.15 to 0.17.11 and everything looks good.
Renamed the new entities and removed the old ones.

I think I can disable these entities on my on/off system with a BDR91, or not?
binary_sensor.13_189740_bit_3_7
binary_sensor.13_189740_bit_6_6
binary_sensor.13_189740_ch_active
binary_sensor.13_189740_ch_enabled
binary_sensor.13_189740_dhw_active
binary_sensor.13_189740_flame_active

This data definitely comes from the BDR91 - but I expect that it will all be static… I just don’t know for edge cases that I cannot test, like a BDR91T with a heat pump (which is why I have left it in, remember: the protocol is not documented anywhere).

What I’d do is wait 24 h, and if a binary_sensor / sensor hasn’t changed in that time, then it’s reasonably safe to disable it - this is especially true for the above entities.


Morning,

Good news: I’ve updated both of my RPis, one running stable and the other dev, and both are still online - no more weird reboots.

How weird is that.

Hey David,

I see these popping up every few days - too often to believe they are just corrupt packets, and always with the same content. It’s the ‘08’ at the end where the parser expects ‘00’ or ‘01’. Not important, as it doesn’t affect functionality, but weird anyway.

 Logger: ramses_rf.protocol.message
Source: /srv/homeassistant/lib/python3.9/site-packages/ramses_rf/protocol/message.py:398
First occurred: January 17, 2022, 11:22:59 PM (1 occurrences)
Last logged: January 17, 2022, 11:22:59 PM
I --- 04:177718 --:------ 01:201047 1060 003 00FF08 < Corrupt payload: Payload doesn't match '^0[0-9A-F](FF|[0-9A-F]{2})0[01]$': 00FF08

I’ve just updated to 0.18.1 and have been up around 40 minutes with no reboots.

I have never seen these end with a 08 before…

  • 00 means battery low
  • 01 means battery OK

Often with these binary fields, any non-00 can be the same as 01… I’ve never seen these before, is all.

Or it’s a bunch of bits, and 0b00001000 means something special… Perhaps replace the battery with a new one & see what happens - let me know.

Is it only ever that same TRV, 04:177718?

Just had a look - I have a lot of packet logs here… and none of the 1060s end in anything other than 00 or 01.
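
For what it’s worth, the check is easy to reproduce outside of ramses_rf - the regex below is copied verbatim from the “Corrupt payload” message above, the rest is just a throwaway sketch:

import re

# Regex copied from the "Corrupt payload" error above; the last byte is the
# battery-low flag (00 = battery low, 01 = battery OK, per the notes above).
BATTERY_RE = re.compile(r"^0[0-9A-F](FF|[0-9A-F]{2})0[01]$")

for payload in ("00FF00", "00FF01", "00FF08"):
    verdict = "ok" if BATTERY_RE.match(payload) else "corrupt"
    print(payload, verdict)  # only 00FF08 fails the check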

Has anyone with UFH tried 0.18.1?

It should include zone heat demand - I can’t know for 100%, as I don’t have UFH.

Can I have reports on that please - packet logs would be useful.

Thought I’d share my automation for giving the stored hot water a boost when the temperature drops below 46 °C:

id: '1642243052302'
alias: Hot Water
description: ''
trigger:
  - platform: numeric_state
    below: '46'
    entity_id: sensor.07_053048_temperature
condition:
  - condition: state
    entity_id: water_heater.stored_hw
    state: 'off'
action:
  - service: water_heater.set_operation_mode
    data:
      operation_mode: boost
    target:
      entity_id: water_heater.stored_hw
mode: single

This has been one of the best bits HA does, and it keeps the wife very happy, because she is the only one who will use enough hot water outside the schedule and will then want to have a shower before the next scheduled ON.
As you can imagine, this in itself has saved many grumpy days.

Thanks David for all your hard work and making our lives a little easier.


I’m getting lots of these with 0.18.1:

Logger: ramses_rf.protocol.protocol
Source: /usr/local/lib/python3.9/site-packages/ramses_rf/protocol/protocol.py:209
First occurred: 12:02:28 (4450 occurrences)
Last logged: 17:54:30

RP --- 01:169176 18:135447 --:------ 0006 004 000502C8 < exception from app layer: process_message() got an unexpected keyword argument 'prev_msg'
RQ --- 18:135447 01:169176 --:------ 0418 003 000000 < exception from app layer: process_message() got an unexpected keyword argument 'prev_msg'
RP --- 01:169176 18:135447 --:------ 0418 022 004000B00601010000001816BBF77FFFFF70000C0004 < exception from app layer: process_message() got an unexpected keyword argument 'prev_msg'
RQ --- 18:135447 01:169176 --:------ 2E04 001 FF < exception from app layer: process_message() got an unexpected keyword argument 'prev_msg'
RP --- 01:169176 18:135447 --:------ 2E04 008 00FFFFFFFFFFFF00 < exception from app layer: process_message() got an unexpected keyword argument 'prev_msg'
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/protocol.py", line 194, in _pkt_receiver
    p.data_received(msg)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/protocol.py", line 475, in data_received
    self._callback(self._this_msg, prev_msg=self._prev_msg)
TypeError: process_message() got an unexpected keyword argument 'prev_msg'

Well, that’s a bug… it will be in both 0.17.11 and 0.18.1.

Do you have the following?

evohome_cc:
  advanced_features:
    message_events: true

If so, I’d be curious what you’re using it for?

In the meantime, I’ve fixed that bug. I won’t rush to release this - but I would if anyone is using it (let me know) - just disable the above feature until then.
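
For the curious, the traceback above is the usual shape of this kind of break: the protocol layer started passing an extra keyword argument to the message callback, and a callback that doesn’t accept it raises a TypeError. A minimal, purely illustrative sketch (only the names process_message and prev_msg come from the log; the rest is hypothetical):

# Illustrative only - the names process_message / prev_msg come from the
# traceback above; everything else here is hypothetical.
def process_message(msg):  # older callback signature, no prev_msg parameter
    print(f"handling {msg}")

def deliver(callback, this_msg, prev_msg):
    # the newer protocol layer also passes the previous message...
    callback(this_msg, prev_msg=prev_msg)  # ...which the old callback rejects

try:
    deliver(process_message, "RQ --- 18:135447 ... 0418 003 000000", None)
except TypeError as exc:
    print(exc)  # process_message() got an unexpected keyword argument 'prev_msg'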