Honeywell CH/DHW via RF - evohome, sundial, hometronics, chronotherm

About the caching: I had it off for the restart after installing the new version, as I thought that was required. I didn’t want to restart too soon after the initial reboot, so it was still set to False; I’ve changed it now.

I’ll have a look at which fan sensors have completely disappeared, and/or try to “force” them by using the remote, for instance. It might just be that they were cleared from the cache and haven’t been “seen” yet. These were still very early observations, though, so they might be back sooner rather than later.

Too bad that it is now best practice not to add those to the dashboard. I understand it from a design perspective, but I really like the auto-populated dashboard for an overview of everything in the system - it’s my go-to for troubleshooting. The window and battery ones are also quite useful to have.

No, it’s 03:255542. It’s a thermostat-type device without an option to set the temperature - or, for lack of a better description, a room temperature sensor - made by Honeywell. It does appear to be read by ramses, as the temperature appears on the climate card just fine. The entities sensor.03_255542_temperature and binary_sensor.03_255542_battery_low, however, are always unavailable.

Seeing a lot of those as well, also for my fan unit.

Can you confirm your OTB (R8810A, or R8820A), and your boiler?

Can you supply the ramses_rf: section of your configuration.yaml? I am interested in the value of use_native_ot.

Can you provide a packet log of 0.22.40 that provides this value? Easiest way is a 24h log.
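
For reference, the sections in question look something like this - a minimal sketch, assuming the schema of recent ramses_cc versions, with illustrative values for the serial port and log rotation:

ramses_cc:
  serial_port: /dev/ttyUSB0  # illustrative
  packet_log:
    file_name: packet.log    # enables the packet log requested above
    rotate_backups: 7        # illustrative
  ramses_rf:
    use_native_ot: prefer    # the value of interest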

(You have sent me logs before, but - by intention - I do not back them up, and I do not have any from you)

Please submit an issue to the repo: github.com/zxdavb/ramses_cc/issues

It appears this one was replaced by “sensor.37_182456_remaining_mins”. Most others appear to have come back to life, except speed cap, but that one was quite useless anyway.

I’m still seeing problems with some OTB sensors in 0.31.3. I’ve been running a series of tests on 0.31.2 using different values for use_native_ot, but no obvious pattern is emerging. So far I’ve seen the same behaviour on 0.31.3. Of the OTB sensors that worked reliably on 0.21.40 and 0.30.9, some are reliable on 0.31.x and some are not:

These sensors seem to work reliably on all versions:

sensor.10_064873_boiler_setpoint
binary_sensor.10_064873_ch_active
binary_sensor.10_064873_ch_enabled
sensor.10_064873_ch_setpoint
binary_sensor.10_064873_dhw_active
binary_sensor.10_064873_flame_active
sensor.10_064873_heat_demand
sensor.10_064873_rel_modulation_level
sensor.10_064873_value

These sensors seem to be unreliable on 0.31.1:

sensor.10_064873_boiler_output_temp
sensor.10_064873_ch_max_setpoint
sensor.10_064873_ch_water_pressure
binary_sensor.10_064873_dhw_enabled
sensor.10_064873_dhw_setpoint
binary_sensor.10_064873_fault_present
sensor.10_064873_percent
sensor.10_064873_value

The unreliability takes two forms: a sensor can be permanently unavailable, or intermittent - i.e. showing good data after a restart but becoming unavailable after a few hours, or dropping out and recovering repeatedly. Sometimes an unavailable sensor will “come back” after a restart, sometimes not.

Here’s what I’ve found so far. I’ve tried to run each configuration for at least 12 to 24 hours, as it can take time for the sensor unreliability to show up. Anything marked “OK?” just means I didn’t see any unreliability in a relatively short run. I’m planning to go back to 0.30.9 for an extended run just to verify I’m getting good data on all sensors in that version.

Some examples of what I mean by “intermittent”:

And going “permanently unavailable” after switching from 0.30.9 to 0.31.1:

Happy to do further testing and provide logs.

Home Assistant Blue / SSM-D2 / Evohome colour controller / R8810A OTB / Intergas boiler

0.31.2 and 0.31.3: I’m seeing warnings in the log about every 20 minutes that seem to relate to my Honeywell DT4R thermostat (the newer, square type):

2024-01-21 10:27:54.238 WARNING (MainThread) [ramses_rf.dispatcher]  W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for src to Tx)

A section from the packet log around the same time:

2024-01-21T10:27:41.578990 000 RQ --- 18:072981 01:216136 --:------ 30C9 001 02
2024-01-21T10:27:41.616818 050 RP --- 01:216136 18:072981 --:------ 30C9 003 02024E
2024-01-21T10:27:41.617552 000 RQ --- 18:072981 01:216136 --:------ 30C9 001 07
2024-01-21T10:27:41.618030 050 RP --- 01:216136 18:072981 --:------ 30C9 003 070273
2024-01-21T10:27:41.640002 000 RQ --- 18:072981 01:216136 --:------ 30C9 001 0B
2024-01-21T10:27:41.652954 050 RP --- 01:216136 18:072981 --:------ 30C9 003 0B02F2
2024-01-21T10:27:51.854423 042 I --- 04:034692 --:------ 04:034692 30C9 003 0002C4
2024-01-21T10:27:54.236890 039 W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601
2024-01-21T10:28:01.825329 066 I --- 10:064873 --:------ 10:064873 3EF0 009 0064100A0000032200
2024-01-21T10:28:01.852302 000 RQ --- 18:072981 10:064873 --:------ 3220 005 0000000000
2024-01-21T10:28:01.872305 065 RP --- 10:064873 18:072981 --:------ 3220 005 00C0000300
2024-01-21T10:28:05.390514 039 I --- 22:012299 --:------ 22:012299 1060 003 000001

This was on 0.31.2, but I’m seeing the same warning on 0.31.3.

Hi David, on 0.31.3 the 01: device’s active_fault binary sensor seems to be permanently unavailable. I’ll leave it for 24 hours and see what happens. Does anyone else have this as well?

Just to confirm, after rolling back to 0.30.9 all my OTB sensors look good for the first hour of running.

I will do a longer run on 0.30.9 to confirm that. This is with use_native_ot: prefer. On close examination of the history I have noticed that binary_sensor.10_064873_fault_present and binary_sensor.10_064873_dhw_enabled are periodically going unavailable for exactly 1 minute but come back each time.

Apart from the OTB sensor issues and the log warnings relating to the DT4R thermostat, I didn’t see any other issues on 0.31.3.


It is ramses_cc.put_co2_level, can you not see it?

As per the release notes:

Retrieving fault logs is currently out of action, pending a re-write.


This is not unexpected - it has the device type of a DTS92(A), and is treated as such by ramses_rf, but is clearly a DT4R!

This issue is caused by assumptions I didn’t even know I was making some 3-4 years ago - I have a plan to fix this, but it will require a significant re-write…

If you like, please submit this as an issue at github.com/zxdavb/ramses_rf/issues

Removed my issue.

Yes, I tried ‘RAMSES RF: Announce an Indoor CO2 level’ / ramses_cc.put_co2_level.

In UI mode, no entity is found for the entity_id field.

In yaml mode:

service: ramses_cc.put_co2_level
data:
  entity_id: sensor.37_097277_co2_level
  co2_level: 363

the response is “Kan service ramses_cc.put_co2_level niet aanroepen. Unknown error” (“Cannot call service ramses_cc.put_co2_level. Unknown error”).

Please provide the corresponding traceback from home-assistant.log… Something like:

2024-01-22 13:44:27.173 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection] [547771902272] Error handling message: Unknown error (unknown_error) David Bonnes from 172.27.0.246 (Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36)
Traceback (most recent call last):

...

  File "/usr/src/homeassistant/homeassistant/core.py", line 691, in async_run_hass_job
    hassjob.target(*args)
  File "/config/custom_components/ramses_cc/sensor.py", line 244, in async_put_indoor_humidity
    self._device.indoor_humidity = indoor_humidity / 100  # would accept None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/ramses_rf/device/hvac.py", line 122, in indoor_humidity
    raise RuntimeError(f"Faking is not enabled for {self}")
RuntimeError: Faking is not enabled for 32:168090 (HUM)

I have identified this bug. Fix coming.

In the logs:

2024-01-21 23:26:16.401 INFO (MainThread) [ramses_rf.device.base] Faking now enabled for: 37:097277 (CO2)
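
For reference, faking was enabled for that device via the known_list in configuration.yaml - something like this (a sketch, assuming the usual ramses_cc schema):

known_list:
  37:097277: {class: CO2, faked: true}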

If needed: I am not a coder, but I know how to apply a patch.

Experiencing some other weirdness with sending commands from an impersonating remote for the ventilation unit. At first I hadn’t noticed the service had been changed to “ramses_cc.send_command”, so my automation for the fan stopped working. But even after changing it to “ramses_cc.send_command”, I’m seeing a spotty response from the ventilation unit: sometimes it will respond, sometimes it won’t.
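
For reference, the automation action now looks roughly like this - a sketch with a hypothetical remote entity and command name (the num_repeats/delay_secs values are illustrative):

service: ramses_cc.send_command
data:
  entity_id: remote.32_123456  # hypothetical faked remote
  command: auto                # a previously learned command name
  num_repeats: 3
  delay_secs: 0.05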

I still get fairly regular occurrences of zone demand entities going unavailable (e.g. sensor.04_038777_heat_demand and sensor.01_215596_03_heat_demand) - always in pairs for specific zones, so I guess it’s probably because the controller hasn’t heard a demand packet from the TRV for a while, and has therefore not transmitted a demand packet for that zone.

While I understand your reasoning behind not showing ‘stale’ information when a certain packet hasn’t been received for a while, it would be nice if it were possible to adjust the timeout, and even set it to an infinite value if required. I generally only use HA for monitoring my Evohome system (save for a couple of automations that adjust setpoints based on other triggers, e.g. turning down the heating on days when I’m working in the office), so I’d rather use something like the ‘last update time’ to flag potentially stale information in my dashboards than have the entity show as ‘unavailable’. This issue isn’t specific to any of the recent beta versions; it’s more of a general observation/opinion.
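
To illustrate, something like this template binary sensor (a sketch; the entity name and one-hour threshold are just examples) could flag staleness from the last update time instead:

template:
  - binary_sensor:
      - name: "Zone 03 heat demand stale"
        state: >
          {{ now() - states.sensor.01_215596_03_heat_demand.last_updated
             > timedelta(hours=1) }}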

However, recently (and I think probably since 0.31.x?) I’ve noticed that some of my climate entities have also been going into an ‘unknown’ state:
e.g. climate.ramses_cc_01_215596_03 currently shows null ‘modes’ in the attributes:

hvac_modes: heat, auto
min_temp: 5
max_temp: 25
target_temp_step: 0.1
preset_modes: none, temporary, permanent
current_temperature: 22.6
temperature: 18.5
hvac_action: idle
preset_mode: null
id: 01:215596_03
params:
  config:
    min_temp: 5
    max_temp: 25
    local_override: true
    openwindow_function: true
    multiroom_mode: false
  mode: null
  name: Bedroom 2
zone_idx: 03
heating_type: radiator_valve
mode: null
config:
  min_temp: 5
  max_temp: 25
  local_override: true
  openwindow_function: true
  multiroom_mode: false
schedule: null
schedule_version: null
icon: mdi:radiator
friendly_name: Bedroom 2
supported_features: 17

I’ve never noticed this happening before, so I don’t know whether it’s a new problem. When it happens, I’ve found I can ‘correct’ it by just clicking the auto mode button on the climate entity. It doesn’t seem to happen in conjunction with the issue I mentioned above of the zone demand entities going unavailable.
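
For what it’s worth, the ‘click the auto mode button’ fix can also be done as a service call (the standard HA climate service, so it’s easy to script):

service: climate.set_hvac_mode
target:
  entity_id: climate.ramses_cc_01_215596_03
data:
  hvac_mode: auto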

Edit: sorry, I forgot to add that binary_sensor.01_215596_active_fault also goes unavailable several hours after a restart. The time it stays around seems to vary between versions: I had a period where it would appear for 20 minutes and then vanish for the next 5 hrs 40 mins (0.29.x was installed at that time, I think?), while with 0.31.2 and 0.31.3 it has stayed around much longer. It’s hard to spot any pattern in the history because I did lots of restarts over the weekend, but the entity is still going unavailable.

Updated to 0.31.3 and it looks good, thank you.
Maybe this is just me, but the icons for my relays look swapped: I expected them to be closed when on and open when off.


I am still getting the warning that my config schema is not minimal, despite you advising previously that it is minimal.

Both are minor issues. Thanks for all your hard work.

Donald