Honeywell CH/DHW via RF - evohome, sundial, hometronics, chronotherm

Completely forgot the batteries are under the removable faceplate on the front… removing that, then the batteries… has cleared the fault log and it’s back to ‘ok’ state.

Thx a lot @zxdavb and @iMiMx!
Can I use this automation to update the temperature?

Do I need to update the sensor every couple of minutes?

It updates on state change, rather than per X minutes:

- id: 'Utility Evohome Faked Sensor'
  alias: Utility Evohome Faked Sensor
  mode: single
  trigger:
    - platform: state
      entity_id: sensor.ble_temperature_laundry
  action:
    - service: evohome_cc.put_zone_temp
      data:
        entity_id: climate.utility
        temperature: '{{ states("sensor.ble_temperature_laundry") | float }}'
    - delay: 00:00:01
    - service: homeassistant.update_entity
      data: {}
      target:
        entity_id: sensor.03_055510_temperature
    - service: homeassistant.update_entity
      data: {}
      target:
        entity_id: climate.utility
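One caveat with the template above: if the BLE sensor briefly goes unavailable, `| float` can push a bad value (or raise a template error on recent HA versions). A sketch of a guard condition, using the same entity names as the example above (adjust to your own):

```yaml
- id: 'Utility Evohome Faked Sensor (guarded)'
  alias: Utility Evohome Faked Sensor (guarded)
  mode: single
  trigger:
    - platform: state
      entity_id: sensor.ble_temperature_laundry
  condition:
    # skip the push entirely if the sensor has no usable value
    - condition: template
      value_template: >
        {{ states("sensor.ble_temperature_laundry")
           not in ("unknown", "unavailable") }}
  action:
    - service: evohome_cc.put_zone_temp
      data:
        entity_id: climate.utility
        temperature: '{{ states("sensor.ble_temperature_laundry") | float }}'
```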

Thx, I will use that!

Error doing job: Exception in callback SerialTransport._read_ready()

I haven’t exactly identified this bug, but a ‘fix’ is coming.

I recently received my SSM-D in the post and hooked it up to a new HA instance running on a Pi 4. I am only interested in reading data from my Evohome system and this seemed like a good way to do so, even if the ambitions behind this project are far beyond my planned usage. (I have a main HA instance running on a VMware Cluster with other Pis enabling links into, for example, my Z-Wave network. I was looking at using MQTT to push data from evohome_cc into the main HA instance.)
All seemed to work well, with lots of interesting information available from my OTB around boiler function, as well as demand info for Zones.
However, each evening for the three evenings after I got this set up, my UFH controller (HCE80) went into error with a flashing red light and all zone lights flashing yellow. This required a power cycle to clear. Yesterday I shutdown the HA instance on the Pi and all seems well again. Clearly this may just be coincidence but I suspect not.
Does this integration do some polling which may interfere with the HCE80? If so, what would be the trigger? In the first instance, I am keen to understand if the integration could be responsible. Once I understand that, I will think about next steps.
I was really pleased with all the metrics that were being collected so I’d like to get that back but not if it risks destabilising my Evohome system.
(I can’t provide versions or logs as the Pi is off but it was all newly installed as of last Friday. The SSM-D was received last week and I didn’t do anything with the firmware it was supplied with.)

(please indicate the version you are using)

The short answer from me is that I am confident that ramses_rf is not (cannot be) causing this issue.

However, you certainly provide food for thought.

I presume your HCE80 worked flawlessly for some good period of time before you got this up & running, and then promptly ‘crashed’ every evening since.

I’ll work on the basis you’re finding that the HCE80 subsequently behaves as expected, now that ramses_rf is not polling it?

No, it polls (sends a periodic query for 0005, 000C, 10E0, 1FC9, 000A and 2309), and eavesdrops (listens for, e.g., 0008, 22C9, 3150). Obviously, eavesdropping won’t cause any issues.

The codes that it sends an RQ for are well understood (although not fully understood). Importantly, none of them ask a device to modify its behaviour (that would be a W or an I, not an RQ).

In addition, I note all those are also polled against the Controller (with no issues in the last 2 years).

But I have not done a lot of testing against HCE80s, and I have no idea what their firmware will do. However, no others have reported a similar issue (@others: please report, if so).

A big issue presently, especially for people with an OTB (OpenTherm bridge, R8810A/R8820A), is that ramses_rf is excessively chatty - this can cause a lot of corrupt packets due to collisions with other systems using the 868MHz band.

The above will be addressed shortly - possibly the above could have a part to play, but it simply wouldn’t fit the pattern you describe.

I agree completely, but I cannot help you get to the bottom of it without a packet log. If there is an answer, it may well be in there.

For now, you can consider starting ramses_rf in a listen-only mode:

evohome_cc:
  ramses_rf:
    # disable_discovery: true
    disable_sending: true
    # enable_eavesdrop: true    

…and see where that takes you.

You will miss out on some stuff, but this is a good first step. If that is OK, enable eavesdropping, then stop disabling sending, but disable discovery.
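The staged approach above might look like the following (a sketch, assuming the option names shown earlier - step 2 enables eavesdropping while still not sending; step 3 re-enables sending but keeps discovery off):

```yaml
# Step 2: still listen-only, but with eavesdropping enabled
evohome_cc:
  ramses_rf:
    disable_sending: true
    enable_eavesdrop: true
```

```yaml
# Step 3: sending re-enabled, but discovery (polling) disabled
evohome_cc:
  ramses_rf:
    disable_discovery: true
    enable_eavesdrop: true
```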

The other thing you can do is remove all the above, and simply block (denylist, blocklist) the HCE80 so that ramses_rf can’t see it as a device:

evohome_cc:
  # enforce_known_list: true
  block_list:
    - 02:123456

Note that using a known_list (allowlist, acceptlist) is preferred, and is strongly recommended in any case.

evohome_cc:
  enforce_known_list: true
  known_list:
    - 01:123456
    - 10:123456
    ...

FWIW, I’ll add that there is an evoGateway integration for HA, that leverages ramses_rf and pushes MQTT (although I understand it uses a significantly older version of ramses_rf, 0.14.24).

Thank you for the quick and detailed response. I confirm that the HCE80 had been working without issue for around two weeks, experienced errors daily (once per day) for 3 days after I introduced a new HA instance with evohome_cc and has then been fine for 1 day. A small sample size but with 100% correlation. The UFH is new, the OpenTherm Bridge is newish (around 2 months) and the bulk of the rest of the Evohome has been running in various forms for many years.

I will leave the Pi off for a couple of days and then switch back on. I will only alter the config if required for collecting logs. I will then monitor to see if the error occurs.

I will then alter the config to restrict evohome_cc to my devices and to specify the controller, and then monitor again. (There was only one unknown device found previously: a wireless thermostat.)

I will then alter the config to disable sending, and then monitor again.

I’ll report back as soon as I have something useful to report.

Yes - you’ve got a lot to be getting on with.

Note: the latest 0.17.x should be more reliable (less buggy), but 0.18.x has a lot of UFH-specific improvements, albeit a WIP.

For everyone with an OpenTherm bridge (OTB, R8810A, R8820A): I need some help.

You may have noticed two copies of certain sensors (we’ll ignore binary sensors for now), e.g.:

  • sensor.10_048122_ot_boiler_output_temp, and
  • sensor.10_048122_boiler_output_temp (NB: no _ot)

Attached is a screengrab of three such sensors, graphed in pairs:

You can see that - except for sampling timings - they are identical.

Can everyone confirm that all these pairs have the same value?

  • boiler_output_temp
  • boiler_return_temp
  • boiler_setpoint
  • ch_max_setpoint
  • ch_water_pressure
  • dhw_flow_rate
  • dhw_setpoint
  • dhw_temp
  • outside_temp
  • rel_modulation_level

If not, please elaborate on what you have got, preferably with a packet log.

The plan is to ditch one of each pair, to reduce RF congestion - unless I hear otherwise, I may ditch the one that works for you in favour of the one that doesn’t!

Values are consistent for me between OT and non-OT sensors. Only difference I see is the precision.

For example, OT ch_water_pressure is to 1 d.p., whereas the non-OT version is to 2 d.p. Conversely, for OT dhw_temp I see 2 d.p., whereas non-OT I see 0 d.p.
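Not part of the thread, but a quick way to check whether an OT/non-OT pair really differs only in precision is to compare the two readings after rounding both to the coarser number of decimal places - a small Python sketch (the readings are illustrative):

```python
def pair_matches(a: float, b: float, dp_a: int, dp_b: int) -> bool:
    """Return True if two sensor readings agree once both are rounded to
    the coarser precision (fewer decimal places) of the pair."""
    dp = min(dp_a, dp_b)
    return round(a, dp) == round(b, dp)

# e.g. OT ch_water_pressure at 1 d.p. vs the non-OT version at 2 d.p.
print(pair_matches(1.5, 1.52, 1, 2))    # both round to 1.5
# e.g. OT dhw_temp at 2 d.p. vs the non-OT version at 0 d.p.
print(pair_matches(48.75, 49.0, 2, 0))  # both round to 49
```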

So whilst a bit bored on a work call, I switched on the Pi to check versions and grab the logs:

HACS 1.19.3
Evohome RF 0.17.13

Although I have read the top and tail of this thread, I’m missing your preferred method to receive the packet.log files; please forgive and enlighten me.

Since switching off the Pi there have been no further instances of the UFH controller going into an error state. But I have switched the Pi back off again for now in case the logs can shed any light on whether anything is amiss.

I have also just read through “Notes on the Schema” (GitHub zxdavb/ramses_rf) which mentions “no simple means to learn the sensor for a zone, when that sensor is the controller”. The sensor for one of the UFH Zones (Hall) is the Controller; could this be relevant?

No.


@jonboy and anyone else with UFH - please use the latest version of 0.18.x (the pre-release/beta stream) that you can.

Please report any bugs you find, and provide any packet logs you can.

There are more improvements to come.

I’m very happy to help test and debug. To do so I need some pointers unfortunately:

  1. How do I get my logs to you?

  2. What settings would be most helpful for you for capturing logs for the various Evohome components? (I have logger using warning by default.)

  3. Can you point me to instructions on how to switch to the 0.18 stream? (I haven’t yet looked at this, so apologies if it’s obvious.)

I’ll figure it out myself eventually, I hope, but I’ve tried to find a reference for sending you the logs and am failing so far.

Thanks in advance.

Hi, firstly thanks for this, i’m finding the data really interesting!

I wanted to sanity check some assumptions for my specific use case.

I have the SSM-D connected to a raspberry pi running HASS OS, it’s working great, but I have a couple of challenges:

  1. I have two boilers supplying the house, and therefore two controllers with their own DHW and actuators/zone valves/thermostats. I’ve specified one as my controller and have the correct schema for that controller. I see devices for the other controller, but they’re not associated with anything useful.
  2. The devices for one controller are quite spread out, and I find I’m not getting reliable visibility of messages from some of the more distant devices, even with the Raspberry Pi placed close to the controller, which does seem to receive these messages (although a few show poor signal on an RF comms check).

Ideally, I’d like to have both schemas visible in Home Assistant, and also to improve the reliability of receiving messages from devices on the periphery.

Could I achieve this by having another raspberry pi with an SSM-D, with one instance tied to one controller and one to the other, spaced apart in the house so they both cover the end of the house nearest to their respective controller? My understanding is that they’d both listen and receive messages from anything in their allowed list, regardless of whether they’re in the schema for the controller defined, so I’m hoping this would allow for more complete visibility of all devices, and also allow both schemas to be built on the separate instances.

I assume i’d need to use something like remote home-assistant (GitHub - custom-components/remote_homeassistant: Links multiple home-assistant instances together) to then collate the events and devices into a single view.

Should this work? And would it address both issues I’ve got currently?

Thanks in advance for any advice!

So finally something is working again… It took me some time to get it up and running. For sure I thought it would be something software-wise, but as it turned out (I do not quite understand what was going on) it was a combination of things related to the HGI80. That is some nasty piece of electronics… I ordered a NanoCul this week, since I think that is (hopefully) going to be more stable.

  • Saw that my items were failing, so to say not updating
  • Strangely, in the error log of HASS I spotted nothing really strange; it seemed to be doing its thing
  • But still my items in HASS were not updating
  • Saw that the packet log was also frozen, meaning no packets were recorded (I still really don’t understand this in relation to the HASS log file, which showed decoding of messages)
  • Restored older versions to see if this would help, to no avail. Still freezing
  • In the meantime it seemed to freeze earlier than before, no idea why
  • Suddenly I remembered the power failure I had some time ago
  • I have a UPS feeding my network stuff, but the NAS had shut itself down, because that is what it does for safety
  • BUT the VMs that run on this NAS (still haven’t fixed this) were improperly shut down (I need to install a NUT client that talks to the Synology → in my opinion, Synology, you could have done a better job! Just give the user an interface to this, to safely shut down the NAS…)
  • The VMs were (phew…) running fine, so I was thinking it was fixed
  • But as it turned out, the HGI had never been powered down, so that was the first thing I did, to properly reset that tricky thing
  • After this reset it would not identify my HGI80 anymore (no device allocated) - why??
  • It would ONLY come to life after I had found (it was a bit of a search) the specific ti_3410.fw file and restored it to the firmware folder (for anyone needing this fw file, please contact me, I can share it)
  • As it turned out, this fw file had never been there; how on earth the HGI could still have run puzzles me

Now it has been running for the last few hours (I will give it some more time) and I trust it will not stall anymore. This HGI is a tricky piece… since all my other (FTDI / Z-Wave) stuff runs without a problem… We’ll see if the NanoCul is more stable. The only thing I see (running 0.8.12) is below, just once:

2022-01-27 15:08:59 ERROR (MainThread) [homeassistant] Error doing job: Exception in callback SerialTransport._read_ready()
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.9/site-packages/serial_asyncio/__init__.py", line 119, in _read_ready
    self._protocol.data_received(data)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/transport.py", line 477, in data_received
    self._line_received(dtm, _normalise(_str(raw_line)), raw_line)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/transport.py", line 460, in _line_received
    self._pkt_received(pkt)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/transport.py", line 704, in _pkt_received
    self._qos_received(pkt)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/transport.py", line 762, in _qos_received
    logger_rcvd(msg, wanted=wanted)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/transport.py", line 723, in logger_rcvd
    pkt._hdr or str(pkt),
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/frame.py", line 353, in _hdr
    self._hdr_ = pkt_header(self)
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/frame.py", line 480, in pkt_header
    return f"{header}|{pkt._ctx}" if isinstance(pkt._ctx, str) else header
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/frame.py", line 342, in _ctx
    self._ctx_ = self._idx
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/frame.py", line 364, in _idx
    self._idx_ = _pkt_idx(self) or False
  File "/usr/local/lib/python3.9/site-packages/ramses_rf/protocol/frame.py", line 439, in _pkt_idx
    raise InvalidPayloadError(
ramses_rf.protocol.exceptions.InvalidPayloadError: Corrupt payload: Packet idx is 01, but expecting no idx (00)

Hi, I have 2 controllers off one boiler, using 2 x Raspberry Pi; they work well in my case. I’ve got a main HassOS VM and am using remote_homeassistant to push the evohome sensors and stuff into my main HassOS.
I can’t help much with the placement of the Pis, though - both of mine are next to each other, with one controller on the other side of a wall and the other controller downstairs.
The only thing I would say is, if you want to use any automations, use the RPi that is for that controller.


boiler_output_temp - same
boiler_return_temp - not available
boiler_setpoint - same
ch_max_setpoint - same
ch_water_pressure - different decimal places but similar data
dhw_flow_rate - not available (system boiler)
dhw_setpoint - with _OT has correct value, without is “Unavailable”
dhw_temp - not available (but may only be available during a reheat which is infrequent)
outside_temp - not available (no sensor)
rel_modulation_level - same

Thanks, it’s helpful to know that general setup works.

This has now made me wonder whether I could use such a setup to effectively relay messages - by faking them on the instance nearest to the controller they’re bound to, helping to improve the reliability of those signals reaching the controller they’re intended for. If that would work, I could site the two Evohome controllers differently and enhance the coverage they get by using the two evohome_cc instances to relay messages from more distant devices.

I guess to do this I’d need a carefully defined known_list on each instance, so that each would only listen to messages from the devices local to it, and then fake those devices bound to the far controller on the instance nearest to it.
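For the two-instance idea, the split might look something like this (a sketch only - the device IDs are placeholders, and each instance admits only its local controller and the devices near it):

```yaml
# Instance A (sited near controller 01:111111)
evohome_cc:
  enforce_known_list: true
  known_list:
    - 01:111111
    - 04:111111   # e.g. a TRV local to this end of the house
```

```yaml
# Instance B (sited near controller 01:222222)
evohome_cc:
  enforce_known_list: true
  known_list:
    - 01:222222
    - 04:222222
```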

I’ve no idea if this is feasible, I wonder if it could be?