Honeywell CH/DHW via RF - evohome, sundial, hometronics, chronotherm

I’ve added the below to my configuration file and now seem to have lots of messages in my Home Assistant logs; is that where I find what you’re looking for?

  logger:
    default: warn  # prefer warn over info, avoid debug
    logs:
      # homeassistant.core: debug  # to see: Event state_changed, or not
      # homeassistant.loader: info  # You are using a custom integration for evohome_cc...
      # homeassistant.setup: info  # Setting up evohome_cc

      # custom_components.evohome_cc: info  # use info for Schema =

      # ramses_rf: debug  # for engine state
      ramses_rf.message: info  # for MSGs received (incl. sent & subsequently echo'd)
      # ramses_rf.protocol: warn
      # ramses_rf.protocol.protocol: info  # for PKTs sent (excl. retries) & received
      # ramses_rf.protocol.transport: info  # for RF Tx & Rx

      # ramses_rf.devices: debug
      # ramses_rf.zones: debug

The file I am looking for is this one:

  packet_log: /home/dbonnes/home-assistant/.config/packet.log

How you get a copy of it to me depends upon what “Installation Type” of HA you are running. For example, for “Home Assistant OS”, I use the VS code add-on.

Thanks, I’ll try and work it out.

At the minute, rebooting Home Assistant solves the problem for a while.

I understand some people are using ser2net - just be aware that support for this is somewhat experimental.


  serial_port: rfc2217://localhost:5001


# HGI80-like ser2net connection that allows multiple connections
connection: &con00
  accepter: telnet(rfc2217),tcp,5001
  timeout: 0
  connector: serialdev,/dev/ttyUSB0,115200n81,local
  options:
    max-connections: 3

There are known issues with other versions of ser2net, but it does work for me.

Someone asked:

Either way, my 2nd problem: the heat demand for the TRVs and the Controller frequently shows “unavailable”, and a bit later they turn back to normal. This happens for ALL my TRVs as well as the Controller, no matter the distance to my nanoCUL. Correspondingly, their graphs show gaps, see pictures. And funnily enough, all temperatures from all the STAs never show “unavailable”; their graphs have no gaps.

Do you have any idea what is going on? Is there something I’m doing wrong in the configuration?

The answer is…

I have implemented ‘timeouts’ for packets, so that stale data is ignored.

An evohome system will spontaneously send I packets for all sorts of state data, and ramses_rf will augment that by sending RQ packets - the corresponding RP packets become part of the state data too.

If you stop ramses_rf from sending these RQ packets, or if there is another reason why they are not sent, then it is more likely that the state data will expire - you will see the sensor become unavailable.

I’ve got the packet logs working but it’s too big to send to you?

dropped packets are a normal thing (RAMSES II has no QoS) - and worse, the controller can pick up a packet that ramses_rf doesn’t, and vice-versa

Here is an example:

~/c/ramses_rf (master) [0|1]> cat packet_2021010-08.log | grep '069003.* 3150 ' | python parse


00:40:59.030 || TRV:069003 | CTL:197498 |  I | heat_demand      |  07  || {'zone_idx': '07', 'heat_demand': 0.0}
01:20:57.995  I --- 04:069003 --:------ 01:197498 3150 002 0700 # has expired (200%)
01:20:57.995 || TRV:069003 | CTL:197498 |  I | heat_demand      |  07  || {'zone_idx': '07', 'heat_demand': 0.0}
02:20:57.473  I --- 04:069003 --:------ 01:197498 3150 002 0700 # has expired (300%)
02:20:57.473 || TRV:069003 | CTL:197498 |  I | heat_demand      |  07  || {'zone_idx': '07', 'heat_demand': 0.0}
02:40:57.574 || TRV:069003 | CTL:197498 |  I | heat_demand      |  07  || {'zone_idx': '07', 'heat_demand': 0.0}
03:00:56.158 || TRV:069003 | CTL:197498 |  I | heat_demand      |  07  || {'zone_idx': '07', 'heat_demand': 0.0}

These 3150 packets should be sent at least every 20 minutes.

After the first packet was received at 00:40:59, the next packet (expected at approx 01:00:00) was never received, so the first packet had expired before the third expected packet arrived at 01:20:57.

Note (things have been simplified, above): the actual expiry of the packet would be as follows:

01:03:15.718  I --- 04:069003 --:------ 01:197498 3150 002 0700 # has expired (111%)

That is, packets are expired at 110%, not 200%, and so this particular heat demand would be unavailable from 01:03:15 to 01:20:57.
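The expiry rule can be sketched in Python (a simplification, not the actual ramses_rf code; the 20-minute lifetime is the 3150 interval mentioned earlier):

```python
from datetime import datetime, timedelta

# A simplified sketch of the packet-expiry rule described above -
# not the actual ramses_rf implementation. A packet that should
# recur every `lifetime` is treated as expired once its age
# exceeds 110% of that lifetime.

LIFETIME = timedelta(minutes=20)  # 3150 (heat_demand) packets: every ~20 mins
EXPIRY_FACTOR = 1.1               # expired at 110% of lifetime

def age_percent(rx_time: datetime, now: datetime) -> int:
    """The packet's age, as a percentage of its expected lifetime."""
    return round(100 * (now - rx_time) / LIFETIME)

def has_expired(rx_time: datetime, now: datetime) -> bool:
    return (now - rx_time) > EXPIRY_FACTOR * LIFETIME

# The heat_demand packet received at 00:40:59, as in the log above:
rx = datetime(2021, 10, 8, 0, 40, 59)
print(age_percent(rx, datetime(2021, 10, 8, 1, 3, 16)))  # ~111%: expired
```

So with no replacement packet by 01:03, the 00:40:59 packet passes 110% of its lifetime and the sensor goes unavailable until the next packet lands at 01:20:57.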

Place it on a file-share website somewhere, and PM me the link.

@zxdavb I think that a fan entity is the right kind of entity for an Itho fan. What are the benefits of a climate entity? What kind of logging do you need?

I’ve got a Nuaire DRI-ECO-LINK-HC PIV with:

  • DRI-ECO-2s switch
  • DRI-ECO-RH sensor
  • DRI-ECO-CO2 sensor

Would like to be able to use the boost function through automation.

TBH, I think an Itho ventilation system is more Climate than Fan - have a look at these two links:

Would you expect it to be implemented as a Fan, or Climate entity, or both?

For example, Climate includes humidity and temperature, whereas a Fan does not.

However, a Climate entity doesn’t really have fan speed like the Fan entity does.

For now, just a packet log where the Itho devices are not filtered out. It will help if you use the switch a few times.

@stevieb12345 See the above for you - the log you sent me has the PIV in it, but neither sensor, nor the switch.

greyed out

What am I missing? The controls get greyed out. This is the log message:

Logger: ramses_rf.protocol.message
Source: /usr/local/lib/python3.9/site-packages/ramses_rf/protocol/
First occurred: October 9, 2021, 10:55:49 PM (463 occurrences)
Last logged: 1:20:50 PM

I --- 01:062035 --:------ 01:062035 0009 003 FC00FF # has expired (187%)
I --- 04:024680 --:------ 01:062035 2309 003 000320 # has expired (127%)
I --- 04:258712 --:------ 01:062035 2309 003 057EFF # has expired (135%)
I --- 01:062035 --:------ 01:062035 0009 003 FC00FF # has expired (196%)
RP --- 10:032432 01:062035 --:------ 3220 005 0040130000 # has expired (167%)

I have PMed you the packet.log
It gets fixed with a restart but comes back again

Many Thanks

Generally, you can ignore these expired messages. Do you have any other errors elsewhere?

@Mahmoud-Eid Please use:

  enforce_known_list: true

I cannot recommend strongly enough that people do this.

See: 4. Config (reliability) · zxdavb/evohome_cc Wiki
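For illustration only - the exact schema is on the wiki page linked above, and the `known_list` layout below is my assumption - an allow list built from the device ids seen in the log excerpt earlier might look like:

```yaml
# Assumed layout - check the wiki page for the exact schema
enforce_known_list: true
known_list:
  - 01:062035  # controller
  - 04:024680  # TRV
  - 04:258712  # TRV
  - 10:032432  # OTB
```

With the list enforced, packets from any other device id are simply ignored.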


Thank you
I must have missed that
thank you again for such amazing work

I now have a few extra entities.

  • 30:079129 (boost_timer) doesn’t work
  • 30:079129 (fan_rate) always unavailable
  • 30:079129 (relative_humidity) readings coming through OK

DRI-ECO-RH sensor:
  • 32:168240 (relative_humidity) sending readings to 30:079129 OK
  • 32:168240 (temperature) works OK

DRI-ECO-2s switch?
  • 32:166025 (relative_humidity) always unavailable
  • 32:166025 (temperature) always unavailable

DRI-ECO-2s switch?
  • 32:172522 (relative_humidity) always unavailable
  • 32:172522 (temperature) always unavailable

I’ll need to see a packet log to sort this out, I’m afraid (I think you’ve PM’d me one).

So, here is the challenge for Itho & Nuaire HVAC systems…

With Honeywell central heating / hot water (CH/DHW) systems, you can tell what type of device you’re dealing with by looking at the first part of its name, for example:

  • 13:123456 starts with 13, and so is a BDR91 (BDR91A or BDR91T)
  • 10:123456 starts with 10, and so is an OTB (R8810, or R8820)

With the ventilation system, that does not appear to be the case - devices need to be fingerprinted by what they’re saying, rather than their device id.
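For illustration, the CH/DHW prefix lookup amounts to something like this (the mapping is a partial sketch, not the full table ramses_rf uses):

```python
# Partial sketch of inferring a CH/DHW device's type from its id
# prefix, as described above - not the full mapping ramses_rf uses.
DEVICE_TYPES = {
    "01": "CTL (evohome controller)",
    "04": "TRV (e.g. HR92)",
    "10": "OTB (R8810, or R8820)",
    "13": "BDR (BDR91A, or BDR91T)",
}

def device_type(device_id: str) -> str:
    """Return the device type for ids like '13:123456', if known."""
    prefix, _, _ = device_id.partition(":")
    # HVAC (Itho/Nuaire) devices can't be typed this way - their id
    # prefixes don't reveal their role, hence the fingerprinting below
    return DEVICE_TYPES.get(prefix, "unknown")

print(device_type("13:123456"))  # BDR (BDR91A, or BDR91T)
```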

This necessitates a change in the code - a bit of an architectural change…

Please bear with me.

@stevieb12345 Please send another log, but where the switch has been used.

32:168240 humidity sensor
32:166025 CO2 sensor
32:172522 switch?

06:46:26.546 ||  32:166025 | NUL:------ |  I | device_info      |      || {'unknown_0': '0001C85701016CFFFF', 'date_2': '0000-00-00', 'date_1': '2016-06-17', 'description': 'VMS-23C33',    '_unknown_1': '00000000000000000000'}
14:20:44.479 ||  30:079129 | NUL:------ |  I | device_info      |      || {'unknown_0': '0001C90011006CFEFF', 'date_2': '0000-00-00', 'date_1': '2016-09-09', 'description': 'BRDG-02JAS01', '_unknown_1': '00000000000000'}
14:21:14.518 ||  32:168240 | NUL:------ |  I | device_info      |      || {'unknown_0': '0001C85803016CFFFF', 'date_2': '0000-00-00', 'date_1': '2016-09-12', 'description': 'VMS-23HB33',   '_unknown_1': '000000000000000000'}

I would be curious to know what features people may be hoping for.

Please keep reporting bugs!

A little late, but we became parents and the time for HA became a bit sparse.

I have no bugs to report - everything is working very well. My OTB doesn’t seem to offer any more information than is already implemented. The fake sensors work great, and as it is working so well I found myself adding more zones (which is relatively easy with underfloor heating). My setup worked reasonably well before, but adding zones cheaply and easily makes for fine-grained control. Again, thank you for all your efforts - it is really appreciated!

As for feature requests, I have quite a few ideas, and I don’t know what would be possible and what isn’t:

  • Fake BDR91 (using, for example, a Shelly 1 as a fake BDR91; maybe it would be possible to make any switch-domain device a potential ‘target’)
  • Group entities from the same evohome device in a HA device (so the temperature and the battery level of a thermostat would be grouped in one device)
  • More zone info when combined with underfloor heating. Currently, the climate entities don’t show heat_demand for zones heated with underfloor heating (the attribute is always null).
  • A bit more of the above, but very likely protocol-limited: make all underfloor heating zones’ status information available in HA, so I can see if a zone is ‘open’ or ‘closed’ for all 5 or 8 zones (with the HCE80 extension). To take it even further: control the status of the relay (in our case it needlessly switches on/off, as we have no separate water pump for the UFH).
  • End goal (but likely not possible): the ability to completely fake a controller and make the evohome unit obsolete (probably very few people’s cup of tea, but the evohome controller has some quirks in my specific setup and I would love to get rid of them).