Honeywell CH/DHW via RF - evohome, sundial, hometronics, chronotherm

No - the controller believes the faked sensor is a real device - but evohome_cc needs to know it is fakable (its temperature can be set).

I am sorry - I understand now that I will have caused confusion.

The cached schema is in .storage/evohome_cc.json - it is used if you have:

evohome_cc:
  restore_state: true

Your hand-crafted schema is in configuration.yaml, something like:

evohome_cc:
  schema:
    controller: 01:145038
    zones:
      07: {"sensor": "34:092243", "is_faked": true}

Tip: you can make any sensor fakable, even a real/physical device - but if you are using a real device, make sure you turn it off (take its batteries out).

The hand-crafted schema should really be minimal - I would limit it to the following (see the sketch after this list):

  • controller (always)
  • OTB (if any, because they can’t be discovered)
  • older TRVs (HR80s) (because they can’t be discovered)
  • and non-evohome systems (they won’t support discovery)
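
A minimal sketch along those lines, reusing the example IDs from above (the comments mark what is illustrative):

evohome_cc:
  schema:
    controller: 01:145038  # always declare the controller
    zones:
      07: {"sensor": "34:092243", "is_faked": true}
    # beyond this, add entries only for devices that can’t be
    # discovered, e.g. an OTB or older HR80 TRVs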

Finally, a schema can be constructed via discovery (recommended) & eavesdropping (avoid, especially without an allow_list).

What should happen is this:

  1. if restore_state is enabled and a cached schema exists, use that; otherwise use the hand-crafted schema,
  2. then perform discovery (enabled by default), then eavesdrop (disabled by default) - see the sketch below
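
As a sketch in configuration.yaml - note that the key names for these switches (config, disable_discovery, enable_eavesdrop) are my assumption for this version and may differ:

evohome_cc:
  restore_state: true        # 1. use the cached schema, if one exists
  config:
    disable_discovery: false # 2. discovery is enabled by default
    enable_eavesdrop: false  #    eavesdropping is disabled by default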

Discovery is invoked whenever it sees a controller; eavesdropping occurs whenever it sees a device.

I’ve read about it before in this thread, but it didn’t occur to me that it would be the easiest solution. I just didn’t think of that. I’ve added this situation to the FAQ on the wiki, if you don’t mind.

And I am aware that the faked sensors are very experimental. I don’t mind testing it - I don’t need heating in that area right now. From October on I will need it; if it doesn’t work by then, I have other options to add a sensor :). But I really like the whole evohome_cc/ramses_rf package with all the possibilities out there - being less dependent on the cloud for monitoring is already a great advantage.

Thx for the explanation. So in my situation, where I have 3 orphans in my cached schema, I can first set restore_state: false in the configuration, restart Home Assistant and let it learn the schema via discovery and eavesdropping.
After that I remove restore_state: false from the config (or set restore_state: true, but I think that’s the default setting) and restart Home Assistant.
After this my orphans should be gone and my fake sensor will be relearned, because the controller thinks it’s a real sensor. Maybe I have to disable the automation that updates my fake sensor’s temperature until the sensor is relearned.
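
A sketch of that temporary step (only the relevant key is shown - the rest of the evohome_cc block stays unchanged):

evohome_cc:
  restore_state: false  # temporary: discard the cached schema and relearn it via discovery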

Or is it simpler to remove the orphans in the cached schema and restart?
change this:

                },
                "orphans": [
                    "00:110277",
                    "00:025605",
                    "00:256005"
                ]
            },

to this:

                },
                "orphans": [ ]
            },

Clearly this is a mess - I will be refactoring it all…

Theory says: if these orphans are now filtered out with an allow/block list, and you wait ‘a while’, then they will definitely end up ‘tombstoning’ and disappearing from your cached schema.

  • after that, just delete the device from the HA entity registry…

If they are not filtered out then they may persist, or - more likely - others may appear.
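
As a sketch, assuming allow_list takes a simple list of device IDs (the IDs below reuse examples from earlier in the thread):

evohome_cc:
  allow_list:
    - 01:145038  # controller
    - 34:092243  # faked sensor
    # anything not listed is filtered out, so the corrupt 00:xxxxxx
    # addresses can no longer re-enter the schema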

Or is it simpler to remove the orphans in the cached schema and restart?

That would do it, unless they’re still mentioned in the packet list.

They are mentioned in the packet list. I will wait for a while - now that I have an allow_list, we will see if they disappear from the schema.
I have already removed them in Home Assistant; that was possible after a while.

Can you show me the packets where they’re mentioned - are they less than 24h old?


@TheMystery @DanRP @stuiow

For those who feel capable, you could try this to see if the allow_list bug persists:

In ramses_rf/transport.py, change:

return wanted and all(d.id not in self._exclude for d in pkt_addrs)

… to:

return wanted or not all(d.id not in self._exclude for d in pkt_addrs)

They are all in the older packets. 03:256005 is my fake sensor - it looks like it has something to do with that, because they are mentioned on the same lines:


                "orphans": [
                    "00:110277",
                    "00:025605",
                    "00:256005"
                ]
            },

           "packets": {
                "2021-05-08T22:14:44.498121": "000  I --- 03:256005 01:155341 --:------ 1FC9 006 0023090FE805",
                "2021-05-11T02:24:06.205578": "000  I --- 00:110277 --:------ 03:256005 30C9 003 0007DA",
                "2021-05-13T16:31:22.459738": "043 RP --- 01:155341 18:203293 --:------ 0004 022 0000576F6F6E6B616D65720000000000000000000000",
                "2021-05-13T16:31:25.665968": "043 RP --- 01:155341 18:203293 --:------ 0004 022 020048616C0000000000000000000000000000000000",
                "2021-05-13T16:31:27.908187": "044 RP --- 01:155341 18:203293 --:------ 0004 022 05004761726167650000000000000000000000000000",
                "2021-05-13T16:31:28.346475": "043 RP --- 01:155341 18:203293 --:------ 0004 022 010042696A6B65756B656E0000000000000000000000",
                "2021-05-22T15:35:30.793603": "000  I --- 00:025605 --:------ 03:256005 30C9 003 00078A",
                "2021-05-25T23:28:42.409826": "000  I --- 00:256005 --:------ 03:256005 30C9 003 000758",
 

The above are simply corrupt packets - ADDR0 is corrupt, although ADDR2 is OK. These packets caused the problem, and the problem is persisting because they are not expiring…

So I will harden this code - expire all packets after at most 1 hour.


@zxdavb I have implemented this RF method for monitoring Evohome. I’ve long been using the official cloud-based integration, but am very interested in excluding the cloud. I followed the instructions and all works well. Very happy. I have one question.
I have identical radiator valves in different rooms, and one reports an attribute that the other doesn’t. Is there any obvious reason why this should be? I’ve attached the attribute list for the two rads; the difference is the presence of the attribute ‘hvac_action’.

Thanks, and brilliant integration!

Sorry, behaviour appears to stay the same.

release 0.9.11 now available - good luck all

(for a while, I will concentrate on smaller bugfix releases)

You may see messages like:

2021-05-27 22:28:48 WARNING (MainThread) [ramses_rf] Creating a non-allowed device_id: 13:049798 (consider addding it to the allow_list)

Updated to 0.9.11; so far looking good - all entities are available and the orphans are removed from the schema.
Only the names of the zones are gone from my cards in Home Assistant, but maybe that has to be relearned.

Found 1 bug: hvac modes are gone after an hour - this is a problem we had before.

OK, I got this.

Related to the above, and another feature I’ve yet to implement.

Just upgraded from 0.9.4 → 0.9.11 myself and spotted this:

If you click on the card (in my case the Simple Thermostat), it shows the name.

If you click on the settings (cog) icon, the name shows greyed out - so it seems to be present. If you override that and explicitly populate the box, then the names show again.

About an hour after upgrading from 0.9.4 → 0.9.11 I seem to be losing state (Unknown) on the climate entities, and on the stored hot water.

Heat demand and the other sensors seem to be ok at the moment, showing expected values.

Will try resetting the state (restore_state: false) and restarting, and see if that has any impact.

UPDATE: State was relearnt, i.e. no Unknown, but then after 1 hour it appeared again.

A fix is coming.

Whenever I leave restore_state as true, Home Assistant crashes quickly after boot. Any idea why this is happening, @zxdavb? This has been happening for quite a few versions now.

2021-06-01 13:27:44 WARNING (MainThread) [ramses_rf.transport] PktProtocolQos.send_data(RQ|02:001107|10E0|00): boff=0, want=RP|02:001107|10E0|00, tout=2021-06-01 13:27:44.608375: EXPIRED (0/0)
2021-06-01 13:29:48 ERROR (MainThread) [homeassistant] Error doing job: Task exception was never retrieved
Traceback (most recent call last):
  File "/config/custom_components/evohome_cc/__init__.py", line 271, in async_update
    new_domains = self._get_domains()
  File "/config/custom_components/evohome_cc/__init__.py", line 227, in _get_domains
    _LOGGER.info("Params = %s", evohome.params)
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/systems.py", line 151, in params
    params = super().params
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/systems.py", line 194, in params
    params = super().params
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/systems.py", line 576, in params
    return {**super().params, ATTR_ZONES: {z.idx: z.params for z in self._zones}}
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/systems.py", line 600, in params
    ATTR_UFH_SYSTEM: {
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/systems.py", line 601, in <dictcomp>
    d.id: d.params for d in sorted(self._ctl.devices) if d.type == "02"
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/devices.py", line 835, in params
    "circuits": self.setpoints,
  File "/usr/local/lib/python3.8/site-packages/ramses_rf/devices.py", line 817, in setpoints
    for c in self._setpoints.payload
AttributeError: 'NoneType' object has no attribute 'payload'
2021-06-01 13:29:48 ERROR (MainThread) [ramses_rf.message] Message(RP|01:187666|313F|00), received at 2021-06-01 13:24:49.751674: msg has tombstoned (2021-06-01 13:29:48.982111, 0:00:03)