About the caching: I had it off for the restart after installing the new version, as I thought this was required. I didn’t want to restart too soon after the initial reboot, so it was still set to False; I’ve changed it now.
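For reference, the relevant bit of my configuration.yaml now looks something like this - a minimal sketch only, as restore_cache is my reading of the option being discussed here, and the exact key names may differ per version:

```yaml
# sketch only: 'restore_cache' is my assumption of the caching
# option discussed above - check the ramses_cc docs for your version
ramses_cc:
  serial_port: /dev/ttyUSB0  # your gateway's port
  restore_cache: true        # was false for the post-install restart, re-enabled now
```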
I’ll have a look at which fan sensors have completely disappeared and/or try to “force” them by using the remote, for instance. It might just be that they were cleared from the cache and haven’t been “seen” yet. These were still very early observations though, so they might be back sooner rather than later.
Too bad that it is now best practice not to add those to the dashboard. I understand it from a design perspective, but I really like the auto-populated dashboard to have an overview of everything in the system - it’s my go-to for troubleshooting. The window and battery ones are also quite useful to have.
No, it’s 03:255542. It’s a thermostat-type device without an option to set the temperature - or, for lack of a better description, a room temperature sensor - made by Honeywell. It does appear to be read by ramses, as the temperature appears on the climate card just fine. The entities sensor.03_255542_temperature and binary_sensor.03_255542_battery_low, however, are always unavailable.
It appears this one was replaced by “sensor.37_182456_remaining_mins”. Most others appear to have come back to life, except speed cap, but that one was quite useless anyway.
I’m still seeing problems with some OTB sensors in 0.31.3. I’ve been running a series of tests on 0.31.2 using different values for use_native_ot, but no obvious pattern is emerging. So far I’ve seen the same behaviour on 0.31.3. Of the OTB sensors that worked reliably on 0.21.40 and 0.30.9, some are reliable on 0.31.x and some are not:
These sensors seem to work reliably on all versions:
The unreliability can be either permanent or intermittent - i.e. a sensor may show good data after a restart but become unavailable after a few hours, or flip in and out of being unavailable. Sometimes an unavailable sensor will “come back” after a restart, sometimes not.
Here’s what I’ve found so far. I’ve tried to run each configuration for at least 12 to 24 hours as it can take time for the sensor unreliability to show up. Anything marked “OK?” just means I didn’t see any unreliability in a relatively short run. I’m planning to go back to 0.30.9 for an extended run just to verify I’m getting good data on all sensors in that version.
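For anyone wanting to repeat these runs, this is roughly how I’ve been setting the value under test - a sketch only, as the exact placement of use_native_ot (here on the OTB’s known_list entry) is my assumption of the schema, and 10:064873 is my OTB’s id:

```yaml
# sketch: varying use_native_ot between test runs; its placement
# on the known_list entry is my assumption - verify against the docs
ramses_cc:
  known_list:
    10:064873:
      class: OTB
      use_native_ot: prefer  # changed per test run
```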
0.31.2 and 0.31.3: I’m seeing warnings in the log about every 20 minutes which seem to relate to my Honeywell DT4R thermostat (the newer, square type):
2024-01-21 10:27:54.238 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for src to Tx)
A section from the packet log around the same time:
Hi David, on 0.31.3 the 01: device active_fault binary sensor seems to be permanently unavailable. I’ll leave it for 24 hours and see what happens. Does anyone else have this as well?
Just to confirm, after rolling back to 0.30.9 all my OTB sensors look good for the first hour of running.
I will do a longer run on 0.30.9 to confirm that. This is with use_native_ot: prefer. On close examination of the history I have noticed that binary_sensor.10_064873_fault_present and binary_sensor.10_064873_dhw_enabled are periodically going unavailable for exactly 1 minute but come back each time.
Apart from the OTB sensor issues and the log warnings relating to the DT4R thermostat, I didn’t see any other issues on 0.31.3.
This is not unexpected - it has the device type of a DTS92(A), and is treated as such by ramses_rf, but is clearly a DT4R!
This issue is caused by assumptions I didn’t even know I was making, some 3-4 years ago - I have a plan to fix this, but it will require a significant re-write…
Please provide the corresponding traceback from home-assistant.log… Something like:
RuntimeError: Faking is not enabled for 32:168090 (HUM)
2024-01-22 13:44:27.173 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection] [547771902272] Error handling message: Unknown error (unknown_error) David Bonnes from 172.27.0.246 (Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36)
Traceback (most recent call last):
...
File "/usr/src/homeassistant/homeassistant/core.py", line 691, in async_run_hass_job
hassjob.target(*args)
File "/config/custom_components/ramses_cc/sensor.py", line 244, in async_put_indoor_humidity
self._device.indoor_humidity = indoor_humidity / 100 # would accept None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ramses_rf/device/hvac.py", line 122, in indoor_humidity
raise RuntimeError(f"Faking is not enabled for {self}")
RuntimeError: Faking is not enabled for 32:168090 (HUM)
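In case it helps anyone hitting the same traceback: my understanding is that putting a value to that sensor only works when faking is enabled for the device in known_list. A sketch of what I believe that looks like (faked: true is my reading of the schema - verify against the docs):

```yaml
# sketch: enabling faking for the humidity sensor, so that writes
# to it are accepted - 'faked: true' is my assumption of the flag
ramses_cc:
  known_list:
    32:168090:
      class: HUM
      faked: true
```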
Experiencing some other weirdness with sending commands via an impersonating remote for the ventilation unit. At first I hadn’t noticed the service had changed to “ramses_cc.send_command”, so my automation for the fan stopped working. But even after changing it to “ramses_cc.send_command”, I’m seeing spotty responses from the ventilation unit: sometimes it will respond, sometimes it won’t.
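For reference, the updated action in my automation now looks roughly like this - the service name comes from the above, but the remote entity id is a placeholder for mine, and the extra parameters (num_repeats, delay_secs) are my best guess at the schema:

```yaml
# sketch: sending a previously learned command via the impersonating
# remote; parameter names other than the service itself are assumptions
service: ramses_cc.send_command
target:
  entity_id: remote.32_123456  # placeholder for my remote
data:
  command: low                 # a command learned earlier
  num_repeats: 3
  delay_secs: 0.05
```

I’ve been wondering whether bumping the number of repeats (if that’s indeed what that parameter does) would help with the spotty responses.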
I still get fairly regular occurrences of zone demand entities going unavailable (e.g. sensor.04_038777_heat_demand and sensor.01_215596_03_heat_demand) - always in pairs for specific zones, so I guess it’s probably because the controller hasn’t heard a demand packet from the TRV for a while, and has therefore not transmitted a demand packet for that zone. While I understand your reasoning behind not showing ‘stale’ information if a certain packet hasn’t been received for a while, it would be nice if it were possible to adjust the timeout, or even set it to an infinite value if required. I generally only use HA for monitoring my Evohome system (save for a couple of automations that adjust setpoints based on other triggers, e.g. turning down the heating on days when I’m working in the office), so I’d rather use something like the ‘last update time’ to flag potentially stale information in my dashboards (see the sketch below) than have the entity show as ‘unavailable’. This issue isn’t specific to any of the recent beta versions; it’s more of a general observation/opinion.
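The sketch I mean - a template sensor exposing the minutes since the demand sensor last changed, so a dashboard can flag staleness itself (the entity id is one of mine; the template is just standard HA):

```yaml
# shows how stale the zone demand reading is, in minutes,
# instead of relying on the entity becoming unavailable
template:
  - sensor:
      - name: "Zone 03 heat demand age"
        unit_of_measurement: "min"
        state: >
          {% set s = states.sensor.01_215596_03_heat_demand %}
          {{ ((now() - s.last_changed).total_seconds() / 60) | round(0)
             if s is not none else none }}
```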
However, recently (and I think probably since 0.31.x??) I’ve noticed that some of my climate entities have also been going into an ‘unknown’ state:
e.g. climate.ramses_cc_01_215596_03 currently shows null ‘modes’ in the attributes:
I’ve never noticed this happening before, so I don’t know if this is a new problem. When it happens, I’ve found I can ‘correct’ it just by clicking the auto mode button on the climate entity. It doesn’t seem to happen in conjunction with the issue I mentioned above, with the zone demand entities going unavailable.
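Until the root cause is found, a band-aid automation that applies the same “click auto” fix automatically could look like this - a sketch, using my entity id and an arbitrary 5-minute debounce; climate.set_hvac_mode is standard HA:

```yaml
# sketch: re-assert auto mode when the climate entity has been
# 'unknown' for a while
automation:
  - alias: "Reset zone 03 climate when it goes unknown"
    trigger:
      - platform: state
        entity_id: climate.ramses_cc_01_215596_03
        to: "unknown"
        for: "00:05:00"  # avoid firing on momentary blips
    action:
      - service: climate.set_hvac_mode
        target:
          entity_id: climate.ramses_cc_01_215596_03
        data:
          hvac_mode: auto
```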
Edit: sorry, I forgot to add that binary_sensor.01_215596_active_fault also goes unavailable several hours after a restart. The time it stays around seems to vary between versions: I had a period where it would appear for 20 minutes and then vanish for the next 5 hrs 40 mins (0.29.x was installed at that time, I think??); with 0.31.2 and 0.31.3 it has stayed around much longer. It’s hard to spot any pattern in the history because I’d done lots of restarts over the weekend, but the entity is still going unavailable.
Updated to 0.31.3 and looks good thank you.
Maybe this is just me, but the icons for my relays look swapped - I expected them to be closed when on and open when off.
I am still getting the warning that my config schema is not minimal despite you advising previously that it is minimal.
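For comparison, this is what I understand a truly minimal config to be - just the gateway’s serial port, with everything else discovered (a sketch; the port path is mine):

```yaml
# my understanding of a minimal ramses_cc configuration
ramses_cc:
  serial_port: /dev/ttyUSB0  # adjust to your gateway
```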