@peternash I’d be interested if you could try ramses_rf 0.31.18 (on PyPI) with the latest ramses_cc - you should be able to do this simply by editing the manifest.json file.
You can try with/without disable_qos: false (i.e. the alternative is not to have disable_qos at all).
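The manifest.json change is just pointing the requirements entry at ramses_rf==0.31.18. For the disable_qos comparison, a minimal sketch, assuming the option sits under the ramses_rf: block of the YAML config (adjust to wherever it lives in your own setup):

ramses_cc:
  serial_port: /dev/ttyACM0    # placeholder - use your own port
  ramses_rf:
    # disable_qos: false       # uncomment this line for the "with" test;
                               # omit the key entirely for the "without" test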
@zxdavb I’m now running ramses_cc 0.41.16 with ramses_rf 0.31.18 (confirmed by inspecting version.py). I assume that’s what you meant, as I can’t access ramses_cc 0.41.17 now.
I’ve not seen any problems so far, and no apparent difference with/without disable_qos: false.
I’ve tried temperature overrides and heating mode changes using the Thermostat card and the climate services, which were failing in ramses_cc 0.41.17, and they’re working OK.
Nothing unusual in the log either.
I’ll leave it running for a while and see how it behaves.
I have the SSM-D2 but I don’t seem to be able to get Ramses RF to recognise it. (I’m using HassOS on a Raspberry Pi 4B 8GB and want to connect Honeywell and Nuaire devices.)
When I look at the list of hardware in HA, I don’t see a tty01 or similar.
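If it helps, a USB dongle normally shows up under /dev/serial/by-id/ rather than as tty01 - this is the sort of thing I would point the config at (the by-id path below is hypothetical; run ls /dev/serial/by-id/ on the host, e.g. from the SSH add-on, and copy the real entry):

ramses_cc:
  # hypothetical path - substitute the actual /dev/serial/by-id/ entry for the SSM-D2
  serial_port: /dev/serial/by-id/usb-XXXX_SSM-D2-XXXX-if00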
@zxdavb I’ve not seen any significant issues in about 12 hours. I’ve seen two Evohome communication dropouts, which is unusual but may be coincidence. There are more warnings in the logs than usual - I’ve attached an hour’s worth below. The ones from ramses_rf.dispatcher concerning 22:012299 have been occurring for a while, but the PacketInvalid messages from ramses_tx.transport are new with this configuration.
This was without disable_qos: false. I’ll try setting that and see if there’s any change in the logs.
@zxdavb Update: After rebooting HA for other reasons I’m no longer seeing as many warnings in the logs. Previously I’d only done a restart after making changes. I’m still running ramses_cc 0.41.16 with ramses_rf 0.31.18 without disable_qos: false on Home Assistant Blue with an SSM-D2. I believe a reboot cycles the power on the SSM-D2 whereas a restart does not so maybe that was significant.
The only warnings I’m seeing at the moment are the same as on recent ramses_cc versions which I think relate to my DT4R thermostat.
2024-04-19 03:16:21.228 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for dst to Rx)
2024-04-19 03:35:06.225 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for dst to Rx)
2024-04-19 03:53:51.220 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for dst to Rx)
2024-04-19 04:12:36.212 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for dst to Rx)
2024-04-19 04:31:21.209 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for dst to Rx)
2024-04-19 04:50:06.206 WARNING (MainThread) [ramses_rf.dispatcher] W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < PacketInvalid( W --- 22:012299 01:216136 --:------ 22C9 006 01076C09F601 < Unexpected code for dst to Rx)
the slug is being detected correctly (i.e. the CTL)
ramses_rf is not able to stop itself from throwing an exception for this message, even though it should not be throwing one
Fixing this issue would require re-writing a portion of the code that is deep within its core logic - a risky business that I won’t take on at the moment.
I’ve been playing with writing schedules and it was working. I’ve written a script that I intend to use to restore the default schedule, but it now seems to fail when it makes service calls for 4 zones one after the other (a trimmed sketch of the script is below, after the log excerpts). This caused a complete loss of Ramses RF last night, and when I tried it again this morning the following errors appeared in my logs:
Logger: homeassistant.components.script.normal_heating_script
Source: helpers/script.py:507
integration: Scripts (documentation, issues)
First occurred: 10:04:44 (2 occurrences)
Last logged: 10:05:41
Normal heating schedule: Error executing script. Unexpected error for call_service at pos 4: 'NoneType' object has no attribute 'src'
Normal heating schedule: Error executing script. Unexpected error for call_service at pos 1: 'NoneType' object has no attribute 'src'
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 507, in _async_step
await getattr(self, handler)()
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 736, in _async_call_service_step
response_data = await self._async_run_long_action(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 699, in _async_run_long_action
return await long_task
^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2543, in async_call
response_data = await coro
^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2580, in _execute_service
return await target(service_call)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 971, in entity_service_call
single_response = await _handle_entity_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 1043, in _handle_entity_call
result = await task
^^^^^^^^^^
File "/config/custom_components/ramses_cc/climate.py", line 487, in async_set_zone_schedule
await self._device.set_schedule(json.loads(schedule))
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/zones.py", line 156, in set_schedule
await self._schedule.set_schedule(schedule)
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/schedule.py", line 373, in set_schedule
self._global_ver, _ = await self.tcs._schedule_version(force_io=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/heat.py", line 630, in _schedule_version
self._msg_0006 = Message(pkt)
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/message.py", line 64, in __init__
self.src: Address = pkt.src
^^^^^^^
AttributeError: 'NoneType' object has no attribute 'src'
This error originated from a custom integration.
Logger: ramses_rf.gateway
Source: custom_components/ramses_cc/climate.py:487
integration: RAMSES RF (documentation, issues)
First occurred: 10:04:33 (8 occurrences)
Last logged: 10:05:41
Failed to send 0404| W|01:065252|0202: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0202, tx_count=4/4>: Exceeded maximum retries
Failed to send 0404| W|01:065252|0203: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0203, tx_count=4/4>: Exceeded maximum retries
Failed to send 0006|RQ|01:065252: <ProtocolContext state=WantEcho cmd_=0006|RQ|01:065252, tx_count=4/4>: Exceeded maximum retries
Failed to send 0404| W|01:065252|0601: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0601, tx_count=4/4>: Exceeded maximum retries
Failed to send 0404| W|01:065252|0602: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0602, tx_count=4/4>: Exceeded maximum retries
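For reference, the script is essentially four set_zone_schedule calls back to back - a trimmed sketch, with placeholder entity IDs and the schedule JSON abbreviated:

normal_heating_script:
  alias: Normal heating schedule
  sequence:
    - service: ramses_cc.set_zone_schedule
      target:
        entity_id: climate.zone_1
      data:
        schedule: '...'    # full schedule JSON string
    - service: ramses_cc.set_zone_schedule
      target:
        entity_id: climate.zone_2
      data:
        schedule: '...'
    # ...and the same again for zones 3 and 4

I wonder whether spacing the calls out with a delay: between them would help, since each schedule write is a long multi-packet exchange.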
Just a heads up that the release notes on GH for 0.41.19 state “does not utilize config flow” a couple of times, which I know is just a copy-paste error but could lead to confusion!
I’m trying to get my HGI80 to work over a ser2net connection. I’ve set up ser2net.yaml as described in the wiki, but I don’t see any information from my Evohome system.
In the HA log I see this message:
“Detected blocking call to sleep inside the event loop by custom integration ‘ramses_cc’ at custom_components/ramses_cc/broker.py, line 127: await self.client.start(cached_packets=cached_packets()), please create a bug report at Issues · zxdavb/ramses_cc · GitHub”
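For reference, a ser2net.yaml entry along these lines (ser2net 4.x format; the TCP port and device path are placeholders, and the wiki remains the authority on the exact settings):

connection: &hgi80
  accepter: tcp,5001
  enable: on
  options:
    kickolduser: true
  connector: serialdev,/dev/ttyUSB0,115200n81,local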
Upgraded to ramses_rf 0.31.19 and ramses_cc 0.41.19 on a separate system. Both were running fine for about 8 hours until I got a fatal crash. I am wondering whether the broken schedule service (which worked previously) is causing it, as I have an automation that runs on the hour to retrieve schedules (a rough sketch is below, after the traceback). I’ll disable it for now to see whether it is the cause.
Error doing job: Task exception was never retrieved
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/ramses_rf/gateway.py", line 616, in async_send_cmd
assert pkt # mypy
^^^^^^^^^^
AssertionError
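For context, the hourly automation is just a time_pattern trigger calling the schedule-retrieval service for each zone - roughly like this (I believe the service is ramses_cc.get_zone_schedule; the entity IDs are placeholders):

- alias: Refresh zone schedules hourly
  trigger:
    - platform: time_pattern
      minutes: "0"
  action:
    - service: ramses_cc.get_zone_schedule
      target:
        entity_id:
          - climate.zone_1
          - climate.zone_2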