Thanks. It doesn’t work for me any more.
I think I’ve confused my controller.
To move forward with schedules, I’ll have to wait until I can reset it, or pull my other controller out of storage.
Interestingly, this is one of the things that finally made me find time to upgrade today from 0.22.1 to 0.31.19. I found that the service call to grab schedules had stopped working, but unfortunately I can’t tell you exactly when. It had been working for many months prior to that. I assumed that an update to HA had changed something low level.
I’m using ramses cc and I have a strange issue where I can receive information from the dongle (i.e. zone state, desired temperature, etc), but when I try to change anything from home assistant, there is no change in the system.
For example changing the desired temperature to 20C gives me the following entries in the log:
2024-04-29 15:00:43.478 INFO (MainThread) [ramses_rf.dispatcher] || 18:262143 | 01:085159 | W | setpoint | 00 || {'zone_idx': '00', 'setpoint': 20.0}
2024-04-29 15:00:55.887 INFO (MainThread) [ramses_tx.transport] Rx: b'054 I --- --:------ --:------ 10:062086 1FD4 003 0018D9\r\n'
2024-04-29 15:00:55.887 INFO (MainThread) [ramses_tx.protocol] Recv'd: 054 I --- --:------ --:------ 10:062086 1FD4 003 0018D9
but nothing changes on the thermostat and a short while later I see the original desired temperature being received in the logs again:
2024-04-29 15:01:12.009 INFO (MainThread) [ramses_tx.transport] Rx: b'071 I --- 01:085159 --:------ 01:085159 2309 003 00076C\r\n'
2024-04-29 15:01:12.009 INFO (MainThread) [ramses_tx.protocol] Recv'd: 071 I --- 01:085159 --:------ 01:085159 2309 003 00076C
2024-04-29 15:01:12.009 INFO (MainThread) [ramses_rf.dispatcher] || 01:085159 | | I | setpoint | 00 || {'zone_idx': '00', 'setpoint': 19.0}
Same goes for system and zone mode changes. I’ve checked the packet log and see plenty of ‘RQ’ lines, like
2024-04-29T15:46:51.500461 000 RQ --- 18:262143 10:062086 --:------ 3220 005 0000000000
but I don’t see any ‘RP’ lines, which would suggest to me that the dongle is not sending correctly.
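One quick way to check this is to tally the packet verbs in the log; a sketch, assuming the packet-log line format shown above (timestamp, RSSI, then the verb) and the default `packet.log` filename:

```python
from collections import Counter

def verb_counts(path: str = "packet.log") -> Counter:
    """Count RAMSES packet verbs (I/RQ/RP/W) in a ramses_cc packet log."""
    counts: Counter = Counter()
    with open(path) as f:
        for line in f:
            fields = line.split()
            # fields: [timestamp, rssi, verb, ...] per the log excerpts above
            if len(fields) > 2 and fields[2] in {"I", "RQ", "RP", "W"}:
                counts[fields[2]] += 1
    return counts
```

A healthy system shows an RP count broadly comparable to the RQ count; many RQs with zero RPs would support the theory that the dongle's transmissions are never answered (or never leave the dongle at all).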
My config is:
serial_port: /dev/ttyUSB0
restore_cache: true
packet_log:
  file_name: packet.log
  rotate_backups: 14
ramses_rf:
  enforce_known_list: false
  use_native_ot: prefer  # always, prefer (default), avoid, never
01:085159:  # Temperature control system (e.g. evohome)
  system:
    appliance_control: 10:062086
  zones:
    "00": {sensor: 01:085159}
known_list:
  01:085159:  # controller
  18:262143:  # gateway_interface
  10:062086:  # opentherm_bridge
Can anyone point me in the right direction on how I can analyze and fix this issue?
P.S. I’ve just upgraded to the beta version of ramses_rf, with no change in this problem unfortunately.
Is the service call working for you with v0.31.19?
Your post wasn’t clear to me, either way.
Sorry, that was very unclear. No, the service call is not working - I get timeout errors in the log from ramses_rf:
2024-05-01 17:46:20.169 ERROR (MainThread) [homeassistant.helpers.script.websocket_api_script] websocket_api script: Error executing script. Unexpected error for call_service at pos 1: Failed to obtain schedule within 15 secs
Traceback (most recent call last):
File "/usr/local/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
return await fut
^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/schedule.py", line 250, in _get_schedule
await self.tcs._obtain_lock(self.idx) # maybe raise TimeOutError
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/heat.py", line 622, in _obtain_lock
await asyncio.sleep(0.005) # gives the other zone enough time
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/tasks.py", line 665, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/schedule.py", line 219, in get_schedule
await asyncio.wait_for(
File "/usr/local/lib/python3.12/asyncio/tasks.py", line 519, in wait_for
async with timeouts.timeout(timeout):
File "/usr/local/lib/python3.12/asyncio/timeouts.py", line 115, in __aexit__
raise TimeoutError from exc_val
TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 507, in _async_step
await getattr(self, handler)()
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 742, in _async_call_service_step
response_data = await self._async_run_long_action(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 705, in _async_run_long_action
return await long_task
^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2543, in async_call
response_data = await coro
^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2580, in _execute_service
return await target(service_call)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 971, in entity_service_call
single_response = await _handle_entity_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 1043, in _handle_entity_call
result = await task
^^^^^^^^^^
File "/config/custom_components/ramses_cc/climate.py", line 482, in async_get_zone_schedule
await self._device.get_schedule()
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/zones.py", line 153, in get_schedule
await self._schedule.get_schedule(force_io=force_io)
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/schedule.py", line 223, in get_schedule
raise TimeoutError(
TimeoutError: Failed to obtain schedule within 15 secs
2024-05-01 17:46:20.185 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection] [281472773568448] Error handling message: Timeout (timeout) Lloyd from 192.168.98.11 (Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/124.0.0.0)
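The "within 15 secs" in that traceback is the classic `asyncio.wait_for` pattern: the outer timeout cancels the inner I/O task (hence the `CancelledError` that is the "direct cause"), and the cancellation is re-raised as a `TimeoutError`. A minimal sketch of the same shape, with the timing shortened and all names hypothetical rather than ramses_rf's actual code:

```python
import asyncio

async def get_schedule(timeout: float = 0.1) -> list:
    """Stand-in for a schedule fetch guarded by an overall timeout."""
    async def _fetch() -> list:
        await asyncio.sleep(1)  # simulates RF I/O that never completes
        return []

    try:
        # wait_for cancels _fetch() when the deadline passes, then raises
        # TimeoutError - exactly the chain visible in the traceback above.
        return await asyncio.wait_for(_fetch(), timeout=timeout)
    except asyncio.TimeoutError:
        raise TimeoutError(f"Failed to obtain schedule within {timeout} secs")
```

So the error means the RF round-trips (the 0404 fragment exchange) never finished inside the window, not that the service call itself is malformed.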
Who is using this integration with proxmox please?
Who has tried and failed to get it working?
Who has got it working OK? What settings are you using, what dongle?
On proxmox here, using HAOS qemu VM - no issues. Just basic USB passthrough, has worked fine with nanocul, SSM-D2 and busware. Can’t speak for HA core running venv or via docker in an LXC container - I can see that throwing up gremlins.
Was 0.31.19 supposed to address this as I’m still seeing the same errors?
Versions 0.x.20 have been released today.
Other than that, you’ll have to provide me with more context before I can answer your question.
Sorry - I replied to your earlier post (which was in response to mine), and that wasn’t clear.
When trying to write schedules, I’m still getting:
Logger: homeassistant.components.script.normal_heating_script
Source: helpers/script.py:501
integration: Scripts (documentation, issues)
First occurred: 20:44:57 (3 occurrences)
Last logged: 20:52:01
Normal heating schedule: Error executing script. Unexpected error for call_service at pos 3: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0303, tx_count=4/4>: Exceeded maximum retries
Normal heating schedule: Error executing script. Unexpected error for call_service at pos 3: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0302, tx_count=4/4>: Exceeded maximum retries
Normal heating schedule: Error executing script. Unexpected error for call_service at pos 2: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0702, tx_count=4/4>: Exceeded maximum retries
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 501, in _async_step
await getattr(self, handler)()
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 736, in _async_call_service_step
response_data = await self._async_run_long_action(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 699, in _async_run_long_action
return await long_task
^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2738, in async_call
response_data = await coro
^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2779, in _execute_service
return await target(service_call)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 975, in entity_service_call
single_response = await _handle_entity_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 1047, in _handle_entity_call
result = await task
^^^^^^^^^^
File "/config/custom_components/ramses_cc/climate.py", line 473, in async_set_zone_schedule
await self._device.set_schedule(json.loads(schedule))
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/zones.py", line 171, in set_schedule
await self._schedule.set_schedule(schedule) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/schedule.py", line 404, in set_schedule
await put_fragment(num, len(self._fragments), frag)
File "/usr/local/lib/python3.12/site-packages/ramses_rf/system/schedule.py", line 373, in put_fragment
await self._gwy.async_send_cmd(
File "/usr/local/lib/python3.12/site-packages/ramses_rf/gateway.py", line 619, in async_send_cmd
return await super().async_send_cmd(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/gateway.py", line 328, in async_send_cmd
return await self._protocol.send_cmd(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/protocol.py", line 707, in send_cmd
pkt = await super().send_cmd( # may: raise ProtocolError/ProtocolSendFailed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/protocol.py", line 481, in send_cmd
return await super().send_cmd(cmd, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/protocol.py", line 225, in send_cmd
return await self._send_cmd(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/protocol.py", line 656, in _send_cmd
return await self._context.send_cmd(send_cmd, cmd, priority, qos)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_tx/protocol_fsm.py", line 333, in send_cmd
await asyncio.wait_for(fut, timeout=timeout)
File "/usr/local/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
return await fut
^^^^^^^^^
ramses_tx.exceptions.ProtocolSendFailed: <ProtocolContext state=WantEcho cmd_=0404| W|01:065252|0303, tx_count=4/4>: Exceeded maximum retries
I’m now on 0.41.20
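The `state=WantEcho ... tx_count=4/4` in those errors indicates the protocol state machine re-transmitted the `W|0404` fragment up to its retry limit without ever hearing its own echo back from the RF bus. A sketch of that send-until-echo loop, with hypothetical names (this is not ramses_tx's actual API):

```python
import asyncio

MAX_RETRIES = 4  # mirrors the "tx_count=4/4" in the errors above

async def send_with_retries(send, wait_for_echo, timeout: float = 0.05):
    """Re-transmit a command until its echo is heard, up to MAX_RETRIES."""
    for attempt in range(1, MAX_RETRIES + 1):
        await send()
        try:
            return await asyncio.wait_for(wait_for_echo(), timeout=timeout)
        except asyncio.TimeoutError:
            continue  # no echo: the dongle (or the bus) dropped the frame
    raise RuntimeError(f"tx_count={MAX_RETRIES}/{MAX_RETRIES}: Exceeded maximum retries")
```

Seen this way, "Exceeded maximum retries" points at transmission (dongle, antenna, RF congestion) rather than at the schedule payload itself.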
Hi, I just updated to the latest version and all RAMSES devices show as unavailable. All previous updates went fine. I am on Proxmox; not sure if that’s the problem.
Get the following errors:
Logger: homeassistant.config_entries
Source: config_entries.py:575
First occurred: 12:20:47 (1 occurrences)
Last logged: 12:20:47
Error setting up entry RAMSES RF for ramses_cc
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config_entries.py", line 575, in async_setup
result = await component.async_setup_entry(hass, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/ramses_cc/__init__.py", line 89, in async_setup_entry
await broker.async_setup()
File "/config/custom_components/ramses_cc/broker.py", line 146, in async_setup
await self.client.start(cached_packets=cached_packets())
File "/usr/local/lib/python3.12/site-packages/ramses_rf/gateway.py", line 183, in start
load_schema(self, known_list=self._include, **self._schema) # create faked too
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/schemas.py", line 353, in load_schema
load_tcs(gwy, ctl_id, schema) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/schemas.py", line 394, in load_tcs
ctl = _get_device(gwy, ctl_id)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ramses_rf/schemas.py", line 336, in _get_device
check_filter_lists(dev_id)
File "/usr/local/lib/python3.12/site-packages/ramses_rf/schemas.py", line 332, in check_filter_lists
raise LookupError(
LookupError: Can't create 01:000730: it is in the schema, but not in the known_list (check the lists and the schema)
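That `LookupError` comes from a consistency check between the schema and the device filter lists: with `enforce_known_list` on, every schema device must also appear in the known_list. A sketch of that kind of check (hypothetical names, not ramses_rf's actual implementation):

```python
import re

# RAMSES device ids look like "01:085159": a 2-digit type, a colon, 6 digits
DEVICE_ID = re.compile(r"^\d{2}:\d{6}$")

def check_filter_lists(dev_id: str, schema_ids: set, known_list: set,
                       enforce: bool = True) -> None:
    """Reject ids that are malformed, or in the schema but not the known_list."""
    if not DEVICE_ID.match(dev_id):
        raise ValueError(f"{dev_id} is not a valid device id")
    if enforce and dev_id in schema_ids and dev_id not in known_list:
        raise LookupError(
            f"Can't create {dev_id}: it is in the schema, but not in the known_list"
        )
```

Note that `01:000730` passes the format check, so a corrupt packet that yields a plausible-looking but nonexistent id can end up cached in the schema and then trip this check on the next restart.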
Well, is 01:000730 in your known_list? If not, you could try adding it?
What is more likely is that this is a corrupt packet.
This suggests a hardware problem, which would explain all his issues.
Thanks for responses. The only change was an update to latest version, all was previously working so seems a slightly strange coincidence that hardware would fail unless the new version is a little fussier.
Indeed, 01:000730 did not appear in my known devices - I have now added it to my config, as per my config below; however, the log error is identical.
I do have a Honeywell opentherm connector on my boiler not located near the nanocul which never previously connected, could it be this got connected?
I am happy to change the nanocul (which is quite old) to latest whichever one you recommend.
Appreciate any help or advice, thanks.
ramses_cc:
  serial_port: /dev/serial/by-id/usb-SHK_NANO_CUL_868-if00-port0
  # baudrate: 115200
  packet_log:
    file_name: packet.log
    rotate_backups: 7
  ramses_rf:
    enforce_known_list: true
  known_list:
    18:196881:  # gateway_interface
    # 18:196881: {class: HGI}
    01:081083:  # main TCS
    01:000730:
    04:023774:  # Hall actuator
    04:023780:  # Studies sensor (class: radiator_valve)
    04:027356:  # Studies actuator (with 04:023780 above)
    04:027648:  # Lounge actuator (class: radiator_valve) and sensor
    04:027358:  # Lounge sensor
    04:027360:  # Niamh sensor & actuator (class: radiator_valve)
    04:027646:  # Ellie sensor & actuator (class: radiator_valve)
    04:027650:  # Hall actuator
    04:027656:  # TV Lounge sensor and actuator (class: radiator_valve)
    04:027714:  # Hall actuator
    04:027716:  # Master Bed sensor & actuator (class: radiator_valve)
    04:027718:  # Craig sensor & actuator (class: radiator_valve)
    04:027720:  # Hall actuator
    04:027722:  # Hall Landing sensor and actuator (class: radiator_valve)
    07:025191:  # Stored Hot Water sensor
    13:001489:  # Kitchen Diner actuator
    13:001522:  # Ensuite actuator
    13:055679:  # Grnd Bed actuator
    13:173107:  # Hot Water valve
    13:224329:  # Games Room actuator
    34:027677:  # Grnd Bed sensor (class: zone_valve)
    34:063533:  # Games Room sensor (class: zone_valve)
    34:150571:  # Ensuite sensor (class: zone_valve)
    34:150655:  # Kitchen Diner sensor (class: zone_valve)

# separate section:
# logger:
#   logs:
#     custom_components.ramses_cc: info  # show ramses_cc/ramses_rf state
#     ramses_rf.dispatcher: info         # show packet payloads
Why have you added this to your known_list? Can you point to the corresponding physical device in your house?
I wasn’t clear: my suggestion was that it was a device id from a corrupted packet. The device doesn’t exist.
The ‘Opentherm connector’ has a device_id starting with 10:. If you look in your packet log, you should see lots of packets to/from this device.
I am afraid that all evidence I have is that you have a (virtual?) hardware problem.
Have you tried tuning it? What are your RSSI values?
Is there a method available now, or being worked on, that doesn’t require the stick to be plugged in to the actual HA machine? I don’t mean ser2net.
My physical box is about to die and I want to move HA back to my VM platform.
I am sure I saw mention of something either available or being worked on, but at over 4300 posts in this thread, finding it has been impossible!
MQTT is coming soon, see here.
I have one, and it works well. You can put them anywhere with USB power and Wi-Fi coverage.
When he’s ready, he’ll sell them here.
RuntimeError: no running event loop
means the task is being created in the wrong thread. Please make sure any calls to self._loop.create_task(coro) are running in the event loop thread, as creating a task from the wrong thread is not thread-safe and may crash Home Assistant.
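From another thread, the standard pattern is `asyncio.run_coroutine_threadsafe` (or `loop.call_soon_threadsafe`) rather than calling `loop.create_task` directly; a minimal self-contained sketch (names are illustrative, not ramses_cc's actual code):

```python
import asyncio
import threading

async def update_state() -> str:
    """Placeholder coroutine standing in for integration work on the loop."""
    return "ok"

def worker(loop: asyncio.AbstractEventLoop, results: list) -> None:
    # Safe: hands the coroutine to the loop's own thread for scheduling.
    # Calling loop.create_task(update_state()) here instead would raise
    # "RuntimeError: no running event loop" (or worse, corrupt loop state).
    fut = asyncio.run_coroutine_threadsafe(update_state(), loop)
    results.append(fut.result(timeout=5))  # blocks this thread, not the loop

async def main() -> list:
    loop = asyncio.get_running_loop()
    results: list = []
    t = threading.Thread(target=worker, args=(loop, results))
    t.start()
    # Keep the loop running (don't block it with t.join()) until done.
    while t.is_alive():
        await asyncio.sleep(0.01)
    return results
```

The key point is that the event loop's internals are not thread-safe; only the two `*_threadsafe` entry points may be called from outside the loop's thread.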