Hi folks,
For the past six months or so, I’ve been using the REST command below to reset some Aqara sensors in several automations. I recently noticed that they weren’t resetting anymore. At first I thought it was just one of the sensors, but then I realized it was all of them.
rest_command:
  reset_motion_sensor:
    url: https://myhassdns/api/states/{{sensor}}
    method: 'POST'
    headers:
      authorization: Bearer (bearer removed)
      content_type: "application/json"
    payload: '{"state": "off"}'
    verify_ssl: true
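(For what it’s worth, I understand `rest_command` also takes a `timeout:` option that defaults to 10 seconds, so one diagnostic I may try is raising it to see whether the call ever completes. A sketch, assuming I have the option name right:)

```yaml
rest_command:
  reset_motion_sensor:
    url: https://myhassdns/api/states/{{sensor}}
    method: 'POST'
    headers:
      authorization: Bearer (bearer removed)
      content_type: "application/json"
    payload: '{"state": "off"}'
    verify_ssl: true
    # rest_command gives up after 10 seconds by default; raise it as a test
    timeout: 30
```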
- id: '1582110183955'
  alias: Downstairs Bathroom - Turn on light and reset sensor
  trigger:
  - entity_id: binary_sensor.downstairs_bathroom_motion_sensor
    platform: state
    to: 'on'
  condition:
  - after: '08:00:00'
    before: '20:30:00'
    condition: time
  action:
  - data:
      brightness: 255
      entity_id: light.downstairs_bathroom_light
    service: light.turn_on
  - delay: 00:00:06
  - data:
      sensor: binary_sensor.downstairs_bathroom_motion_sensor
    service: rest_command.reset_motion_sensor
I can post this command via Postman and it works instantly (and the change reflects in Lovelace), but when it’s called from the automation (or as a service from Developer Tools) the call times out. The automation does perform its other actions; it just times out on the reset, so the lights stay on longer than I’d like.
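For reference, this is roughly the service call I’m using when testing from Developer Tools (the same data the automation passes):

```yaml
service: rest_command.reset_motion_sensor
data:
  sensor: binary_sensor.downstairs_bathroom_motion_sensor
```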
In the logs, it just gives this warning:
2020-12-04 09:00:02 WARNING (MainThread) [homeassistant.components.rest_command] Timeout call https://myhassdns/api/states/binary_sensor.hall_motion_sensor
2020-12-04 09:00:04 WARNING (MainThread) [homeassistant.components.rest_command] Timeout call https://myhassdns/api/states/binary_sensor.downstairs_bathroom_motion_sensor
2020-12-04 09:04:49 WARNING (MainThread) [homeassistant.components.rest_command] Timeout call https://myhassdns/api/states/binary_sensor.downstairs_bathroom_motion_sensor
I don’t see any other errors or warnings; just these repeating.
This started after the 118 to 118.4 upgrade (I did not do any interim upgrades). Did I miss something in the release notes that may have changed this call?
A few things about my install:
- I use the HassOS VM install.
- The HassOS install resolves DNS to my proxy VM (Caddy) and other sites fine (see cisco.com below).
- But if I check the DNS logs, I do see the errors below. I’m not sure whether they’re relevant, as I don’t know the Docker internal network layout:
[INFO] 172.30.32.1:36209 - 8276 "AAAA IN cisco.com. udp 27 false 512" NOERROR qr,rd,ra 64 0.013542823s
[INFO] 172.30.32.1:36209 - 7953 "A IN cisco.com. udp 27 false 512" NOERROR qr,rd,ra 52 0.01363198s
[INFO] 127.0.0.1:54633 - 56340 "NS IN . udp 17 false 512" NOERROR - 0 30.000282224s
[ERROR] plugin/errors: 2 . NS: dial tcp 1.0.0.1:853: i/o timeout
[INFO] 127.0.0.1:49613 - 34163 "NS IN . udp 17 false 512" NOERROR - 0 30.000238157s
[ERROR] plugin/errors: 2 . NS: dial tcp 1.1.1.1:853: i/o timeout
- The proxy handles it properly.
- The proxy works from internal and external networks.
- Postman works from my desktop.
- I have other HASS integrations, like TasmoAdmin, that use the proxy.
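One thing I still plan to try, to take the proxy and DNS path out of the picture entirely, is a variant of the rest_command that targets Home Assistant directly on localhost (hypothetical; this assumes the default port 8123 and no SSL on the internal listener):

```yaml
rest_command:
  reset_motion_sensor_local:
    # bypasses the Caddy proxy and external DNS by talking to HA on localhost
    url: http://127.0.0.1:8123/api/states/{{sensor}}
    method: 'POST'
    headers:
      authorization: Bearer (bearer removed)
      content_type: "application/json"
    payload: '{"state": "off"}'
```

If that works instantly while the proxied URL times out, that would point at the DNS/proxy path rather than the rest_command integration itself.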
Any help (or other things to look at) you can provide would be awesome.
Thanks!!