HA stops working every Monday

Possibly, because when I log in I immediately get the ha> prompt instead of the $ prompt.

[Screenshot from 2021-01-13 11-21-49]

If I then enter the login command, I immediately get the # prompt.

I don’t know why your login session is different.

Good Morning,
Unfortunately, my plan to switch the log level to debug did not work out.
HA usually stops at around 8:30 a.m., so I tried to change the log level to debug at 7:00 a.m.
Unfortunately, I couldn’t edit the YAML files at that point.
A restart of HA via the GUI was no longer possible either; the automations were still running, however.
The system stopped again at 8:30 a.m. :pensive:
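
In case it helps anyone else: a minimal sketch of how debug logging can be preset in configuration.yaml, so it is already active before the system becomes unresponsive. The logger integration itself is standard; which sub-loggers to raise is just my guess:

# Sketch: preset debug logging so it survives the point where
# YAML editing and GUI restarts stop working.
logger:
  default: warning
  logs:
    homeassistant.core: debug
    homeassistant.components.recorder: debug

The level can also be raised at runtime with the logger.set_level service, but that of course only works while HA still responds.
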
Then, as suggested, I switched off the Raspberry Pi and inserted the SD card into my computer to read the log. Unfortunately, accessing the HassOS file system from Windows is not that easy for me.
To be able to examine it in peace, I made an image of the SD card.
I can now at least access my last HA log, but it only contains the normal log level.

Unfortunately, the IP of one of my integrated devices changed over the weekend, so I am flooded with “unavailable” messages in the log. I don’t think that was the cause, though.
I have now deleted a few unused HACS integrations.
What I also notice is that I have quite a few orphaned entities, which I can’t delete either.

Does anyone else notice anything unusual?

2021-01-16 07:45:13 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:13 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for fontawesome which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:13 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for anniversaries which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:13 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for alexa_media which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:13 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for cololight which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:13 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for browser_mod which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:14 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for waste_collection_schedule which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2021-01-16 07:45:15 WARNING (MainThread) [homeassistant.components.lovelace] Lovelace is running in storage mode. Define resources via user interface
2021-01-16 07:45:27 WARNING (MainThread) [homeassistant.setup] Setup of browser_mod is taking over 10 seconds.
2021-01-16 07:45:36 ERROR (MainThread) [homeassistant.components.websocket_api.http.connection] [2946499680] Error handling message: Unknown error
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/connection.py", line 95, in async_handle
    handler(self.hass, self, schema(msg))
  File "/config/custom_components/browser_mod/connection.py", line 42, in handle_update
    devices[deviceID].update(msg.get("data", None))
  File "/config/custom_components/browser_mod/connection.py", line 83, in update
    self.sensor = self.sensor or create_entity(
  File "/config/custom_components/browser_mod/helpers.py", line 47, in create_entity
    adder = hass.data[DOMAIN][DATA_ADDERS][platform]
KeyError: 'sensor'
2021-01-16 07:46:17 WARNING (MainThread) [homeassistant.bootstrap] Waiting on integrations to complete setup: hacs, mqtt, browser_mod
2021-01-16 07:47:04 WARNING (SyncWorker_0) [pyhomematic._hm] Failed to initialize proxy for homeassistant-Wired
2021-01-16 07:47:12 WARNING (MainThread) [homeassistant.setup] Setup of homematic is taking over 10 seconds.
2021-01-16 07:47:15 WARNING (SyncWorker_0) [pyhomematic._hm] Skipping init for homeassistant-ccu3
2021-01-16 07:47:17 WARNING (MainThread) [homeassistant.bootstrap] Waiting on integrations to complete setup: browser_mod
2021-01-16 07:47:51 WARNING (MainThread) [homeassistant.components.template.sensor] The 'entity_id' option is deprecated, please remove it from your configuration
[… the previous line is repeated 24 more times at 07:47:52 …]
2021-01-16 07:47:52 WARNING (MainThread) [homeassistant.components.songpal.media_player] [Soundbar(http://192.168.0.82:10000/sony)] Unable to connect
2021-01-16 07:47:52 WARNING (MainThread) [homeassistant.components.media_player] Platform songpal not ready yet. Retrying in 30 seconds.
2021-01-16 07:47:53 ERROR (MainThread) [homeassistant.util.logging] Exception in async_discover_sensor when dispatching 'mqtt_discovery_new_sensor_mqtt': ({'name': 'Bd_Fenster STATE Battery', 'state_topic': 'homeassistant/sensor/bd_fenster_state_battery/state', 'value_template': '{%- if value_json.value = High -%} 100 {%- else -%} 30 {%- endif -%}', 'icon': 'mdi:battery', 'unique_id': 'bd_fenster_state_battery', 'json_attributes_topic': 'homeassistant/sensor/bd_fenster_state_battery/attributes', 'platform': 'mqtt'},)
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/mqtt/sensor.py", line 84, in async_discover_sensor
    config = PLATFORM_SCHEMA(discovery_payload)
  File "/usr/local/lib/python3.8/site-packages/voluptuous/schema_builder.py", line 272, in __call__
    return self._compiled([], data)
  File "/usr/local/lib/python3.8/site-packages/voluptuous/schema_builder.py", line 594, in validate_dict
    return base_validate(path, iteritems(data), out)
  File "/usr/local/lib/python3.8/site-packages/voluptuous/schema_builder.py", line 432, in validate_mapping
    raise er.MultipleInvalid(errors)
voluptuous.error.MultipleInvalid: invalid template (TemplateSyntaxError: expected token 'end of statement block', got '=') for dictionary value @ data['value_template']

2021-01-16 07:47:55 WARNING (MainThread) [homeassistant.components.songpal.media_player] [Soundbar(http://192.168.0.81:10000/sony)] Unable to connect
2021-01-16 07:47:55 WARNING (MainThread) [homeassistant.components.media_player] Platform songpal not ready yet. Retrying in 30 seconds.
2021-01-16 07:48:23 WARNING (MainThread) [homeassistant.components.songpal.media_player] [Soundbar(http://192.168.0.82:10000/sony)] Unable to connect
.........
2021-01-18 02:24:09 WARNING (MainThread) [homeassistant.components.songpal.media_player] [Soundbar(http://192.168.0.81:10000/sony)] Unable to connect
2021-01-18 02:24:09 WARNING (MainThread) [homeassistant.components.media_player] Platform songpal not ready yet. Retrying in 180 seconds.
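
Side note: two items in that log are fixable regardless of the crash. The ‘entity_id’ warnings mean exactly what they say; HA now extracts the tracked entities from the template itself, so the fix is simply to delete that option from each template sensor (names here are placeholders):

sensor:
  - platform: template
    sensors:
      example_sensor:                 # placeholder name
        # entity_id: sensor.source    # deprecated, remove this line
        value_template: "{{ states('sensor.source') }}"

The MQTT discovery error is a template bug: Jinja comparisons need ‘==’ rather than ‘=’, and ‘High’ has to be quoted since it arrives as a string. That is precisely what the TemplateSyntaxError in the traceback complains about. Corrected:

value_template: >-
  {%- if value_json.value == 'High' -%} 100 {%- else -%} 30 {%- endif -%}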

So I have exactly the same issue, and have had for a few weeks now. I wake up on a Monday morning and I can’t connect to HA; when I try to open the app on my phone I get a red bar at the top saying “unable to connect to server”, or something like that. The only way out so far has been to pull the power to my Raspberry Pi (which is running the latest HASS OS version). Because of that I’ve got no log data from before then; all I have is sensor data, and it seems to stop around midnight on Sunday night/Monday morning, at 00:20 or thereabouts. It’s taken a couple of weeks to realise this is something that happens every week, at around the same time. I have no automations that run specifically on a weekly basis at that time. I do have a couple of custom integrations, but none that you have @DomJo (I do have a garbage collection one). Will try to do a bit more analysis when I get time.

I now have a debug log.
There you can see that logging stops at around 1 a.m. on Sunday, mid-line, in the middle of a word.
So far, I have not been able to determine why that is.
But I also use garbage collection. Maybe it has a problem reading the ICS file?!

I’m still having this issue. I may disable the garbage collection custom component next Sunday evening and see if the issue still occurs. That component uses week numbers, so it could easily be the culprit.
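
To illustrate where week-number logic can bite: ISO week numbers roll over at midnight into Monday, right in the crash window, and the week 53 to week 1 transition at the turn of the year is a classic edge case. A hypothetical minimal week-number template sensor (the name is made up):

binary_sensor:
  - platform: template
    sensors:
      even_week:  # hypothetical name
        # isocalendar()[1] is the ISO week number; it changes at
        # midnight Sunday-to-Monday.
        value_template: "{{ (now().isocalendar()[1] % 2) == 0 }}"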

I’m having similar issues. See this thread.

For the last few weeks my Home Assistant crashes every Monday morning around 1:30 a.m. Automations seem to stop working, and Samba and SSH access are down. The front end is still available most of the time (not always), but the history no longer works. The Supervisor tab also shows a blank screen. Trying to shut down or reboot via the server configuration doesn’t do anything. The only way to get things up and running again is by power-cycling the RPi; sometimes more than one power cycle is needed.

After a successful reboot I see that the last history recordings happened around 1:30 a.m. After the reboot I can no longer check the logs for what caused the crash. Before the reboot I cannot see the supervisor or host logs, since they are on the blank Supervisor page. Before the reboot I can see the core log, but so far I do not see the cause there. All I see is recorder errors, because the database is down.

What would be the best way to find the root cause? I tried disabling certain automations, configurations and integrations on Sunday evening to see if it crashes overnight. No luck so far, and I feel that this trial-and-error route might take ages with only one trial a week. Is there a way to read back the logs before they are erased (power off the RPi after the crash, take out the SD card, and read the logs on my Windows PC before restarting the RPi)? I read that a lot of issues can occur due to corrupted SD cards. How can I check whether the SD card is OK or not? Any other ideas on how to find the root cause of these crashes?
Running Home Assistant OS on an RPi 3B+; regular updates of core and OS are done, always running the latest or previous version. None of the updates so far has solved the issue.

I’m not running a garbage collector custom integration. I am using a template sensor (boolean) based on the week number, and a workday sensor (NL) which is used as a condition in some automations. Last week I disabled those automations and the template sensor on Sunday, but overnight HA still crashed. I did not disable the workday sensor; I’ll try that next time and increase the logging to debug, as Tom_I suggested.
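
For reference, the workday sensor in question is the built-in platform, configured along these lines (country NL as per the post, everything else left at defaults):

binary_sensor:
  - platform: workday
    country: NL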

I also use the workday sensor…

I don’t use the garbage collector or the workday sensor, and every Monday I need to reboot Home Assistant. Last Monday I had access to HA, but the Supervisor was not accessible and I needed to reboot.

PS: I don’t save logs… but I remember seeing errors from HACS in the log. Next Monday I’ll try to post the log.

My first suspect is the UPC Connect Box integration, which is generating a lot of errors in the log anyway. Are you using that too?

No, in HACS I only use Alexa Media Player, Alarmo, SamsungTV Smart and Sonoff LAN.

Hi,

Completely fed up with this error, I have now switched to openHAB 3.

Still, I’m interested in what triggers the error in the end.

I also had problems with openHAB reading my ICS file for garbage collection.

There, the system stopped after a very short time.

In the end it came down to the interval at which the ICS file was read. Perhaps that is the problem with HA as well.
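
If I remember the waste_collection_schedule README correctly, it re-fetches its sources at a configurable fetch_time that defaults to roughly 01:00 at night, uncomfortably close to the crash window reported here. Treat the option name as an assumption and check the component’s documentation, but moving the fetch into the daytime would look roughly like this:

# Assumption: fetch_time as described in the waste_collection_schedule
# README; verify against the current documentation before relying on it.
waste_collection_schedule:
  fetch_time: "12:00"   # fetch at noon instead of during the night
  sources:
    - name: ics
      args:
        url: !secret waste_ics_url   # placeholder secret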

Regards

Domjo

I don’t use this; my custom integrations are:
garbage_collection
alexa_media_player
hive_custom_component

Another Monday, and another dead HA. I managed to get the log from the SD card before restarting, but there’s nothing useful. The last entry was at 01:25:54, and searching through the prior 30 minutes of entries, it’s just messages about “not receiving data from Garmin Connect” (which I get all the time; it’s a known issue, supposed to be fixed in the next release) and a few messages about the “IPP integration not ready” because the printer was off; again, a normal message I get whenever the printer’s off.

Yesterday evening I disabled some things in my HA and switched to debug logging (things I switched off: AArlo, alarm panel, binary template sensor using the workday sensor, workday sensor, HACS, air purifier, UPC device tracker). This morning it turned out that HA had crashed again overnight. The log file was huge (69 MB and 820,181 lines). It really stops halfway through a very standard (DSMR smart metering) log line. I cannot find any obvious reason for the crash in the log file. These are the last 90 lines:

2021-03-01 01:46:55 DEBUG (MainThread) [dsmr_parser.clients.protocol] got telegram: /Ene5\XS210 ESMR 5.0

1-3:0.2.8(50)
0-0:1.0.0(210301014750W)
0-0:96.1.1(4530303437303030303637333337363139)
1-0:1.8.1(002413.569*kWh)
1-0:1.8.2(001787.332*kWh)
1-0:2.8.1(000000.010*kWh)
1-0:2.8.2(000000.000*kWh)
0-0:96.14.0(0001)
1-0:1.7.0(00.105*kW)
1-0:2.7.0(00.000*kW)
0-0:96.7.21(00003)
0-0:96.7.9(00001)
1-0:99.97.0(0)(0-0:96.7.19)
1-0:32.32.0(00001)
1-0:32.36.0(00000)
0-0:96.13.0()
1-0:32.7.0(236.0*V)
1-0:31.7.0(001*A)
1-0:21.7.0(00.105*kW)
1-0:22.7.0(00.000*kW)
0-1:24.1.0(003)
0-1:96.1.0(4730303732303033393333393331343139)
0-1:24.2.1(210301014500W)(01012.491*m3)
!B994

2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '172.30.32.3':5353 (socket 11): <DNSIncoming:{id=38172, flags=0, n_q=1, n_ans=0, n_auth=0, n_add=0, questions=[question[ptr,in,_services._dns-sd._udp.local.]], answers=[]}> (46 bytes) as [b'\x95\x1c\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] offsets = 0, 0, 0
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] lengths = 1, 0, 0
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] now offsets = 1, 0, 0
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Sending (75 bytes #1) <DNSOutgoing:{multicast=True, flags=33792, questions=[], answers=[(record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_home-assistant._tcp.local., 0)], authorities=[], additionals=[]}> as b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#'...
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (46 bytes) as [b'\x95\x1c\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '192.168.178.11':5353 (socket 11): <DNSIncoming:{id=0, flags=33792, n_q=0, n_ans=2, n_auth=0, n_add=0, questions=[], answers=[record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_hue._tcp.local., record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_hap._tcp.local.]}> (83 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x02\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x0c\x04_hue\x04_tcp\xc0#\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x07\x04_hap\xc09']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (83 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x02\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x0c\x04_hue\x04_tcp\xc0#\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x07\x04_hap\xc09']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '192.168.178.10':5353 (socket 11): <DNSIncoming:{id=0, flags=33792, n_q=0, n_ans=1, n_auth=0, n_add=0, questions=[], answers=[record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_home-assistant._tcp.local.]}> (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '172.30.32.3':5353 (socket 11): <DNSIncoming:{id=59282, flags=0, n_q=1, n_ans=0, n_auth=0, n_add=0, questions=[question[ptr,in,_services._dns-sd._udp.local.]], answers=[]}> (46 bytes) as [b'\xe7\x92\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] offsets = 0, 0, 0
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] lengths = 1, 0, 0
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] now offsets = 1, 0, 0
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Sending (75 bytes #1) <DNSOutgoing:{multicast=True, flags=33792, questions=[], answers=[(record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_home-assistant._tcp.local., 0)], authorities=[], additionals=[]}> as b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#'...
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (46 bytes) as [b'\xe7\x92\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '172.30.32.1':5353 (socket 11): <DNSIncoming:{id=0, flags=33792, n_q=0, n_ans=1, n_auth=0, n_add=0, questions=[], answers=[record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_home-assistant._tcp.local.]}> (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '127.0.0.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.17.0.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '127.0.0.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.17.0.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '192.168.178.10':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (75 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_home-assistant\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '192.168.178.16':5353 (socket 11): <DNSIncoming:{id=0, flags=33792, n_q=0, n_ans=5, n_auth=0, n_add=0, questions=[], answers=[record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_pdl-datastream._tcp.local., record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_printer._tcp.local., record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_ipp._tcp.local., record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_http._tcp.local., record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_privet._tcp.local.]}> (159 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x05\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_pdl-datastream\x04_tcp\xc0#\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x0b\x08_printer\xc0D\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x07\x04_ipp\xc0D\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x08\x05_http\xc0D\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\n\x07_privet\xc0D']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (159 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x05\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x17\x0f_pdl-datastream\x04_tcp\xc0#\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x0b\x08_printer\xc0D\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x07\x04_ipp\xc0D\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x08\x05_http\xc0D\xc0\x0c\x00\x0c\x00\x01\x00\x00\x11\x94\x00\n\x07_privet\xc0D']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Received from '192.168.178.12':5353 (socket 11): <DNSIncoming:{id=0, flags=33792, n_q=0, n_ans=1, n_auth=0, n_add=0, questions=[], answers=[record[ptr,in,_services._dns-sd._udp.local.]=4500/4499,_arlo-video._tcp.local.]}> (71 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x13\x0b_arlo-video\x04_tcp\xc0#']
2021-03-01 01:46:55 DEBUG (zeroconf-Engine-240) [zeroconf] Ignoring duplicate message received from '172.30.32.1':5353 (socket 11) (71 bytes) as [b'\x00\x00\x84\x00\x00\x00\x00\x01\x00\x00\x00\x00\t_services\x07_dns-sd\x04_udp\x05local\x00\x00\x0c\x00\x01\x00\x00\x11\x94\x00\x13\x0b_arlo-video\x04_tcp\xc0#']
2021-03-01 01:46:56 DEBUG (MainThread) [dsmr_parser.clients.protocol] received data: /Ene5\XS210 ESMR 5.0

1-3:0.2.8(50)
0-0:1.0.0(210301014751W)
0-0:96.1.1(4530303437303030303637333337363139)
1-0:1.8.1(002413.569*kWh)
1-0:1.8.2(001787.332*kWh)
1-0:2.8.1(000000.010*kWh)
1-0:2.8.2(000000.000*kWh)
0-0:96.14.0(0001)
1-0:1.7.0(00.105*kW)
1-0:2.7.0(00.000*kW)
0-0:96.7.21(00003)
0-0:96.7.9(00001)
1-0:99.97.0(0)(0-0:96.7.19)
1-0:32.32.0(00001)
1-0:32.36.0(00000)
0-0:96.13.0()
1-0:32.7.0(235.0*V)
1-0:31.7.0(001*A)
1-0:21.7.0(00.105*kW)
1-0:22.7.0(00.000*kW)
0-1
2021-03-01 01:46:56 DEBUG (MainThread) [dsmr_parser.clients.protocol] received data: :24.1.0(003)
0-1:96.1.0(4730303732303033393333393331343139)
0-1:24.2.1(210301014500W)(01012.491*m3)
!A112

2021-03-01 01:46:56 DEBUG (MainThread) [dsmr_parser

The DSMR lines look completely normal to me, since the log is full of them (a meter reading every second). The zeroconf-Engine-240 lines look suspicious to me, since they only appear here in the log, just before the crash, and I do not know what they mean. Is there somebody who can read these and judge whether they are related to the crash?

That looks like a lot of network traffic going on.
Something doesn’t seem to be set up right there.

Well,

Another HA user having this issue here. Last night it happened again; only a hard restart brought it back to life.
My hardware consists of a Raspberry Pi 3B (V1.2), a zzh (CC2652R) Zigbee stick and a 32 GB micro SD card.
Integrations:
Agent DVR, 2x FRITZ!Box (router), HACS, Meteorologisk Institutt, Mobile App, MQTT (Mosquitto broker), Philips Hue, Raspberry Pi power supply checker, IKEA, Tuya
HACS integrations:
Afvalwijzer (waste scheduling), SONOFF LAN, Fritz!box tools, Yahoo finance.
Add-ons:
Check HA config, DuckDNS, File editor, Mosquitto broker, Samba share, Terminal & SSH, WireGuard, Zigbee2MQTT.

So again, one with waste scheduling, but that might be a coincidence.

Harmpert

I uninstalled my garbage_collection HACS integration and still had the same issue…

That seems excessive, especially considering the quantity of data returned. Is it possible to slow it down?

It is excessive, but as far as I can see from the documentation it cannot be turned down. It’s local push, and only the entity updates can be slowed down. (When I set it up, that option was not yet available, I believe, and I had to exclude the readings from the recorder and only include filtered entities to avoid flooding the database.)
On the other hand, I do not think this is causing the crashes, since I’ve been using it for about a year without any problems.
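
For anyone wanting to do the same: recorder filtering is standard HA configuration. A minimal sketch, with placeholder names standing in for the per-second DSMR entities:

recorder:
  exclude:
    entities:
      - sensor.power_consumption           # placeholder DSMR entities
      - sensor.power_consumption_phase_l1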


What is your network setup? Looks like you have at least three VLANs (or two, with very large subnets).

192.168.178
172.30
172.17

When things go offline, I always suspect network issues: duplicate IP address allocation, bad routing, etc.