Help: My HA is restarting, why?

Hello,

I've been having some trouble with HA for the last 3-4 days: HA is restarting by itself.
As far as I know, nothing changed during these 3-4 days.
Restarts happen once a day, in the morning (between 4am and 10am, so not at a fixed time).
Only HA itself restarts, not the host.
CPU usage before a restart is fine (17%),
but memory usage is quite high just before a restart occurs (~90% instead of the usual 60%).
I don't see any recurring errors in the HA logs,
and nothing useful in the supervisor log.
In the host log I have this (the first lines seem interesting): it looks like the host is killing python3, I believe because it is consuming too much memory.

[91095.514874] python3 cpuset=0845ea269bde145ed82cc68cc48882f1d87054a10baac0f0fa9352b978091e15 mems_allowed=0
[91095.523163] CPU: 2 PID: 3274 Comm: python3 Tainted: G         C        4.19.127-v8 #1
[91095.530725] Hardware name: Raspberry Pi 3 Model B Plus Rev 1.3 (DT)
[91095.534616] Call trace:
[91095.538397]  dump_backtrace+0x0/0x170
[91095.542243]  show_stack+0x24/0x30
[91095.545980]  dump_stack+0xa8/0xdc
[91095.549699]  dump_header+0x68/0x248
[91095.553292]  oom_kill_process+0xe4/0x350
[91095.556861]  out_of_memory+0xf4/0x2f0
[91095.560411]  __alloc_pages_nodemask+0x804/0xdd0
[91095.563926]  filemap_fault+0x408/0x5b0
[91095.567353]  ext4_filemap_fault+0x38/0x60
[91095.570711]  __do_fault+0x58/0x120
[91095.574020]  __handle_mm_fault+0x698/0xb70
[91095.577295]  handle_mm_fault+0x134/0x210
[91095.580466]  do_page_fault+0x150/0x4c0
[91095.583567]  do_translation_fault+0xa4/0xb4
[91095.586606]  do_mem_abort+0x68/0x110
[91095.589548]  do_el0_ia_bp_hardening+0x64/0xb0
[91095.592497]  el0_ia+0x1c/0x20
[91095.596281] Mem-Info:
[91095.599233] active_anon:85876 inactive_anon:87443 isolated_anon:0
[91095.599233]  active_file:207 inactive_file:438 isolated_file:0
[91095.599233]  unevictable:0 dirty:0 writeback:0 unstable:0
[91095.599233]  slab_reclaimable:13453 slab_unreclaimable:21454
[91095.599233]  mapped:178 shmem:4 pagetables:3904 bounce:0
[91095.599233]  free:957 free_pcp:71 free_cma:35
[91095.615995] Node 0 active_anon:343504kB inactive_anon:349772kB active_file:828kB inactive_file:1668kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:612kB dirty:0kB writeback:0kB shmem:16kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[91095.626665] DMA32 free:4072kB min:3848kB low:4808kB high:5768kB active_anon:343504kB inactive_anon:349772kB active_file:916kB inactive_file:1496kB unevictable:0kB writepending:0kB present:970752kB managed:937988kB mlocked:0kB kernel_stack:10160kB pagetables:15616kB bounce:0kB free_pcp:188kB local_pcp:0kB free_cma:140kB
[91095.638578] lowmem_reserve[]: 0 0 0
[91095.641765] DMA32: 227*4kB (UMEHC) 38*8kB (MEH) 53*16kB (UMEH) 26*32kB (UH) 6*64kB (H) 6*128kB (H) 1*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 4300kB
[91095.648209] 1393 total pagecache pages
[91095.651920] 731 pages in swap cache
[91095.655234] Swap cache stats: add 92658, delete 91926, find 806237/831869
[91095.658911] Free swap  = 0kB
[91095.662534] Total swap = 234496kB
[91095.666523] 242688 pages RAM
[91095.670305] 0 pages HighMem/MovableOnly
[91095.674061] 8191 pages reserved
[91095.678237] 2048 pages cma reserved
[91095.682084] Tasks state (memory values in pages):
[91095.685834] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[91095.693711] [    108]     0   108    50038      162   323584        4          -250 systemd-journal
[91095.701958] [    126]     0   126     2374       85    49152      171         -1000 systemd-udevd
[91095.710247] [    242]  1008   242    20435       78    53248       29             0 systemd-timesyn
[91095.718358] [    247]     0   247      576       16    32768        0             0 avahi-dnsconfd
[91095.726378] [    249]  1002   249     1394      169    40960       42          -900 dbus-daemon
[91095.734605] [    254]     0   254   165187      547   135168      190             0 NetworkManager
[91095.742700] [    262]     0   262    58282       24    69632      220             0 rauc
[91095.750430] [    267]     0   267    74587       23    65536        0             0 rngd
[91095.758240] [    269]     0   269     2362       92    45056       83             0 wpa_supplicant
[91095.770982] [    300]     0   300      501       28    32768        0             0 hciattach
[91095.779581] [    303]     0   303     1791       90    40960        0             0 bluetoothd
[91095.787757] [    358]  1000   358     1492      119    45056       48             0 avahi-daemon
[91095.796358] [    359]     0   359   574158     4286   454656     1035          -500 dockerd
[91095.804864] [    360]  1000   360     1388        8    40960       59             0 avahi-daemon
[91095.813164] [    369]     0   369   567989     1971   368640      528          -500 containerd
[91095.822051] [   1008]     0  1008   155345        0   114688      148          -500 docker-proxy
[91095.830593] [   1016]     0  1016   193224      179   135168      106          -999 containerd-shim
[91095.839517] [   1033]     0  1033       48        0    28672        4             0 s6-svscan
[91095.871012] [   1130]     0  1130       48        0    28672        4             0 s6-supervise
[91095.879673] [   1280]     0  1280       48        0    28672        3             0 s6-supervise
[91095.888487] [   1283]     0  1283   177985       19    81920      644             0 observer
[91095.897059] [   1312]     0  1312   221345     1113   221184      398             0 docker
[91095.905814] [   1314]     0  1314      519       23    32768        1             0 agetty
[91095.914493] [   1379]     0  1379   193224      166   131072       94          -999 containerd-shim
[91095.923557] [   1394]     0  1394       48        0    28672        4             0 s6-svscan
[91095.932362] [   1477]     0  1477       48        0    28672        4             0 s6-supervise
[91095.941098] [   1646]     0  1646     1161        2    36864      170         -1000 udevd
[91095.949604] [   1697]     0  1697       48        0    28672        3             0 s6-supervise
[91095.958094] [   1698]     0  1698       48        0    28672        3             0 s6-supervise
[91095.966570] [   1702]     0  1702    46809    11127   356352     1531             0 python3
[91095.975060] [   1703]     0  1703     1098      114    36864      435             0 bash
[91096.001102] [   1843]     0  1843   193224      163   135168       96          -999 containerd-shim
[91096.009779] [   1858]     0  1858       47        0    28672        4             0 s6-svscan
[91096.018487] [   1939]     0  1939       44        0    16384        6             0 foreground
[91096.026841] [   1940]     0  1940       48        0    28672        3             0 s6-supervise
[91096.035385] [   1951]     0  1951       43        0    16384        4             0 foreground
[91096.043573] [   2084]     0  2084   185899     1605   167936      163             0 coredns
[91096.053961] [   2097]     0  2097   193224      173   139264      100          -999 containerd-shim
[91096.070391] [   2112]     0  2112       48        0    28672        4             0 s6-svscan
[91096.085973] [   2195]     0  2195       48        0    28672        3             0 s6-supervise
[91096.099886] [   2414]     0  2414   193224      182   135168      104          -999 containerd-shim
[91096.108650] [   2491]     0  2491       48        1    28672        0             0 s6-svscan
[91096.117534] [   2638]     0  2638       44        2    16384        0             0 foreground
[91096.125917] [   2639]     0  2639       48        1    28672        0             0 s6-supervise
[91096.133903] [   2652]     0  2652       43        1    16384        0             0 foreground
[91096.142401] [   2916]     0  2916      407        1    32768        0             0 sleep
[91096.150780] [   2960]     0  2960   211657      167   143360      111          -999 containerd-shim
[91096.158873] [   2977]     0  2977       48        1    28672        0             0 s6-svscan
[91096.167351] [   2982]     0  2982     4265        4    57344      226         -1000 udevd
[91096.176276] [   3022]     0  3022       48        0    28672        3             0 s6-supervise
[91096.184468] [   3024]     0  3024    23543      157    86016      547             0 pulseaudio
[91096.192515] [   3071]     0  3071       48        1    28672        0             0 s6-supervise
[91096.200650] [   3250]     0  3250       48        1    28672        0             0 s6-supervise
[91096.208802] [   3253]     0  3253      212       11    32768        0             0 mdns-repeater
[91096.217080] [   3297]     0  3297   155345        0   110592      139          -500 docker-proxy
[91096.225583] [   3311]     0  3311   155345        0   106496      144          -500 docker-proxy
[91096.234103] [   3325]     0  3325   136912        0    98304      157          -500 docker-proxy
[91096.242602] [   3338]     0  3338   136912        0   106496      145          -500 docker-proxy
[91096.251053] [   3345]     0  3345   193224      173   135168       97          -999 containerd-shim
[91096.319810] [   3361]     0  3361      196        5    28672        4             0 docker-init
[91096.328366] [   3417]     0  3417     1060        2    45056      509             0 bash
[91096.337157] [   3438]     0  3438   193224      193   139264       98          -999 containerd-shim
[91096.345861] [   3456]     0  3456       48        0    28672        5             0 s6-svscan
[91096.355714] [   3497]     0  3497       48        0    28672        3             0 s6-supervise
[91096.364259] [   3690]     0  3690   155697        0   118784      150          -500 docker-proxy
[91096.372721] [   3699]     0  3699   211657      153   147456      112          -999 containerd-shim
[91096.381376] [   3715]     0  3715       48        0    28672        4             0 s6-svscan
[91096.390037] [   3809]     0  3809       48        0    28672        3             0 s6-supervise
[91096.398785] [   3947]     0  3947     1204       16    32768       78             0 socat
[91096.407368] [   3948]     0  3948    10293      479   114688     8610             0 mosquitto
[91096.416181] [   4051]     0  4051   211657      163   143360       89          -999 containerd-shim
[91096.424625] [   4066]     0  4066      196        4    28672        5             0 docker-init
[91096.433343] [   4290]     0  4290       48        0    28672        4             0 s6-svscan
[91096.441890] [   4331]     0  4331       44        0    16384        6             0 foreground
[91096.450403] [   4332]     0  4332       48        0    28672        4             0 s6-supervise
[91096.458945] [   4343]     0  4343       43        0    16384        4             0 foreground
[91096.467350] [   4500]     0  4500     1093      106    45056      438             0 bash
[91096.475685] [   4515]     0  4515   173842       20   118784      124          -500 docker-proxy
[91096.485596] [   4526]     0  4526   193224      204   135168       98          -999 containerd-shim
[91096.504262] [   4544]     0  4544      196        4    28672        6             0 docker-init
[91096.512875] [   4661]     0  4661       48        0    28672        4             0 s6-svscan
[91096.521284] [   4700]     0  4700       44        0    16384        6             0 foreground
[91096.529659] [   4704]     0  4704       48        0    28672        3             0 s6-supervise
[91096.537837] [   4718]     0  4718       43        0    16384        4             0 foreground
[91096.546206] [   4781]     0  4781       48        0    28672        3             0 s6-supervise
[91096.554531] [   4782]     0  4782       48        0    28672        3             0 s6-supervise
[91096.562766] [   4789]     0  4789     5239        9    73728     4169             0 ttyd
[91096.571126] [   4792]     0  4792     1065       72    40960       58             0 sshd
[91096.579491] [   4945]     0  4945      409        0    32768       15             0 run.sh
[91096.587763] [   4999]     0  4999   211220      363   151552       98          -500 docker-proxy
[91096.596094] [   5013]     0  5013   193224      211   135168       95          -999 containerd-shim
[91096.604556] [   5037]     0  5037       48        0    28672        5             0 s6-svscan
[91096.613891] [   5152]     0  5152       48        0    28672        3             0 s6-supervise
[91096.622850] [   5153]     0  5153       48        0    28672        3             0 s6-supervise
[91096.631195] [   5154]     0  5154     6940       83    86016      324             0 nmbd
[91096.639448] [   5156]     0  5156    10638      121   110592      426             0 smbd
[91096.647777] [   5262]     0  5262    10139      140   102400      377             0 smbd-notifyd
[91096.656663] [   5266]     0  5266    10141      103   102400      415             0 cleanupd
[91096.665257] [   5268]     0  5268       48        0    28672        4             0 s6-supervise
[91096.673876] [   5323]     0  5323     6976      208    86016     3752             0 gunicorn
[91096.682767] [   5670]     0  5670   211657      194   143360      103          -999 containerd-shim
[91096.691570] [   5687]     0  5687       48        0    28672        5             0 s6-svscan
[91096.700097] [   5706]     0  5706    16074     5970   159744     6066             0 gunicorn
[91096.708616] [   5726]     0  5726       48        0    28672        3             0 s6-supervise
[91096.717182] [   5727]     0  5727       48        4    28672        0             0 s6-supervise
[91096.725593] [   5730]     0  5730    94064       66    86016       67             0 adb
[91096.734292] [   5790]     0  5790       48        0    28672        3             0 s6-supervise
[91096.742792] [   5999]     0  5999     3289        2    32768      230         -1000 udevd
[91096.751516] [   6173]     0  6173       48        0    28672        3             0 s6-supervise
[91096.760249] [   6177]     0  6177   949673   120337  7041024    17469             0 python3
[91096.768951] [   7518]     0  7518   212009      169   143360      113          -999 containerd-shim
[91096.777792] [   7587]     0  7587   155345        0   106496      164          -500 docker-proxy
[91096.786704] [   7612]     0  7612   193576      212   139264       95          -999 containerd-shim
[91096.795429] [   7653]     0  7653      196        4    28672        6             0 docker-init
[91096.804173] [   7659]     0  7659       48        0    28672        5             0 s6-svscan
[91096.813063] [   7961]     0  7961       48        0    28672        4             0 s6-svscan
[91096.821562] [   8033]     0  8033       44        0    16384        6             0 foreground
[91096.830259] [   8036]     0  8036       48        0    28672        3             0 s6-supervise
[91096.838890] [   8035]     0  8035       48        0    28672        3             0 s6-supervise
[91096.847540] [   8068]     0  8068       43        0    16384        4             0 foreground
[91096.855931] [   8384]     0  8384      603        2    40960       48             0 run.sh
[91096.864387] [   8404]     0  8404       48        0    28672        3             0 s6-supervise
[91096.873250] [   8407]     0  8407     6893       85    86016     4410             0 hass-configurat
[91096.888947] [   8919]     0  8919    68851     5680   487424      744             0 node
[91096.897167] [   8933]     0  8933    66780     3706   438272      743             0 node
[91096.905795] [   8953]     0  8953    72325     8472   663552     1529             0 node
[91096.914190] [  29447]     0 29447     4290        2    61440      122             0 git
[91096.922727] [  81697]     0 81697    10723      264   110592      301             0 smbd
[91096.931780] [  93140]     0 93140      407        1    28672        0             0 sleep
[91096.940526] [  93206]     0 93206      405        1    24576        0             0 sleep
[91096.949182] [  93262]     0 93262    10638      144    94208      405             0 smbd
[91096.957823] [  93263]     0 93263     6940       98    81920      309             0 nmbd
[91096.966786] [  93266]     0 93266       46        5    20480        0             0 justc-envdir
[91096.975793] Out of memory: Kill process 6177 (python3) score 475 or sacrifice child
[91096.985885] Killed process 6177 (python3) total-vm:3798692kB, anon-rss:481348kB, file-rss:0kB, shmem-rss:0kB
[91097.421744] oom_reaper: reaped process 6177 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[91102.649122] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready
[91102.657312] brcmfmac: brcmf_cfg80211_set_power_mgmt: power save enabled
[91311.667153] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 5
[91311.680173] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91313.675261] kauditd_printk_skb: 50 callbacks suppressed
[91313.675274] audit: type=1325 audit(1602661908.929:268): table=nat family=2 entries=31
[91313.696861] audit: type=1300 audit(1602661908.929:268): arch=c00000b7 syscall=208 success=yes exit=0 a0=4 a1=0 a2=40 a3=b0fe760 items=0 ppid=359 pid=95567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj==unconfined key=(null)
[91313.726745] audit: type=1327 audit(1602661908.929:268): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4400444F434B4552002D7000746370002D6400302F30002D2D64706F72740038343835002D6A00444E4154002D2D746F2D64657374696E6174696F6E003137322E33302E33332E363A383438350000002D690068617373696F
[91313.751787] audit: type=1325 audit(1602661908.993:269): table=filter family=2 entries=38
[91313.763629] audit: type=1300 audit(1602661908.993:269): arch=c00000b7 syscall=208 success=yes exit=0 a0=4 a1=0 a2=40 a3=2d1d1810 items=0 ppid=359 pid=95572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj==unconfined key=(null)
[91313.791256] audit: type=1327 audit(1602661908.993:269): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4400444F434B45520000002D690068617373696F002D6F0068617373696F002D7000746370002D64003137322E33302E33332E36002D2D64706F72740038343835002D6A00414343455054
[91313.815214] audit: type=1325 audit(1602661909.069:270): table=nat family=2 entries=30
[91313.825964] audit: type=1300 audit(1602661909.069:270): arch=c00000b7 syscall=208 success=yes exit=0 a0=4 a1=0 a2=40 a3=3f9ed3d0 items=0 ppid=359 pid=95578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj==unconfined key=(null)
[91313.852519] audit: type=1327 audit(1602661909.069:270): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4400504F5354524F5554494E47002D7000746370002D73003137322E33302E33332E36002D64003137322E33302E33332E36002D2D64706F72740038343835002D6A004D415351554552414445
[91313.968660] hassio: port 12(veth2376f12) entered disabled state
[91313.975543] veth2703128: renamed from eth0
[91314.105368] hassio: port 12(veth2376f12) entered disabled state
[91314.123004] device veth2376f12 left promiscuous mode
[91314.128659] hassio: port 12(veth2376f12) entered disabled state
[91314.128739] audit: type=1700 audit(1602661909.318:271): dev=veth2376f12 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
[91320.292887] hassio: port 12(veth5b1f017) entered blocking state
[91320.308437] hassio: port 12(veth5b1f017) entered disabled state
[91320.319789] device veth5b1f017 entered promiscuous mode
[91320.325498] kauditd_printk_skb: 2 callbacks suppressed
[91320.325512] audit: type=1700 audit(1602661915.547:272): dev=veth5b1f017 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
[91320.345167] audit: type=1300 audit(1602661915.547:272): arch=c00000b7 syscall=206 success=yes exit=40 a0=c a1=400124e510 a2=28 a3=0 items=0 ppid=1 pid=359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="dockerd" exe="/usr/bin/dockerd" subj==unconfined key=(null)
[91320.362514] IPv6: ADDRCONF(NETDEV_UP): veth5b1f017: link is not ready
[91320.365579] audit: type=1327 audit(1602661915.547:272): proctitle=2F7573722F62696E2F646F636B657264002D480066643A2F2F002D2D73746F726167652D6472697665723D6F7665726C617932002D2D6C6F672D6472697665723D6A6F75726E616C64002D2D646174612D726F6F74002F6D6E742F646174612F646F636B6572
[91320.563584] audit: type=1325 audit(1602661915.818:273): table=nat family=2 entries=29
[91320.574812] audit: type=1300 audit(1602661915.818:273): arch=c00000b7 syscall=208 success=yes exit=0 a0=4 a1=0 a2=40 a3=22f5c240 items=0 ppid=359 pid=95934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj==unconfined key=(null)
[91320.602224] audit: type=1327 audit(1602661915.818:273): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100444F434B4552002D7000746370002D6400302F30002D2D64706F72740038343835002D6A00444E4154002D2D746F2D64657374696E6174696F6E003137322E33302E33332E363A383438350000002D690068617373696F
[91320.644728] audit: type=1325 audit(1602661915.899:274): table=filter family=2 entries=37
[91320.655320] audit: type=1300 audit(1602661915.899:274): arch=c00000b7 syscall=208 success=yes exit=0 a0=4 a1=0 a2=40 a3=1f589560 items=0 ppid=359 pid=95939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-legacy-multi" subj==unconfined key=(null)
[91320.682184] audit: type=1327 audit(1602661915.899:274): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D690068617373696F002D6F0068617373696F002D7000746370002D64003137322E33302E33332E36002D2D64706F72740038343835002D6A00414343455054
[91320.735309] audit: type=1325 audit(1602661915.989:275): table=nat family=2 entries=30
[91324.302397] eth0: renamed from veth7347959
[91324.333965] IPv6: ADDRCONF(NETDEV_CHANGE): veth5b1f017: link becomes ready
[91324.340468] hassio: port 12(veth5b1f017) entered blocking state
[91324.345954] hassio: port 12(veth5b1f017) entered forwarding state
[91418.629475] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready
[91418.634317] brcmfmac: brcmf_cfg80211_set_power_mgmt: power save enabled
[91729.716215] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 1
[91729.726242] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 3
[91729.736050] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91729.747147] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91729.756960] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 0
[91729.766988] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 5
[91729.776582] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 1
[91729.786608] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 4
[91729.796424] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91729.806236] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 0
[91729.816475] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 1
[91729.826715] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91729.837169] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 0
[91729.848065] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91729.859792] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 2
[91729.871098] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 0
[91729.882400] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 1
[91729.893926] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 3
[91729.904586] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 7
[91729.915470] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 1
[91729.926562] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 6
[91729.938067] usb 1-1.1.2: USB disconnect, device number 6
[91729.944378] cdc_acm 1-1.1.2:1.0: failed to set dtr/rts
[91729.951312] WARN::dwc_otg_hcd_urb_dequeue:638: Timed out waiting for FSM NP transfer to complete on 0
[91730.153662] usb 1-1.1.2: new full-speed USB device number 7 using dwc_otg
[91730.252668] usb 1-1.1.2: New USB device found, idVendor=1cf1, idProduct=0030, bcdDevice= 1.00
[91730.262132] usb 1-1.1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[91730.272150] usb 1-1.1.2: Product: ConBee II
[91730.277240] usb 1-1.1.2: Manufacturer: dresden elektronik ingenieurtechnik GmbH
[91730.282235] usb 1-1.1.2: SerialNumber: DE2195889
[91730.291079] cdc_acm 1-1.1.2:1.0: ttyACM0: USB ACM device
[91733.735850] usb 1-1.1.2: USB disconnect, device number 7
[91733.931682] usb 1-1.1.2: new full-speed USB device number 8 using dwc_otg
[91734.031541] usb 1-1.1.2: New USB device found, idVendor=1cf1, idProduct=0030, bcdDevice= 1.00
[91734.040877] usb 1-1.1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[91734.050448] usb 1-1.1.2: Product: ConBee II
[91734.055478] usb 1-1.1.2: Manufacturer: dresden elektronik ingenieurtechnik GmbH
[91734.060673] usb 1-1.1.2: SerialNumber: DE2195889
[91734.070646] cdc_acm 1-1.1.2:1.0: ttyACM0: USB ACM device

Here are the details of my memory monitoring:
(screenshot: memory graph)

How can I find out what is consuming so much memory?
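One way to answer this yourself (a sketch, assuming you have a shell on the host or via the SSH add-on) is to sort processes by resident set size while memory is climbing, which is the same number the OOM killer is reacting to:

```shell
# List the top 10 processes by resident memory (RSS, in KiB), largest first.
# Run this while usage is climbing, e.g. from a cron job or a loop.
ps -eo rss,pid,comm | sort -rn | head -n 10
```

If the big consumer is a `python3` inside a container, `docker stats --no-stream` gives a similar per-container view, which helps tell the HA core container apart from add-ons.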

Details:

  • Raspberry Pi 3B (HassOS 4.13)
  • Supervisor 247
  • HA 0.116.2

Thanks !

Thomas

I have the same problem.
I just updated my RPi 3B to the latest Supervisor and HA.
I keep running out of memory and HA keeps restarting.

After looking through the community, I disabled all the add-ons I wasn't really using, and after restarting the system it seems I have more free memory.
We'll see if it stays that way for long enough.

(screenshot)

Are you using the ONVIF integration?

I had 8 cameras set up with ONVIF, and after moving to the latest beta I found that Python was eating up all the memory on my machine before being killed (in about 30 minutes).

I removed things piece by piece until I worked out what was doing it, removed the ONVIF integration (I didn’t need it) and all is well.

If anyone knows where the beta forum is, I should of course mention it there :)

Kind thoughts,

Andrew

It’s here: https://discord.gg/Sd7QkZ

I'm not using it.

I've been having the same issue for the past week or so. I'll try removing ONVIF and setting up the cameras manually to see if it helps. I'm rebooting randomly and RAM use is way higher. Thanks for the clue!

Actually, my issue is getting worse. Now HassOS itself is crashing, and I have to physically power-cycle my Raspberry Pi to restore HA.

So now, I'm looking for a way to:

  • run the supervisor log at debug level
  • save dmesg across restarts

I’m already saving the home-assistant.log using this:
tail -f home-assistant.log | tee ./logs/home-assistant-persisted.log &> /dev/null &

Do you have any suggestions?
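In the same spirit as the `tail` trick above, a small script could snapshot memory usage periodically so the state just before the crash survives a restart. This is only a sketch; the `./logs` path mirrors the one already used above and should be adjusted to wherever your persisted logs live:

```shell
#!/bin/sh
# Append a timestamped memory snapshot to a persistent log (sketch; run it
# from cron or a loop so you can see the leak building up before a crash).
LOG=./logs/memory-persisted.log
mkdir -p "$(dirname "$LOG")"
echo "$(date -Iseconds) $(free -m | awk '/^Mem:/ {print $3 " MiB used"}')" >> "$LOG"
# dmesg could be captured the same way, e.g.:
# dmesg > ./logs/dmesg-latest.log
```

Note that `dmesg` only covers the current boot, so a periodic snapshot (rather than one taken after reboot) is what preserves the OOM messages if the whole host goes down.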

Sorry for the double post. I actually found this: https://developers.home-assistant.io/docs/operating-system/debugging/

It gives you access to the host itself, where you can check many more things.