I have been using Home Assistant since the beginning of May, replacing OpenHAB and HomeBridge. I'm very happy with the switch and have gained a lot of functionality and usability. However, one thing I keep fighting with is memory consumption: initially it stays around 4-5 GB, but after running for a couple of days it hits the upper limit (I assigned 7 GB to the VM), and the UI randomly disconnects and restarts. Today when I checked the console of the VM I got the error below, which is beyond my IT knowledge; does anybody know what it means?
I have 162 devices in total, around 1000 entities, 40 helpers, and around 50 automation rules. I have read here that HA can usually run on an RPi with 4 GB of memory; is my setup too big to run with 7 GB?
Since the memory allocation happens dynamically, it's hard to get more insight. I'd appreciate any suggestions, thank you in advance!
Start in safe mode. That will disable all third-party integrations and cards. If the memory stops climbing, you know one of them has a memory leak.
Also check your automations and scripts for loops that never exit; see the sketch below for the kind of pattern to look for.
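A minimal sketch of what such a runaway loop can look like (the script name, entity, and logic are hypothetical, just for illustration): a `repeat` whose `while` condition can never become false will spin until HA restarts.

```yaml
# Hypothetical example of a loop that never exits: the while
# condition is always true, so this script runs forever.
script:
  runaway_example:
    sequence:
      - repeat:
          while:
            - condition: template
              value_template: "{{ true }}"  # can never become false
          sequence:
            - service: light.toggle
              target:
                entity_id: light.hypothetical_lamp
            - delay: "00:00:01"
```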
Thank you! I have double-checked and there are no infinite loops. The thing is, the memory usage is already very high right after startup (around 6 GB). Is that expected with the number of devices/entities I have? I wish I could assign more RAM, but my NAS only supports 16 GB.
Something is eating up your memory. All my HA installs, with several add-ons and integrations, are set up on VMs with 4 GB of memory. My home instance runs at 1.4 GB, my office at 1.3 GB, and my test instance at 2 GB.
Are you seeing this RAM usage from HA itself or from the VM?
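If you want to see HA's own numbers over time rather than the hypervisor's, one option is the System Monitor integration. A minimal sketch of the older-style YAML setup (newer HA releases add this integration from the UI instead):

```yaml
# Legacy systemmonitor YAML config; current HA versions set this
# up under Settings > Devices & Services.
sensor:
  - platform: systemmonitor
    resources:
      - type: memory_use_percent
      - type: memory_free
      - type: swap_use_percent
```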
Thank you all for your replies! I have tried disabling/enabling add-ons and integrations one by one, and I found that the Dreame integration increases memory usage significantly (it uses 2.5 GB). All add-ons use less than 400 MB.
Studio Code Server appears to use very little memory at first, but it takes more as you use it, particularly if you have several tabs open. When a tab is closed, the memory is not released, and over time it will consume nearly all of it. It's a feature.
I check it in HA under Settings > Hardware, and it updates immediately when I turn an integration on or off. The memory shown in the VM stays near the maximum with no change; I read somewhere that it's a feature of Linux to use as much memory as possible (unused RAM is kept as disk cache rather than left free).
Why? ESXi shows both allocated and active memory, and depending on the various tasks performed by the HA VM I can see how the active allocation changes. What might be wrong with the hypervisor's reporting?
That's what I did, and there are no problems. If you find any other add-on that's eating memory, add it to the automation above (a sketch follows below). If I remember correctly, the Firefox add-on eats memory too, for example. Not as aggressively as VS Code, but it does.
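For reference, a minimal sketch of what such a scheduled restart automation might look like; the slug a0d7b954_vscode is the usual community Studio Code Server slug, but check yours under Settings > Add-ons:

```yaml
automation:
  - alias: "Restart Studio Code Server nightly"
    trigger:
      - platform: time
        at: "04:00:00"
    action:
      # hassio.addon_restart takes the add-on slug; swap in
      # whichever add-on is leaking memory on your install.
      - service: hassio.addon_restart
        data:
          addon: a0d7b954_vscode
```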