Thanks, is it this average sensor?
After my post this morning I tried one more thing: I logged into the VM and ran 'sudo apt-get update' and 'sudo apt-get upgrade'.
That pulled in an update for Docker.
I rebooted the VM one more time, and since then the memory has been much more stable; it hasn't increased at all.
Your mileage may vary, but worth a try…
It's a bit of a cheeky request, but I don't suppose you would be able to point me in the right direction for getting CPU stats for each Docker container as well as memory? I've got memory working great, but CPU would finish it off. I'm not sure what the value template should be…
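In case it helps in the meantime: one hedged way to get per-container CPU is to ask Docker directly and key on the container name, sidestepping the value-template question entirely. This is only a sketch, and the name `homeassistant` is an example, not necessarily what your container is called:

```shell
# Print the CPU percentage for one container, selected by name so the
# order Docker lists containers in never matters.
# "homeassistant" is an example container name; substitute your own.
docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' \
  | awk '$1 == "homeassistant" { gsub("%", "", $2); print $2 }'
```

A command_line sensor could run this directly, so no template parsing is needed.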
Why not run glances on your host and pull in all the data you can handle using the glances integration.
You need to put the URL in a web browser and you will see what value you need to use. I can't check this for a few days.
So I looked at doing that but couldn't find a way to pull in the data I wanted for individual containers. They always seem to come back in a different order, and I wasn't dedicated enough to put in the time to tame the data when I was looking at this. HA Dockermon is predictable with the data it provides. I was also using it to make switches so I could easily restart containers.
I stand corrected. I just looked at the sensors the integration creates and realized that it only shows the total CPU usage.
I think it does split it up into containers as well… but I found the container order in the JSON changed, and I didn't have the time to play with it so as to reliably get the stats for each container in a predictable way I could read into a sensor. I'm sure it can be done; I just didn't get around to working it out, and using Phil's HA Dockermon had other advantages as well (like switches, etc.).
(I wasn't creating the sensors from an integration - I was looking at the JSON from a web request.)
I can also see the same situation. It happens in Hass.io, in Home Assistant Supervised on generic Linux (Debian), and in Home Assistant Core (Docker, also on Debian).
In all cases I am running on a VM on Proxmox.
The memory footprint is always something like this:
Then, when it goes above 90%, trouble starts.
Looking at the output of top, most of the memory is used by buff/cache.
So if you run:
sync; echo 3 > /proc/sys/vm/drop_caches
Memory usage drops back to the startup situation…
I have not seen any negative impact on HA in doing so. Not sure what is kept in the cache, though…
GV
That is what I do via InfluxDB.
@DDK, can I ask if your issue is still resolved (by running the update/upgrade and updating Docker), or has it gone back to its memory-hungry state?
I am running HA as a VM on Proxmox and have the same issue.
Hi Michael,
It has slowed down considerably, but is not fully resolved.
Before, I reached 90% within hours; now it is taking several days, but it is still creeping up.
I have found that the other solution mentioned above (clearing the cache) helps, but only temporarily - it just builds up again after that. Note that I had to log in as root to run that command; sudo by itself didn't allow it for some reason.
At least that prevents a full restart, so it's a quicker process.
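A note on the sudo failure mentioned above: the redirection in `echo 3 > /proc/sys/vm/drop_caches` is performed by your own (non-root) shell before sudo ever runs, which is why plain sudo on the echo isn't enough. Either of these standard patterns does the write itself as root, with no root login needed:

```shell
# Run the write as root via tee (tee is the sudo'd process doing the write):
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

# Or hand the entire command line, redirection included, to a root shell:
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```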
As luck would have it, I have a few instances of HA (clones) on my Proxmox server which I have been meaning to delete. Over the last couple of days, I booted up a few of these to see which ones were affected by the high memory usage issue we are experiencing. Long story short, the most recent unaffected instance I have is running Home Assistant Core 0.110.1 / Operating System 3.13, and the next most recent, which is affected by the memory usage issue, is running Home Assistant Core 0.110.3 / Operating System 3.13 (I don't have any in between). So the issue would have been introduced in either 0.110.2 or 0.110.3. I am going to run a couple more tests on some new clones I am making of my 0.110.1 instance…
Does anyone know if there is an active bug on this issue?
I don't think it was introduced that recently, given that this thread started in March and the first poster already had the issue in 0.107.
Reading through the thread, it seems to come and go: apparently resolved in one version, then back in the next.
There is an issue with the mobile app at the moment which throws a very large number of errors into my log; maybe that is related? It is supposed to be resolved in 0.111.0, which came out last night, but I'm always a bit hesitant about upgrading to a .0 release, so I'm waiting a couple of days before trying it.
I'm not sure if there is still a memory leak, but without a doubt the recent releases require a lot more RAM!
I've finally resolved my memory-related crashing… My VM used to run wonderfully on 0.107 before updating to 0.110 and now 0.111. But with 0.110 it started crashing every several hours consistently (I have a notification for every restart).
I didn't know why until several days of researching finally helped me find the Python out-of-memory exceptions in the log, showing that the process was being killed due to memory constraints.
I had never needed more than 300-400 MB of RAM, so my VM was allocated 512 MB without issue, but now I've seen the new versions jump to a little over 1 GB (especially when working in the Lovelace UI editor), averaging 650-700 MB consistently (see chart below).
I thought it was a 0.110 issue, so I've already updated to 0.111.4. The memory consumption still climbs, but I've definitely seen it drop and recover some. Since updating my VM allocation to 2 GB, my restarts have stopped completely, and you can see the memory recoveries in the chart below.
Just thought I'd share my recent memory issues in case this helps anyone else.
Are you using any addons? If so have a look at them first. The Glances addon is good for that.
I'm seeing pretty constant overall use of 1.3 GB, with the Home Assistant container using about 1 GB. I've been recording data for 30 days.
Same (or similar) issue here. The memory climbs up and eventually freezes the RPi 3B+ (nowadays it takes roughly one day!). I have a script that reboots the Pi when memory is above 90%, but this is not how it should be. Everything used to run fine until a week or so ago, and the whole setup is fairly stable (config not changed, no new add-ons, etc.), so I guess it has to do with updating either the host software, Hass.io, or one of the add-ons.
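For anyone wanting a similar stopgap, a reboot watchdog along those lines can be sketched as below. The 90% threshold, script path, and cron schedule are my assumptions, not the poster's actual script, and the column positions assume the procps `free` layout:

```shell
#!/bin/sh
# Reboot when used memory exceeds a threshold. Run from root's crontab,
# e.g.: */5 * * * * /usr/local/bin/mem-watchdog.sh
THRESHOLD=90

# procps `free`: on the "Mem:" line, $2 is total and $3 is used.
used_pct=$(free | awk '/^Mem:/ { printf "%d", $3 * 100 / $2 }')

if [ "$used_pct" -gt "$THRESHOLD" ]; then
    logger "mem-watchdog: ${used_pct}% memory used, rebooting"
    /sbin/reboot
fi
```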
I'm not using many add-ons - just Samba share, configurator, and Check Home Assistant configuration (for detecting breaking-change issues in my config).
Otherwise I'd much rather run everything else as separate Docker containers on my Unraid server (MariaDB, InfluxDB, etc.).
The memory has stabilized pretty well at ~680-700 MB (with the DB now connecting to the separate MariaDB Docker server)… and not a single auto-reboot since increasing my VM's allocation from 512 MB to 2 GB (1 GB initially).
If I'm using the Lovelace editor a lot it will climb above 1 GB, but it eventually drops. Overall, HA just needs more RAM since 0.110.
The 1 GB of RAM afforded by the RPi 3B just won't cut it anymore… this is yet another reason I'm super happy that I migrated from my dying RPi 3 over to a VM on Unraid!
Iām getting the same issue on 0.112.
It was never a problem, as it didn't lock the thing up completely, but on this release it slams the CPU as well as the RAM (before it was generally just RAM), locking the VM up completely.
I'm running on Proxmox with the guest agent enabled.
If you have CPU usage maxing out (90%+), please post a py-spy recording with a 360-second duration.
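For reference, such a recording looks roughly like this once py-spy is installed (`pip install py-spy`) somewhere it can see the Home Assistant Python process. The pgrep pattern is a guess on my part and varies by install type:

```shell
# Find the main Home Assistant process (the pattern is an assumption;
# adjust it for your install type).
HA_PID=$(pgrep -f 'homeassistant' | head -n 1)

# Record 360 seconds of profiling data into a flame-graph SVG.
py-spy record --pid "$HA_PID" --duration 360 --output ha-profile.svg
```

On a Supervised/Hass.io install you would typically run this inside the homeassistant container (or with root on the host), since py-spy needs permission to attach to the process.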
I haven't noticed mine getting into the 90%+ range, but it is definitely still all over the place, and many automations, and the dashboard in general, have become unreliably slow or just don't happen at all. I've also mostly given up on logging into HA's dashboard, but I have noticed on several occasions that InfluxDB, Grafana, or Node-RED are simply not running anymore and need to be started back up. So my graph below isn't showing everything.
It's gotten to the point that I've gone from showing off HA every chance I got, both the dashboard and demoing automations to people, to shying away from even mentioning any automation in the house. It's really frustrating and disenchanting.
I'm running on a Pi 3B+ with Supervisor. How do I go about installing and running py-spy correctly in this setup?