I’m not seeing any memory issues. I was having issues, but generally not with HA containers. VS Code was a problem though, and I have a button to restart that one. I have also had issues in the past with SABnzbd, Sonarr and Radarr, and I have an automation to kick those if the memory spikes above rp% for an hour.
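For anyone wanting to do something similar, an automation along those lines can be sketched like this. This is a sketch only — the entity IDs, the 80% threshold, and the restart switch are all hypothetical; it assumes you already have a per-container memory-percentage sensor and a switch (e.g. from ha-dockermon) that can stop/start the container:

```yaml
# Hypothetical sketch: restart a container after a sustained memory spike.
# sensor.sonarr_memory_percent and switch.sonarr_container are placeholders.
automation:
  - alias: "Restart Sonarr on sustained memory spike"
    trigger:
      - platform: numeric_state
        entity_id: sensor.sonarr_memory_percent
        above: 80            # threshold is an example value
        for: "01:00:00"      # must stay above threshold for an hour
    action:
      - service: switch.turn_off
        entity_id: switch.sonarr_container
      - delay: "00:00:10"
      - service: switch.turn_on
        entity_id: switch.sonarr_container
```

Using `for:` on the trigger avoids kicking the container on a brief spike.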
I have noticed a memory issue as well. It was really apparent pre-0.108 and seems to be happening again in 0.109. I am running in a Python venv as another user. I am not using a different database. I do have 184 Modbus sensors connected right now, but that has not always been the case. Previously, I noticed that the History view increased memory usage, which may be normal, but the problem is that it never went back down. I am downloading 0.110-b3 now.
I am seeing very similar patterns. I have a NUC with Proxmox, and memory goes from 25% to 97% over the course of a day. The biggest step seems to be when my system auto-snapshots at 02:30 in the morning: it jumps over 1 GB, then settles and continues a gradual increase. I have a lot of plug-ins, but I have definitely noticed it since I moved to MariaDB. Could this be anything to do with it? I am now also having problems getting my history page to load; it takes forever and then hangs, as it doesn't seem to be able to process the data. My MariaDB database is about 2 GB.
I am still using 0.109.6
Any ideas gratefully received.
As a counter data point (just to clarify that it’s not a systemic issue), I’m not seeing any memory issues:
That’s a Banana Pi running 0.109.6 in a virtualenv on top of Armbian Buster, with the standard sqlite3.
The only memory issues I have experienced were with ssh connections into my Unifi APs, which would occasionally consume all memory. I’ve resolved this now.
Just updated to 0.110. I don’t see any mention of memory leaks being fixed in the release notes or comments, so I’m not sure how this is going to go.
As an update: although my observations aren't entirely consistent, I no longer see a continual increase in memory use over a longer period.
For swap the solution was reasonably simple: edit /etc/dphys-swapfile and comment out the fixed swap size, then reboot. My system is Raspbian-based running on an SSD; this is not advisable on an SD-card-based system.
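For reference, the relevant edit looks roughly like this (assuming the stock Raspbian /etc/dphys-swapfile; with `CONF_SWAPSIZE` commented out, dphys-swapfile falls back to its automatic RAM-based sizing):

```
# /etc/dphys-swapfile
# Comment out the fixed size so swap is sized automatically;
# the "100" here is the Raspbian default, yours may differ.
#CONF_SWAPSIZE=100
```

Then reboot for the change to take effect.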
I’m on 0.110.5 (the newest release at the moment) and I still see this issue.
I have a similar setup to others in this thread: a NUC running Proxmox, with a VM dedicated to HA.
This VM has 8 GB dedicated to it, and it regularly goes to 95% usage. A restart of HA doesn't seem to have an effect; I need to reboot the whole VM to get it back to normal usage, but then it keeps going up again.
It’s impacting the speed of the system, with lights that were normally instant now taking several seconds to respond.
The VM is dedicated to HA, with the only other process on it being Avahi.
I’m running Home Assistant in Docker on it, with several add-ons (Mosquitto, NR, Samba, Z2M, Visual Studio Code, ESPHome).
I’m also running HACS with a small number of integrations (browser_mod, garbage collection, spotify start) and some custom cards for Lovelace.
Opening NodeRed takes several seconds of waiting time before the screen starts to build.
I looked at top in the VM and noticed that NR takes 25% of the memory, but I don't see any other processes taking large amounts of memory.
I’m not using InfluxDB or MariaDB.
VSCode can be pretty brutal on memory I have found… but MariaDB (which I note you are not using) is even worse.
Thanks for the tip.
I hardly ever use vscode, so I will deactivate that and see what happens.
VSCode doesn't seem to be it; I removed it, rebooted the VM, and I can literally see the memory rising: it started at 23%, 10 minutes later I'm at 29%, and it keeps going up.
I use hadockermon to read the memory usage of all containers and have a lovelace card displaying memory and averages for all containers.
How are you getting the memory stats into Home Assistant? As REST sensors? Would you be able to share one of the sensors, as I'm having trouble with this?
Yes, REST sensors. Check my GitHub repo; the sysmonitor package is in there.
Thanks! Do you have a link? I can’t see it in your profile?
From about line 77
It uses Phil Hawthorne’s HAdockermon and a custom average component.
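For anyone who can't find it, a REST sensor against ha-dockermon looks roughly like this. A sketch only — it assumes ha-dockermon's stats endpoint on its default port 8126, and `sonarr` is a placeholder container name; `memory_stats.usage` and `memory_stats.limit` are fields in the Docker stats JSON the endpoint passes through:

```yaml
# Hypothetical example; host, port, and container name are placeholders.
sensor:
  - platform: rest
    name: sonarr_memory
    resource: http://127.0.0.1:8126/container/sonarr/stats
    value_template: >-
      {{ (value_json.memory_stats.usage / value_json.memory_stats.limit * 100)
         | round(1) }}
    unit_of_measurement: "%"
    scan_interval: 60
```

Putting the resource URL in a browser first is the easiest way to confirm which JSON keys your version actually returns.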
Thanks, is it this average sensor?
After my post this morning I tried one more thing: I logged into the VM and did a ‘sudo apt-get update’ and ‘sudo apt-get upgrade’.
That pulled in an update for Docker.
I rebooted the VM one more time, and since then the memory has been much more stable, it hasn’t increased at all.
Your mileage may vary, but worth a try…
It’s a bit of a cheeky request, but I don’t suppose you would be able to point me in the right direction for getting CPU stats for each Docker container as well as memory? I’ve got memory working great, but CPU would finish it off. I’m not sure what the value template should be…
Why not run Glances on your host and pull in all the data you can handle using the Glances integration?
You need to put the URL in a web browser and you will see what value you need to use. I can't check this for a few days.
So I looked at doing that, but I couldn't find a way to pull in the data I wanted for individual containers. They always seem to come in a different order, and I wasn't dedicated enough to put in the time to tame the data when I was looking at this. ha-dockermon is predictable with the data it provides. I was also using it to make switches so I could easily restart containers.
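On the earlier CPU question: the Docker stats JSON that ha-dockermon's stats endpoint passes through includes `cpu_stats` and `precpu_stats`, and Docker's documented formula for CPU percentage is `(cpu_delta / system_delta) * online_cpus * 100`. A sketch of a value template (container name and host are placeholders, and `online_cpus` may not exist on older Docker API versions):

```yaml
# Hypothetical example; verify the JSON keys in a browser first.
sensor:
  - platform: rest
    name: sonarr_cpu
    resource: http://127.0.0.1:8126/container/sonarr/stats
    value_template: >-
      {% set c = value_json.cpu_stats %}
      {% set p = value_json.precpu_stats %}
      {% set cpu_delta = c.cpu_usage.total_usage - p.cpu_usage.total_usage %}
      {% set sys_delta = c.system_cpu_usage - p.system_cpu_usage %}
      {{ ((cpu_delta / sys_delta) * c.online_cpus * 100) | round(1)
         if sys_delta > 0 else 0 }}
    unit_of_measurement: "%"
    scan_interval: 60
```

The `sys_delta > 0` guard avoids a division-by-zero on the first poll, when `precpu_stats` can be empty.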