So, I moved Home Assistant over from a Raspberry Pi 3 to Docker on an Ubuntu 16.04.4 LTS machine. It’s incredibly easy to deploy Home Assistant or the new beta versions, and I can say it’s a lot faster than it was on the Pi.
But… when I start Home Assistant it runs flawlessly for about a day, and then suddenly the memory usage goes sky high. After restarting the Docker container everything is fine again, until the same thing happens, over and over.
Does anybody experience the same?
Using docker version: 18.03.0-ce
Ubuntu Server: 16.04.4 LTS on a 2 core machine with 2.9 GB memory
Interesting. I have a similar setup (HA docker, MQTT docker, MariaDB docker) but do not have any memory leak issues. My HA docker container hovers around 150-190MB of memory usage.
The reason I asked whether you used the built-in MQTT broker is that I experienced memory leaks and poor performance with it. The memory usage would balloon to almost 800MB in just 6 days of uptime. Once I moved to an external MQTT broker (Mosquitto), my memory leak issues went away.
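For anyone wanting to try the same, a minimal external Mosquitto container looks roughly like this (the image name is the official `eclipse-mosquitto`; the service name and volume paths are just my assumptions, adjust to your setup):

```yaml
# docker-compose.yml fragment - run Mosquitto alongside HA
services:
  mosquitto:
    image: eclipse-mosquitto
    restart: unless-stopped
    ports:
      - "1883:1883"          # standard MQTT port
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
```

Then point HA at it (e.g. `mqtt: broker: <host IP>` in configuration.yaml) instead of using the embedded broker.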
I run HA in docker on Ubuntu 17.10 server (along with 12 other containers) without issue. Same as @squirtbrnr reported, < 200 MB memory usage (reported by Portainer). Just updated to docker 18.03.0-ce as well this morning, so I don’t have long-term stats on that.
Same issue… I moved over from virtualenv to Docker and noticed that after some time HA becomes unresponsive and the container uses 3-4 GB of RAM and 100% CPU…
In your Z-Wave options.xml file, change the debugging or logging option to false (I don’t remember the actual name of the option). This disables Z-Wave logging to its own log file, but it’s worth a shot to see if that’s causing a memory leak. I forgot that I had disabled it a while ago, along with a few other things, to fix my memory leak issues.
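If anyone wants to try this, the relevant entries in OpenZWave’s options.xml look something like the following. The option names here are from memory and may not match your OpenZWave version exactly, so double-check against your own file:

```xml
<!-- options.xml (OpenZWave) - turn off the standalone OZW_Log file -->
<Option name="Logging" value="false" />
<!-- If you keep logging on, these (assumed names) limit what gets written -->
<Option name="SaveLogLevel" value="Alert" />
<Option name="QueueLogLevel" value="Error" />
```

Restart HA after editing so the Z-Wave component picks up the change.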
I don’t think that’s the case here. I have Z-Wave logging set to true, but it is limited to alerts and errors only. The file is tiny and barely updates.
I investigated the issue a bit. From what I can see, the container’s logging (in my case a JSON file) might be the problem. Sadly I couldn’t open the log file (6GB) to see what’s wrong… however, I think it holds all the console output from HASS. Try disabling it, or set a maximum limit on the file size.
Depending on your setup, a container/PC restart resets the log, which gives the impression of randomness. It’s also possible the container crashes (RAM full), deletes the log, and restarts by itself (if you enabled that option).
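If you want to try capping it: Docker’s json-file log driver supports rotation. A host-wide default can be set in /etc/docker/daemon.json; the sizes below are just an example, pick whatever fits your disk:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the Docker daemon afterwards; this only applies to containers created after the change. You can check how big a container’s log currently is with `du -h $(docker inspect --format='{{.LogPath}}' <container>)`.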
The standard Docker setup logs any console output from the container to a JSON file. For a long-running program like HA that also writes a lot to the console, that file can get very large and unwieldy, as you’ve seen. Unfortunately there’s no way to limit the amount of data HA sends to the console without turning off logging altogether.
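You can also set the same rotation options per container instead of host-wide, e.g. in docker-compose (the service name, image, and size limits below are assumptions, adjust to your setup):

```yaml
# docker-compose.yml fragment - rotate the json-file log for one service
services:
  homeassistant:
    image: homeassistant/home-assistant
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once the current log file hits 10 MB
        max-file: "3"     # keep at most 3 rotated files
```

The equivalent for plain `docker run` is `--log-opt max-size=10m --log-opt max-file=3`; either way the container has to be re-created for it to take effect.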
It might be a good idea to mention this in the official HASS Docker installation documentation (along with how to disable or limit the logging).
I’m surprised more people haven’t run into this problem yet (maybe they don’t have as many sensors, or they have a lot more RAM).