[Solved] Moved over to docker, all well except huge memory leak

So, I moved Home Assistant over from a Raspberry Pi 3 to docker on an Ubuntu 16.04.4 LTS machine. It’s incredibly easy to deploy Home Assistant or the new beta versions, and I can say it’s a lot faster than it was on the Raspberry Pi.
But… when I start Home Assistant it runs flawlessly for about a day, and then suddenly the memory usage goes sky high. After restarting the docker container everything is fine again, until the same thing happens, over and over.

Does anybody experience the same?
Using docker version: 18.03.0-ce
Ubuntu Server: 16.04.4 LTS on a 2 core machine with 2.9 GB memory


Do you use the internal MQTT broker?

I’m running everything in docker on a rpi2 and don’t experience such memory issues.

I have containers for HA, AppDaemon, MQTT, MySQL, Grafana, InfluxDB and some other stuff.
Docker version is the latest stable one.

I’m not using the integrated MQTT, if that matters.

@squirtbrnr No, I have MQTT on my Synology.

@jo-me I have a container for HA, Postgres, Grafana, InfluxDB and also some other stuff.

You could set up a stripped-down version of HA in a container alongside your current installation and see whether that one behaves identically.

Then add configuration from your main HA instance piece by piece until you find the one that’s causing it.

The second instance could also monitor your first instance and restart it when its memory usage becomes critical.
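The watchdog idea above can be sketched as a small shell script driven by `docker stats`. The container name and the threshold are examples, not from this thread; the docker calls are commented out so the helper can be tested on its own:

```shell
# Sketch: restart a container when its memory share of the host gets critical.
# Container name and threshold are examples -- adjust to your setup.
CONTAINER="home-assistant"
LIMIT=80   # percent of host memory considered critical

# docker's MemPerc output looks like "83.52%"; strip the "%" and
# keep only the integer part so we can compare it numerically.
mem_pct() {
    printf '%s\n' "$1" | sed 's/%//' | cut -d. -f1
}

# Usage (requires docker; run from cron or a loop in the second instance):
# pct=$(mem_pct "$(docker stats --no-stream --format '{{.MemPerc}}' "$CONTAINER")")
# [ "$pct" -ge "$LIMIT" ] && docker restart "$CONTAINER"
```

Running it from cron every few minutes would give you a crude safety net until the actual leak is found.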

Interesting. I have a similar setup (HA docker, MQTT docker, MariaDB docker) but do not have any memory leak issues. My HA docker container hovers around 150-190MB of memory usage.

The reason I asked if you used the built in MQTT broker, I experienced memory leak issues and poor performance with it. The memory usage would balloon to almost 800MB in just 6 days of uptime. Once I moved to an external MQTT (Mosquitto) broker, my memory leak issues went away.
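For reference, pointing HA at an external broker like Mosquitto is just the `mqtt` section of `configuration.yaml`; the host and port below are examples, not values from this thread:

```yaml
# configuration.yaml -- external broker instead of the embedded one
mqtt:
  broker: 192.168.1.50   # IP of the Mosquitto container/host (example)
  port: 1883
```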

I run HA in docker on Ubuntu 17.10 server (along with 12 other containers) without issue. Same as @squirtbrnr reported: < 200 MB memory usage (reported by Portainer). Just updated to docker 18.03.0-ce this morning as well, so I don’t have long-term stats on that.

Are you seeing anything spamming the HA logs?

Which docker image are you using?
I have the same version setup and no memory leak from HA.

I use the version from Docker Hub.

Hey, did you manage to get Home Assistant TTS to speak in docker?

You could disable components one by one until the leak stops.

It may not be a leak but a component using lots of memory. FFmpeg can use lots of memory (2 MB+).

I am experiencing the same thing. HASS 0.66.1 (latest) running on a QNAP in the docker container station.

In 2-3 days the HASS container uses 3-4 GB (all of it allocated, and CPU use increases dramatically).

I only use ZWave and a generic camera.

Short update. I disabled everything except ZWave and I am still experiencing the memory leak. Could it be Python itself?

Same issues… moved over from virtualenv to docker and noticed that after some time HA becomes unresponsive, with the container using 3-4 GB RAM and 100% CPU…

Running on an Intel NUC with 4 CPUs and 16GB RAM

OS: Alpine Linux v3.7
Docker: 17.12.1-ce

In your zwave options.xml file, change the debugging or logging option to false (don’t remember the actual name of the option). This will disable the zwave logging to its own log file but it’s worth a shot to see if that’s causing a memory leak. I forgot I had disabled that a while ago as well as a few other things to fix my memory leak issues.
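If it helps, the option in question is probably the `Logging` entry; a sketch of the relevant part of OpenZWave’s `options.xml` (option name assumed from OpenZWave defaults, so verify against your own file):

```xml
<!-- options.xml (OpenZWave) -- stop writing the separate OZW_Log.txt file.
     Option name assumed from OpenZWave defaults; check your own file. -->
<Options>
  <Option name="Logging" value="false" />
</Options>
```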

I don’t think that is the case. I have Z-Wave logging set to true, but it is limited to alerts and errors only. The file size is minimal and it barely updates.

I’m still encountering these memory leaks. Although it looks random when they appear.

I investigated the issue a bit. From what I can see, the container’s logging (in my case a JSON file) might be the problem. Sadly I couldn’t read the log file (6 GB) to see what is wrong; however, I think it holds all the console output from HASS. Try disabling it or setting a maximum limit on the file size.
Depending on your approach, a container/PC restart resets the log, which gives the impression of randomness. Also, maybe the container crashes (RAM full), deletes the log and restarts by itself (if you enabled the option).

The standard docker setup is to log any console output from the container to a json file. For a long-running program like HA that also outputs a lot of information to the console that file can get very large and unwieldy as you’ve seen. Unfortunately there’s no way to limit the amount of data that gets sent to the console from within HA without turning off logging altogether.
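You can, however, cap that json-file log on the Docker side. One way (sizes here are examples) is to set rotation options globally in `/etc/docker/daemon.json` and then restart the Docker daemon:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

The same can be done per container with `--log-opt max-size=10m --log-opt max-file=3` on `docker run`, which only takes effect when the container is recreated.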

It might be a good idea to mention this in the official HASS docker installation documentation (and how to disable or limit the logging).
I am surprised more people haven’t run into the problem yet (maybe they don’t have as many sensors, or they have a lot of RAM).