[Solved] Moved over to docker, all well except huge memory leak

I’m still encountering these memory leaks, though they seem to appear at random.

I investigated the issue a bit. From what I can see, the container’s logging (in my case a JSON file) might be the problem. Sadly I couldn’t open the log file (6GB) to see what’s in it, but I think it holds all the console output from HASS. Try disabling it, or set a maximum limit on the file size.
Depending on your approach, a container/PC restart resets the log, which gives the impression of randomness. The container may also crash when RAM fills up, delete the log, and restart by itself (if you enabled that option).

The standard Docker setup logs any console output from the container to a JSON file. For a long-running program like HA that also writes a lot of information to the console, that file can get very large and unwieldy, as you’ve seen. Unfortunately there’s no way to limit the amount of data that gets sent to the console from within HA without turning off logging altogether.
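If you control the Docker daemon, the same log rotation can also be set globally instead of per container. A hedged sketch (the path is Docker’s standard daemon config location; note this affects all containers, and existing containers keep their old settings until recreated):

```shell
# Set default json-file log rotation for all new containers,
# then restart the daemon so the change takes effect.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker
```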

It might be a good idea to mention this in the official HASS Docker installation documentation (along with how to disable or limit logging).
I’m surprised more people haven’t run into this problem yet (maybe they don’t have as many sensors, or they have plenty of RAM).

Docker container has been up and running for two days and still no memory leak! :grinning: Keeping my fingers crossed! :crossed_fingers:

I use this docker run command right now:

docker run -d --log-opt max-size=10m --log-opt max-file=3 --restart=unless-stopped --name="home-assistant" -v /docker/hass/config:/config -v /etc/localtime:/etc/localtime:ro --net=host homeassistant/home-assistant

Look at --log-opt max-size=10m --log-opt max-file=3 to keep the logging under control.

max-size
The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k, m, or g). Defaults to -1 (unlimited).
--log-opt max-size=10m

max-file
The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when max-size is also set. A positive integer. Defaults to 1.
--log-opt max-file=3


This is probably my issue: Issue #9352.

The memory only starts leaking when I click to open my Synology camera feed(s) on the frontend…

So was the logging part really the cause of your memory leak?

I checked my rpi2’s Docker logs and found that the HA container has produced 2GB of logs so far. But wasted disk space is not the same as a memory leak.
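If anyone else wants to check how much disk their container logs are eating, something like this works (assuming the default Docker data root `/var/lib/docker`; root access is needed to read it):

```shell
# List the on-disk size of each container's json log file, smallest first.
sudo sh -c 'du -h /var/lib/docker/containers/*/*-json.log | sort -h'
```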

Thanks for pointing out the --log-opt part, though. I’m a bit surprised that the default setting seems to be to keep everything forever.

Yes, the logging was the problem. The log-opt settings are a must for larger configurations.

This was indeed also my problem. But in conjunction with the docker logging mechanism.


I’m guessing most people using Docker don’t leave containers running for long periods without updating. For example, every time I update HA (every other week, following their release schedule), a fresh container is created and the old one destroyed. So at most I have two weeks of Docker logs, which isn’t very much unless something is spamming errors or logging is set to a detailed level.

I’ve just moved to a Docker container and I’m having this issue. I’ve already applied the log-limiting options listed above, and the only camera I’m using is the one on my Skybell. After a few hours, container memory shows around a GB. The next day it’s over 2GB. This morning it was sitting just over 3GB. It’s been running a few days…

Does that command persist across stops/starts of the container in the Synology GUI? I was wondering if the options could be added as environment variables in the UI. Could you check whether they are shown?

They don’t appear in the environment section that I can see. I’m still dealing with this even though I start the container with the full command any time I stop it. I’ll have to dig into this further, as it’s the only issue I have left with HA. When it ran on the RPi, I had to restart it every day or two to keep it stable. Now I can run a week or more. I restart it to do updates and changes, so I really don’t know how long it would remain stable. It gets up to about 3.5GB of RAM usage and seems to stay there…
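One way to verify whether the log limits actually stuck: log options are fixed when a container is created, so a plain stop/start keeps them, but recreating the container (which a GUI may do behind the scenes) can drop them. You can inspect the running container to check (container name assumed to be `home-assistant`, as in the run command above):

```shell
# Print the log driver and options the container was created with.
docker inspect --format '{{json .HostConfig.LogConfig}}' home-assistant
```

If the limits are applied, the output should include `"max-size":"10m"` and `"max-file":"3"` in the `Config` field; an empty `Config` means the defaults (unlimited) are in effect.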


I have the same problem. Did you solve it?
Hass.io in Docker
Raspberry Pi 3+
Thanks

Back then I used this solution. Nowadays I use Proxmox on a NUC with HassOS as a VM.

I’m experiencing the same issue as well.

I installed Home Assistant in Docker via IOTstack, so I’m not using a docker run command to start Home Assistant. How do I change the log options? Can I use Portainer?

Sure. A screenshot from Portainer.
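Since IOTstack generates a docker-compose file, the same limits can also be set there instead of through Portainer. A sketch of the relevant compose fragment (the service name, image, and volume paths are assumptions; match them to what IOTstack generated for you):

```yaml
services:
  home_assistant:
    image: homeassistant/home-assistant
    restart: unless-stopped
    network_mode: host
    volumes:
      - /docker/hass/config:/config
      - /etc/localtime:/etc/localtime:ro
    # Equivalent of --log-opt max-size=10m --log-opt max-file=3
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

Run `docker-compose up -d` afterwards so the container is recreated with the new logging settings.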


Thank you very much. I’ll give it a shot and see if it fixes the memory leak. Fingers crossed

Tried this (see screenshot), and it seemed to be working. But after 20-22 hours the memory leak occurred again :frowning:

I really can’t seem to find the root cause of this… Until then, I’ll try limiting the memory resources for the HA container, in the hope that a leak won’t be able to bring the whole system down:
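A sketch of the run command with a hard memory cap added (the 1g value is an assumption, adjust it to your hardware; setting `--memory-swap` equal to `--memory` also prevents the container from spilling into swap):

```shell
# Same run command as earlier in the thread, plus a hard RAM limit.
# If HA hits the limit, the container is OOM-killed and --restart brings it back.
docker run -d --name="home-assistant" \
  --memory=1g --memory-swap=1g \
  --log-opt max-size=10m --log-opt max-file=3 \
  --restart=unless-stopped \
  -v /docker/hass/config:/config \
  -v /etc/localtime:/etc/localtime:ro \
  --net=host homeassistant/home-assistant
```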

When deploying the container I end up with a warning that Docker “can’t read property length” (and some others). Would you mind helping me, please?
