[Solved] Moved over to docker, all well except huge memory leak

Docker container is up-and-running for two days and still no memory leak! :grinning: Keep my fingers crossed! :crossed_fingers:

I use this docker run command right now:

docker run -d --log-opt max-size=10m --log-opt max-file=3 --restart=unless-stopped --name="home-assistant" -v /docker/hass/config:/config -v /etc/localtime:/etc/localtime:ro --net=host homeassistant/home-assistant

Look at --log-opt max-size=10m --log-opt max-file=3 to keep the logging under control.

max-size
The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k, m, or g). Defaults to -1 (unlimited).
--log-opt max-size=10m

max-file
The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when max-size is also set. A positive integer. Defaults to 1.
--log-opt max-file=3
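If you don't want to repeat these flags on every docker run, the same limits can also be set as a daemon-wide default in /etc/docker/daemon.json. A minimal sketch; note this only affects containers created after the Docker daemon is restarted, existing containers keep their old logging config until they are recreated:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}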


This is probably my issue: Issue #9352.

The memory only starts leaking when I click (to open) my Synology camera feed(s) in the frontend…

So was the logging part really the cause of your memory leak?

I checked my RPi2 Docker logs and found that the HA container has produced 2GB of logs so far. But wasted disk space does not equal a memory leak.
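If anyone wants to check their own, the size of a container's json-file log can be found with something like this (container name assumed to be home-assistant):

docker inspect --format='{{.LogPath}}' home-assistant
sudo du -h "$(docker inspect --format='{{.LogPath}}' home-assistant)"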

Thanks for pointing out the --log-opt part, though. I’m a bit surprised that the default setting seems to be to keep everything forever.

Yes, the logging was the problem. The log-opt settings are a must for larger configurations.

This was indeed also my problem, in conjunction with the Docker logging mechanism.


I’m guessing most people using Docker are not leaving containers running for long periods of time without updating. For example, every time I update HA (which is every other week, following their release schedule), a fresh container is made and the old one destroyed. So at most I have two weeks of Docker logs (which, if you don’t have something spamming an error or logs set to detail, isn’t very much).
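For reference, that update cycle is roughly this, using the run command from earlier in the thread as an example (your flags may differ):

docker pull homeassistant/home-assistant
docker stop home-assistant
docker rm home-assistant
docker run -d --log-opt max-size=10m --log-opt max-file=3 --restart=unless-stopped --name="home-assistant" -v /docker/hass/config:/config -v /etc/localtime:/etc/localtime:ro --net=host homeassistant/home-assistant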

I’ve just moved to a docker container and I’m having this issue. I’ve already used the log limiting lines listed above and the only camera I’m using is the one on my Skybell. After a few hours, container memory is showing around a GB. Next day it’s over 2GB. This morning it was sitting just over 3. Been running a few days…

Does that command hold across stop/starts of the container within the Synology GUI? I was wondering if they could be added as environment variables within the UI? Could you check to see if they are shown?

They don’t appear in the environment section that I can see. I’m still dealing with this even though I start the container with the full command any time I stop it. Going to have to dig into this further, as it’s the only issue I have now with HA. When it ran on the RPi, I had to restart it every day or two to keep it stable. Now I can run a week or more. I restart it to do updates and changes, so I really don’t know how long it will remain stable. I get up to 3.5GB RAM usage or so and it seems to stay there…


I have the same problem. Did you solve it?
Hass.io in Docker
Raspberry Pi 3+
Thanks

Back then I used this solution. Nowadays I use Proxmox on a NUC with HassOS as a VM.

I’m experiencing the same issue as well.

I installed Home Assistant in Docker via IOTstack, so I’m not using a docker run command to start Home Assistant. How do I change the log options? Can I use Portainer?

Sure. A screenshot from Portainer.
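If you’d rather keep it in the compose file that IOTstack uses (so it survives re-deploys), the logging options can also go under the Home Assistant service. A rough sketch, with the service name and volume path assumed:

version: "3"
services:
  home_assistant:
    image: homeassistant/home-assistant
    network_mode: host
    restart: unless-stopped
    volumes:
      - /docker/hass/config:/config
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: "3"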


Thank you very much. I’ll give it a shot and see if it fixes the memory leak. Fingers crossed

Tried this


And it seemed to be working. But after 20-22 hours the memory leak occurred again :frowning:

I really can’t seem to find the root cause of this… Until then, I will try limiting the memory resources for the HA container. That way I hope it won’t be able to bring the whole system down:
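On the command line that would be something along these lines, with an assumed 1GB cap (the actual value is up to you); setting --memory-swap equal to --memory keeps the container from simply spilling over into swap:

docker run -d --memory=1g --memory-swap=1g --log-opt max-size=10m --log-opt max-file=3 --restart=unless-stopped --name="home-assistant" -v /docker/hass/config:/config -v /etc/localtime:/etc/localtime:ro --net=host homeassistant/home-assistant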

I end up with a warning when deploying the container that Docker can’t read property ‘length’, and some others. Would you mind helping me, please?


This is solved by installing Portainer not as a Hass.io add-on. However, if I change the memory setting and deploy the container, the settings are again set without limits. Any idea how to prevent that?

Please provide more details regarding your installation method and which version you are using.

Something like this should be sufficient (sudo is your choice):

sudo docker run --name Home-Assistant --network=host --privileged --log-driver json-file --log-opt max-size=5m --log-opt max-file=3 --restart always -v /folder/:/config -e TZ=Europe/Amsterdam homeassistant/home-assistant:0.98

or

sudo docker run --name Home-Assistant -p 8129:8123 --log-driver json-file --log-opt max-size=5m --log-opt max-file=3 --restart always -v /folder:/config -e TZ=Europe/Amsterdam homeassistant/home-assistant:0.98.0

I am currently running supervised Home Assistant in Docker on Raspbian (Raspberry Pi 4, 4GB). I wanted to limit RAM as shown by DIY-techie in the last picture.
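One way to do that without recreating the container is docker update; a sketch with an assumed 1GB limit (check docker ps for the actual container name, and note the Supervisor may recreate the container on updates, which would drop the limit again):

sudo docker update --memory 1g --memory-swap 1g homeassistant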