I’ve been playing a bit with a .qcow2 image in KVM to “eventually” replace my old “core in Docker” setup. But I’ve noticed that it tends to use a lot of storage, and I can’t figure out why.
I have a pretty minimal setup and only a few add-ons (since I’m just testing, everything still runs in the old Docker setup).
I recently resized it from 20 GB → 30 GB, but a few months later I could no longer upgrade Hass because there was no storage left. So I just added another 10 GB. But I can’t figure out what’s taking up all the storage: I only have a single (~1.1 GB) backup, and the database is about 1.5 MB.
In the HassOS VM I only have a subset of the things (add-ons, integrations, etc.) that I use in the Docker setup, and while the Docker persistent storage is only 2.3 GB, the VM uses far more than that.
Indeed, home-assistant.log is 22 GB and home-assistant.log.1 is 142 MB!
Do I need to manually set e.g.:
system_log:
  max_entries: 50
in configuration.yaml? I thought this was only something to change if you wanted more or less than the default 50 (as described in the docs)? And I haven’t changed this manually…
Make sure you are not treating the symptom instead of the disease.
Check that nothing is spamming the logs, and configure the logger to error instead of warning.
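Something like this in configuration.yaml, for example (just a sketch; the integration name under logs: is a placeholder for whatever is actually flooding your log):

# Raise the global log level, then silence a particularly chatty integration even further.
logger:
  default: error
  logs:
    # placeholder: replace with whatever integration is spamming your log
    homeassistant.components.mqtt: critical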
Good points, I’ve changed it from debug to warning. But how do I make sure the log doesn’t grow out of hand, regardless of how many “things” log? Do I have to set up a system_log.clear automation, something like the sketch below?
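This is only a sketch of what I had in mind; the nightly trigger time is just something I made up:

automation:
  - alias: "Clear system_log nightly"
    trigger:
      - platform: time
        at: "03:00:00"   # arbitrary time, just for illustration
    action:
      - service: system_log.clear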
Also, I thought it only kept the last 50 entries, as described in the documentation. Or did I misunderstand that? Is system_log not related to logger?
But still, why did it end up being tens of GB in the VM without anything ever being deleted? Without solving that, it would probably still grow out of hand, just much, MUCH slower.