Running out of space due to /var/log

I have a Synology DS920+ running a virtual machine for Home Assistant.

I adjusted the size of my virtual machine to 50 GB while I figure this out.
I narrowed the culprit down to the /var/log folder.
[Image: /var folder stats]

I checked my InfluxDB sizes and they seem okay.
[Image: InfluxDB sizes]

The home-assistant_v2.db is 300 MB, which is normal for me.

What logs are being sent to the /var/log folder?
How can I figure out what’s filling it up?
How can I prevent it from happening?
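
A quick way to answer the second question is to measure it directly. A minimal sketch, assuming a root shell on the host itself (not the SSH add-on container) and that journalctl is available there:

    # Summarize the size of everything under /var/log
    du -sh /var/log/*
    # Ask journald itself how much space it is using
    journalctl --disk-usage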

Looks like the systemd journal is taking up the space. The bottom answer from the link below should be able to remediate it.
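
For reference, on a stock systemd system that remediation usually means capping the journal in journald.conf. A minimal sketch, assuming the standard /etc/systemd/journald.conf path exists (which, as the posts below show, is not the case inside the SSH add-on container):

    [Journal]
    # Cap the total disk space persistent journals may use
    SystemMaxUse=100M
    # Optionally cap the size of each individual journal file too
    SystemMaxFileSize=20M

After editing, restarting journald with systemctl restart systemd-journald applies the limits (again, only where systemd is actually present).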

When I try to run that from the community SSH add-on on Home Assistant, there is no /etc/systemd directory.
I then went onto my Synology via SSH and that location doesn’t exist there either.
What am I doing wrong?

I don’t run HA OS, so I hope someone else can help you locate the config. The link below provides some possible locations.

Understanding Systemd Units and Unit Files | DigitalOcean

Thanks for the suggestions, MarcinL, but they don’t help. Knowing the location of the config files isn’t really necessary, as one can just run journalctl --vacuum-size=100M and add it to crontab to regularly trim the journal.
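
For anyone who wants to automate that, a rough sketch of such a crontab entry (assuming journalctl is available on the host and the crontab runs as root; the schedule is just an example):

    # Trim the systemd journal down to roughly 100 MB every night at 03:00
    0 3 * * * journalctl --vacuum-size=100M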

The problem is that all of this refers to systemd, but from what I can see, HAOS does not run systemd but something else (it is based on Alpine Linux). This means neither journalctl nor its config files are present in the system. Meanwhile, /var/log/journal/ swells like a motherfkr.
The problem is not a file size limit, because the largest file is 128 MB. But there are many of them, they keep multiplying, and none of them seem to get deleted even though their last-modification dates keep advancing.

Same here. Did you figure out how to clean that out?

I have the same problem and my system keeps crashing now. How do you solve this permanently?

Ditto. How do we reduce/limit the clutter? I only started building my system a couple of weeks ago and it is already using over 2 GB.

I’m also seeing the same issue and would like to reduce the space used by the journal system on HASSOS. Right now I have 8 GB used by the journal, and that is way too much for something that I don’t even use.

Anyone?

If you try to delete these, you get an error that it’s a read-only filesystem…

Mine is 7 GB in size, and it’d be great to shrink this thing.

Cheers,

Found a solution that worked for me.

My problem was that I kept running this inside SSH, which is actually the SSH add-on Docker container. That’s why I couldn’t run the journalctl --rotate and journalctl --vacuum-size=100M commands. I also couldn’t delete the /var/log/journal files manually (error: read-only file system).

When I manually logged onto the physical HAOS machine, I could run the journalctl commands and managed to free the disk space I wanted!
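
For anyone following along, the commands in question are roughly these, run from the HAOS host console (or the developer SSH on port 22222) rather than from the SSH add-on container:

    # Rotate the active journal files, then trim the archived ones to ~100 MB total
    journalctl --rotate
    journalctl --vacuum-size=100M
    # Alternatively, drop anything older than a week
    journalctl --vacuum-time=7d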

Hope this helps someone :)


Hi.
I am having a similar issue, but the files are located in the folder
/var/log.hdd/journal/4d4f6d7f9df948e6afd380c743b4ba53
and there are more than 70,000 files in it, totalling over 24 GB.


Can anyone please help me?
Thanks.
Miguel

It is probably backups - log in to your HA instance and see how many backups you have.
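
If you would rather check from a shell than the UI, a rough sketch (assumptions: the ha CLI with a backups subcommand is available, e.g. in the SSH add-on, and backups are mounted at /backup):

    # List existing backups via the Home Assistant CLI (assumed subcommand)
    ha backups list
    # Or look at the backup files directly
    ls -lh /backup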