Disk space being eaten up

Something is eating up my disk space at an accelerating pace.

Going by the process of elimination, here is my setup and the possible causes I can already exclude:

  • I’m running Home Assistant Operating System on an 8GB Raspberry Pi 4 with a 32GB SD card
  • The Home Assistant database is not local; it lives on a NAS
  • The spikes are caused by automated backups. The massive drop on 20th Oct came from changing the number of local backups from 3 to 1. Interestingly, the spikes got smaller after that, so some recursive backing-up seems to be going on…
  • I only keep one backup locally; the rest are uploaded to Google Drive (see below for how to check what’s actually stored locally)
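
To double-check which backups are actually stored locally, something along these lines should work from the SSH add-on; the ha backups call is from memory, so treat its exact syntax as an assumption and check ha backups --help first:

ha backups                 # list the backups the Supervisor knows about
ls -lh /backup             # backup archives visible to the SSH add-on
du -sh /backup             # total space taken by local backups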

The Home Assistant file system doesn’t seem to have any large files or issues. It only takes up 1.1GB:

[core-ssh /]$ du -sh /
1.1G	/
[core-ssh /]$ du -sh /*
4.0K	/addons
4.0K	/backup
1.2M	/bin
52.0K	/command
965.1M	/config
84.0K	/data
0	/dev
1.9M	/etc
4.0K	/home
4.0K	/init
3.3M	/lib
12.0K	/media
4.0K	/mnt
4.0K	/opt
7.5M	/package
0	/proc
8.0K	/root
544.0K	/run
60.0K	/sbin
56.7M	/share
4.0K	/srv
36.0K	/ssl
0	/sys
28.0K	/tmp
139.0M	/usr
84.0K	/var

Then again, df -h shows that 17.1GB is used:

[core-ssh /]$ df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  28.6G     17.1G     10.0G  63% /
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /share
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /data
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /addons
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /backup
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /media
devtmpfs                  1.8G         0      1.8G   0% /dev
tmpfs                     1.9G         0      1.9G   0% /dev/shm
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /config
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /ssl
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /run/audio
tmpfs                   767.5M      1.7M    765.7M   0% /run/dbus
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /etc/asound.conf
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /etc/hosts
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /etc/resolv.conf
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /etc/hostname
tmpfs                     1.9G         0      1.9G   0% /dev/shm
/dev/mmcblk0p8           28.6G     17.1G     10.0G  63% /etc/pulse/client.conf
tmpfs                     1.9G         0      1.9G   0% /proc/asound
devtmpfs                  1.8G         0      1.8G   0% /proc/keys
devtmpfs                  1.8G         0      1.8G   0% /proc/latency_stats
devtmpfs                  1.8G         0      1.8G   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /sys/firmware

So whatever is eating the space is probably happening somewhere between the host and the HASS OS containers, outside of what I can see from this shell, but I’m not sure how to dig deeper.
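
My guess is that the du above only counts this container’s own files, while df reports the whole /dev/mmcblk0p8 data partition, which is shared with the Supervisor and every other container. From the HAOS host shell (physical console, or the developer SSH on port 22222 if you’ve set it up), something like the following should show the breakdown; I haven’t verified the exact output on HAOS, so take it as a sketch:

docker system df           # space used by images, containers and volumes overall
docker ps --size           # writable-layer size of each running container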

What does

top b n 1

say about the running processes and the resources they use?

[core-ssh ~]$ top b n 1
Mem: 3665920K used, 263556K free, 64K shrd, 50968K buff, 2202816K cached
CPU:   2% usr   2% sys   0% nic  95% idle   0% io   0% irq   0% sirq
Load average: 0.34 0.53 0.53 2/879 366
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
  139   136 root     S    15988   0%   0   0% ttyd -p 8099 tmux -u new -A -s homeassistant bash -l
  357   138 root     S     3572   0%   2   0% sshd: root@pts/4
  138   135 root     S     3240   0%   2   0% sshd: /usr/sbin/sshd -D -e [listener] 0 of 10-100 startups
  230   229 root     S     2540   0%   2   0% bash -l
  359   357 root     S     2540   0%   0   0% -bash
  229     1 root     S     2108   0%   3   0% {tmux: server} tmux -u new -A -s homeassistant bash -l
  366   359 root     R     1332   0%   1   0% top b n 1
  135     1 root     S      208   0%   0   0% s6-supervise sshd
  136     1 root     S      208   0%   2   0% s6-supervise ttyd
   23     1 root     S      208   0%   2   0% s6-supervise s6rc-fdholder
   16     1 root     S      208   0%   1   0% s6-supervise s6-linux-init-shutdownd
   24     1 root     S      208   0%   1   0% s6-supervise s6rc-oneshot-runner
    1     0 root     S      204   0%   1   0% /package/admin/s6/command/s6-svscan -d4 -- /run/service
   18    16 root     S      196   0%   2   0% /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B
   31    24 root     S      180   0%   2   0% /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --

push
I have the same issue - starting at nearly the same time.

I removed local storage of backups by the Home Assistant Google Drive Backup add-on, and disk usage seems to have settled back to around 35%.

I now keep 0 local backups.

Darn, I tried so many things that something else might well have been the actual fix. Leaving them here so you can look into these as well:

  • Ran ha su repair
  • Updated HA Core & HA OS
  • Home Assistant Google Drive Backup settings:
    • I checked, and I store 1 backup in Home Assistant (the backup weighs 1GB)
    • I selected the “Delete Oldest Backup Before Making a New One” option, with “Delete Backups After Upload”, “Ignore Other Backups” and “Ignore ‘Upgrade’ Backups” unchecked
    • I checked “Keep Generational Backups”. Probably unrelated, but worth mentioning.
    • I unchecked “Partial backups”
    • I ran rm -rf /root/backup/*. This was when the big step-down happened. There was a lot of trash in that folder (see the commands after this list for how to inspect it first). I think it was doing some sort of recursive backup, where each backup backs up what the previous backup had already backed up (try saying that quickly 3 times in a row :sweat_smile:)
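
If you want to see what’s sitting in that folder before wiping it, something like this should do; these are plain BusyBox/coreutils commands, just adjust the path to wherever your stray backups are:

du -sh /root/backup                 # total size of the leftover backup folder
ls -lhS /root/backup | head -n 20   # largest files first
rm -rf /root/backup/*               # only once you’re sure nothing in there is needed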

Anyway, my year’s worth of disk-use chart looks like this now:

Hope this helps.

Tried all of those - my solution was: full backup, flash the SSD, restore the backup :man_shrugging:

For me, a 128GB SD card was filling up, but there were no big files in the container.

I had to use the community web SSH terminal by Frenck in HACS.

With it, you can log in over SSH and then connect to the Supervisor Docker container. Disable protection mode in the add-on settings first.

docker ps (to list the containers)
Look for the container ID of the Supervisor container.

docker exec -it CONTAINERID /bin/bash (log in to the container)

du -a / | sort -n -r | head -n 20 (find big files)

In my case, there were redundant backup files in /mount/data.
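
Put together, the whole sequence looks roughly like this; I’m assuming the Supervisor container is the one named hassio_supervisor (that’s the usual name, but confirm it in your docker ps output):

docker ps                                    # find the Supervisor container ID / name
docker exec -it hassio_supervisor /bin/bash  # open a shell inside it
du -a / | sort -n -r | head -n 20            # list the 20 biggest files and directories
du -sh /mount/data                           # in my case the stray backups were here
exit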

You’re a star, I had 189GB of trash I couldn’t get rid of.

Your command did the trick:

rm -rf /root/backup/*

Did the trick for me as well on a NUC!