Failed backups can leave large temporary folders behind in the supervisor data directory (/mnt/data/supervisor/tmp). These files can take up a lot of disk space and are difficult to remove, as doing so requires access to the host, e.g. via a console session or SSH on port 22222. While possible, it takes some effort, and I found I had 60GB of rubbish that needed removing. An automated process to purge these tmp files, or one that runs occasionally, would help. Some of my files were over a year old.
Has anyone ever found a good solution to this? Can the files/folders be safely deleted over SSH?
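In the meantime, a minimal sketch of what such a purge could look like, run by hand from the port 22222 host shell (untested; the 7-day age threshold is arbitrary, and it assumes the host's BusyBox find supports these options):
# find /mnt/data/supervisor/tmp -mindepth 1 -maxdepth 1 -name 'tmp*' -mtime +7 -exec rm -rf {} \;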
Hello, it seems I have the same issue as you. Does anyone have any troubleshooting advice for this? My filesystem is at nearly 100% used space.
I found this difficult, hence the request. My solution was to SSH in on port 22222, which requires setting up keys etc. Unfortunately I forget the details of how I did it. The developer debug page (Debugging the Home Assistant Operating System | Home Assistant Developer Docs) may help, but note it comes with clear warnings that it is not for end users. There are other hints scattered through the community posts. It took me many attempts to gain access. Given the speed of HA development there may be easier ways now.
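Roughly, from memory (check the developer docs above for the exact current steps; the file names and paths here are just examples): generate a key pair on your desktop, put the public key into a file named authorized_keys in the root of a USB stick labelled CONFIG, then import it from the host/system page in the UI or reboot with the stick inserted.
ssh-keygen -t ed25519 -f ~/.ssh/haos_debug
cp ~/.ssh/haos_debug.pub /path/to/CONFIG-usb/authorized_keys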
Thank you for your answer. I’ve tried SSH on port 22222 following the guide on the developer site, but I think I’m not skilled enough to do this. Maybe I’ll try again another time when I have a few hours free…
Follow the instructions here to enable SSH access to your Home Assistant OS host on port 22222:
Then, once you’re able to log into your system using a command similar to:
ssh root@homeassistant -p 22222
You’ll be greeted with the following:
user@host ~ % ssh root@homeassistant -p 22222
Welcome to Home Assistant OS.
Use `ha` to access the Home Assistant CLI.
#
You can then move into the appropriate directory by issuing the following command(s):
# cd /mnt/data/supervisor/tmp
Optionally:
# ls -la
To see the files listed there.
You can then use ‘rm’ to remove all of the files that start with tmp (these are all safe to delete):
# rm -rf tmp*
Type ‘exit’ to end your session
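If you want to see how much space those leftovers are using before you delete them, and confirm what you got back afterwards, something along these lines should work from the same shell (assuming the host's BusyBox du/df support these options):
# du -sh /mnt/data/supervisor/tmp
# df -h /mnt/data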
For those running in QEMU, you can mount the disk image that your virtual machine runs from and clean it up there manually. See How to find out what's taking space - #2 by kotrfa
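A rough sketch of that, assuming a qcow2 image, a shut-down VM, and the standard HAOS layout where the data partition is the 8th one (the image path is an example; double-check the partition with fdisk -l /dev/nbd0 before mounting):
sudo modprobe nbd
sudo qemu-nbd --connect=/dev/nbd0 /path/to/haos_ova.qcow2
sudo mount /dev/nbd0p8 /mnt
sudo rm -rf /mnt/supervisor/tmp/tmp*
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0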
Note that I wiped everything in tmp via rm -rf *, and Home Assistant didn’t start because the container couldn’t bind: “bind source path does not exist: /mnt/data/supervisor/tmp/homeassistant_pulse”. I had to manually add this file back in that place to make it work (maybe an empty file would work too?):
# This file is part of PulseAudio.
#
# PulseAudio is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# PulseAudio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with PulseAudio; if not, see <http://www.gnu.org/licenses/>.
## Configuration file for PulseAudio clients. See pulse-client.conf(5) for
## more information. Default values are commented out. Use either ; or # for
## commenting.
default-server = unix://run/audio/pulse.sock
; default-dbus-server =
autospawn = no
; daemon-binary = /usr/bin/pulseaudio
; extra-arguments = --log-target=syslog
; cookie-file =
; enable-shm = yes
; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
; auto-connect-localhost = no
; auto-connect-display = no
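If you hit the same error, recreating that one file by hand at the path from the error message was enough to get things starting again; something like this from the host shell (I haven’t tested whether an empty file would also do):
# vi /mnt/data/supervisor/tmp/homeassistant_pulse
then paste the contents above, save, and restart Home Assistant.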
Hi.
A late reply, but one solution:
SSH in, then cd /backup and rm *.tar.
There can be dangling/orphaned tar files which never got registered in the backup list.
The easy way to do this is to install the SSH add-on below and turn off ‘Protection mode’ on its configuration page.
(Leaving this reply for the record, for the rest of the world and anyone in the future chasing this in HA.)
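In other words, from the SSH add-on’s shell (with Protection mode off), something like the following; note that this removes every .tar in the folder, registered backups included, so copy off anything you want to keep first:
cd /backup
ls -la
rm ./*.tar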
Hi everyone,
I need to revive this topic.
I am on:
Core: 2025.4.2
Supervisor: 2025.04.0
Operating System: 15.2
Frontend: 20250411.0
Because my internal HDD is at its limit, I wanted to migrate to a larger one.
In preparation I tried a backup (to OneDrive), but it failed.
Usually I have about 10 GB of free space, but since the backup failed the disk is completely full.
I checked the /backup folder and deleted the *.tar files, but there must be something else (like a temp file, I guess).
Is there any other location I need to check?
(In the past I just did a reboot and the temp files were apparently cleaned up.)
Thanks in advance.
EDIT:
Sorry guys, but I am lost as to how to find the problem:
➜ / df -h *
Filesystem   Size   Used   Available  Use%  Mounted on
/dev/sda8    58.0G  56.1G  0          100%  /share
/dev/sda8    58.0G  56.1G  0          100%  /share
/dev/sda8    58.0G  56.1G  0          100%  /share
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
/dev/sda8    58.0G  56.1G  0          100%  /share
/dev/sda8    58.0G  56.1G  0          100%  /share
devtmpfs     3.7G   0      3.7G       0%    /dev
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
/dev/sda8    58.0G  56.1G  0          100%  /share
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
/dev/sda8    58.0G  56.1G  0          100%  /share
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
proc         0      0      0          0%    /proc
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
/dev/sda8    58.0G  56.1G  0          100%  /share
overlay      58.0G  56.1G  0          100%  /
/dev/sda8    58.0G  56.1G  0          100%  /share
sysfs        0      0      0          0%    /sys
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
overlay      58.0G  56.1G  0          100%  /
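df only shows that the data partition is full; to see which directory is actually using the space, something like this from the port 22222 host shell described earlier in the thread should narrow it down (assuming the host's BusyBox du supports the -d option):
# du -h -d 1 /mnt/data/supervisor
# du -sh /mnt/data/supervisor/tmp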
I tried and failed to access these files here. My solution was to create another VM in Proxmox and restore a backup. The onboarding screen was stuck on “restoring, please wait and don’t refresh the page”, but after opening it in another tab everything was working.
That saved about 12GB on a 32GB storage for a couple-of-years-old HA instance. An easier way to access and delete these files would be welcome.