Installing Home Assistant OS using Proxmox 8

UPDATE August 7
See my later post clarifying that this is due to a folder of log files growing, and that the size of this folder is part of HA design and has a maximum size built in (depending on the size of your allocated VM drive): Installing Home Assistant using Proxmox - #418 by rob1303

Not sure if this is the right thread to post this on, or whether it would make sense to start a new one. I will try here first.

I create an overnight backup of the Home Assistant VM, and it has grown by 5.5 GB over the past 4 weeks.

Does anyone have any ideas why my backup of the Home Assistant VM grows every day, when the MariaDB backups and HA snapshots do not?

For reference, here are the sizes of each over the past 4 weeks:

Date         HA VM Backup (GB)   MariaDB (GB)   Snapshots in HA (GB)
3/08/2021    22.2                0.79           0.65
2/08/2021    22                  0.77           0.65
1/08/2021    21.6                0.77           0.65
31/07/2021   21.3                0.75           0.65
30/07/2021   20.7                0.77           0.65
24/07/2021   20                  0.94           0.6
23/07/2021   19.5                0.95           0.6
18/07/2021   19.03               0.95           0.6
17/07/2021   18.76               0.89           0.6
15/07/2021   18.03               0.95           0.6
12/07/2021   17.71               0.92           0.6
8/07/2021    16.73               1.4            0.6

Add-ons installed in the HA VM are:

  • Check Home Assistant configuration
  • HA Google Drive Backup
  • Mosquitto
  • deCONZ
  • ESPHome
  • File Editor
  • MariaDB
  • Node-RED (only 2 flows)
  • SSH & Web Terminal
  • Samba Share
  • TasmoAdmin
  • TasmoBackup
  • phpMyAdmin

Proxmox, HA and Supervisor are all up to date, and all add-ons are running the latest version (as at 3 August 2021).

I'm not even sure where to look to try to find out what is taking up the additional space.

Thanks for your help.

Try looking in the config/.storage folder to see if anything stands out?
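
If you have the SSH & Web Terminal add-on, one way to eyeball that folder is to list its contents by size. A minimal sketch, assuming the add-on exposes the HA config directory at /config (sizes in 1 KB blocks, so it also works with BusyBox tools):

# List everything under .storage, largest first (sizes in KB)
du -ak /config/.storage | sort -rn | head -n 20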

@tteck Thank you for the suggestion.

Unfortunately nothing strange there, but it did prompt me to try something very basic:

  • WinSCP into the VM
  • enabled "show hidden files"
  • ran a wildcard *.* search
  • sorted the results by size

That highlighted this directory (a shell equivalent of the search is sketched below):

  • /var/log/journal/xxxxxxxxxxxxxxxxxxxxxxxxx/
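
For reference, the same hunt can be done from a shell instead of WinSCP (a rough sketch; depending on whether your shell is on the host or inside an add-on container, the paths you can see will differ):

# Largest files and directories on the filesystem, biggest first (sizes in KB)
du -ak / 2>/dev/null | sort -rn | head -n 20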

On my system there are up to 4 files created in this directory each day:

  • system@xxxxxxxxxxxx - created daily - approx. 110 MB (size varies around this point)
  • user-1001@xxxxxxxxxxxxx - created daily - 8,192 KB (I assume this is me)
  • user-1003@xxxxxxxxxxxxx - created most days - 8,192 KB (I assume this is my wife, who has the phone app)
  • user-1004@xxxxxxxxxxxxx - created some days - 8,192 KB (I assume this is my eldest child, who has the phone app)

The user-1003 and user-1004 files are not created every day, so I assume these files relate to who has accessed the system or used the mobile app (I have 3 main users, some of whom do not need to access the system each day).

So now I have some questions about these log files:

  • are these files meant to be permanent?
  • should the system clear them?
    • if so what setting have I turned off?
  • are they ok to delete?

If the files are meant to be permanent, does this just mean we have to live with the VM growing by ~130 MB per day?

Those questions are above my paygrade :grimacing:
But, I wouldn’t think that’s normal.

You can check if logrotate is working:

cat /var/lib/logrotate/status 

After a set amount of time the logs should be zipped up and a new log started; then, once a set number of logs accumulate, the oldest are deleted or rotated out.

An old article, but it explains it well: How To Manage Log Files With Logrotate On Ubuntu 12.10 | DigitalOcean
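
One caveat worth adding: the binary files under /var/log/journal are written by systemd-journald, which enforces its own size limits and is not handled by logrotate. Where journalctl is available (HAOS may not expose it the same way), you can query the journal's footprint directly:

# Report how much disk space the journal currently uses
journalctl --disk-usage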

@markbajaj - thank you for your assistance.

I continued to monitor the drive and can see that older files are now being purged on a daily basis, and the folder stays at approx. 4 GB (it was still growing towards that size when I was monitoring earlier).

I understand that, by design, the log folder will grow to a maximum of 10/15% of the allocated disk space (up to a cap of 4 GB if your VM drive is over 60 GB).

See this post on github referencing this design (especially the post by balloob and clarification by mb018):

Based on that thread there doesn’t appear to be a way to reduce this folder manually.

But at least I now know that my VM backups are not going to keep growing indefinitely.

Once again - thanks for those that assisted.

The HA install using this script worked fine; however, I'm not able to restore a previously made snapshot.
I read some other posts in this thread describing the same issue but have not found a solution yet.

The snapshot was created on an RPi4, completely up to date. Restoring via the landing page, or first copying it over via SMB and restoring from within HA, didn't make a difference. I tried different snapshots.

After I hit restore I lose HTTP connectivity and have no way to track progress. The only thing I can see is that the VM is still using resources (CPU/MEM are changing); the load is high, so it seems to be doing something, but nothing appears to happen for hours… It's a 600+ MB snapshot; how long should this take?

I can still access the HA console via Proxmox, but nothing seems to have changed there. Is there any way to track progress or access troubleshooting logs?
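
One way to at least confirm the VM is alive during the restore is from the Proxmox host shell; a minimal sketch, where 100 is a placeholder VMID:

qm status 100   # is the VM still running?
qm config 100   # double-check the VM's settings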

Is it possible that the snapshot had a static IP address set that is different from the IP of the machine you are on?

Actually, yes!
On the source system (RPi) I have a static IP configured directly in HA (via Supervisor > System > Host > Change IP). I am assuming this static IP will be restored along with the snapshot. Maybe I'm wrong here?

Of course, I shut down the original Pi (still running with that same fixed IP) during the restore process to avoid IP conflicts.

What is the best approach here? First set the new instance to the same fixed IP and then restore the backup, or remove the fixed IP, go back to a dynamic IP, and create a new backup to restore?
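
For reference, either approach can be driven from the HAOS console with the ha CLI. A rough sketch only; the interface name (eth0) and addresses are placeholders, and flags can vary between CLI versions:

ha network info                             # show current interfaces and method
ha network update eth0 --ipv4-method auto   # fall back to DHCP
# or pin the static IP the snapshot expects:
ha network update eth0 --ipv4-method static --ipv4-address 192.168.1.50/24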

Oh man, you pointed me in the right direction. Thanks a lot!

I just tried the first approach and set the fixed IP of the new system to match the snapshot. While doing that I realized something very stupid…
The new system was using DHCP, receiving IP & DNS (yes…) settings from my router. The DNS, however, was pointing to AdGuard, which was running on HA itself… Without working DNS the restore process would probably not work.
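
As an aside, DNS can be inspected and pointed elsewhere from the HAOS console too; a sketch, with 1.1.1.1 as an example upstream resolver:

ha dns info                              # show the current DNS configuration
ha dns options --servers dns://1.1.1.1   # use a resolver that isn't HA itself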

Anyhow, the system is coming back to life right now :slight_smile:

So, where is the fix? I don't really understand why it keeps growing.

How can we fix it?

@RobertusIT there is no 'fix'. There is no way (that I am aware of) to change how the system saves and replaces log files.

By default, the system is set up for the log folder to grow to a maximum of 10/15% of the disk space allocated to the VM (up to a cap of 4 GB if your VM drive is over 60 GB).

Once it reaches its allocation limit, it will delete older log files as it creates new ones.
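
For anyone wondering where those numbers come from: they line up with systemd-journald's built-in defaults, which on a regular Linux system live in journald.conf. Shown for reference only; HAOS's root filesystem is read-only, so editing this there is not straightforward:

# /etc/systemd/journald.conf (commented out = built-in defaults apply)
[Journal]
#SystemMaxUse=    # default: 10% of the filesystem, capped at 4 GB
#SystemKeepFree=  # default: keep 15% of the filesystem free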

@kanga_who For the Portainer/Samba/MQTT install, do you install them onto a separate Debian/Ubuntu VM, existing HA VM, or in a container on Proxmox?

Being that you're running HAOS, you might as well take advantage of the add-on store for those.

The idea was to run them outside HAOS for more resiliency if something goes wrong with HAOS.

Then you need to run Home Assistant Container, and separate LXC containers for those.
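
For reference, Home Assistant Container is a single Docker container; the command below is close to the one in the official install docs (timezone and config path are placeholders):

docker run -d \
  --name homeassistant \
  --privileged \
  --restart=unless-stopped \
  -e TZ=Etc/UTC \
  -v /opt/homeassistant:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable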

(Screenshot of the Proxmox setup, 2021-08-18)

Great, ty! This looks pretty close to the setup I want, except having HAOS and then having z2m and MQTT outside it, plumbed into the VM. Is that possible?
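
It should be: the broker only needs to be reachable over the network. At the time of this thread, HA could be pointed at an external broker via configuration.yaml; a minimal sketch, with a placeholder address for a Mosquitto LXC:

# configuration.yaml on the HAOS VM
mqtt:
  broker: 192.168.1.20   # placeholder: the Mosquitto LXC's IP
  username: ha
  password: !secret mqtt_password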

This is awesome! Thank you so much! I get my NUC on Friday and am looking forward to getting it set up and then migrating off my old VM solution.
