If your HAOS VM is offline/stopped and you back up the VM from Proxmox, it is definitely not taking the NAS data, as the machine is offline and Proxmox has no way to read and mount the share from a stopped VM.
What I am thinking is that the problem is you started Frigate before you set up the network mount, and you have some recordings left there.
Makes sense. I tried your recommendation; my backup is now 90GB even though the VM is shut down and I backed up as Stopped as opposed to Snapshot. I was wondering if those Frigate files are parked somewhere. I can definitely confirm that they are on the NAS, as I accessed the files whilst the Home Assistant VM was shut down. Is there a shell or CLI command that will allow me to harmlessly unmount the NAS, and then re-mount it after I back up, without breaking anything?
Try removing the mount command from fstab. You can do it by running my script: press Enter on every question, but on the question about sda, write "sda" and it then stops processing. You can go to another console, run "sudo mc", then navigate to tmp sda3/sda5 and change the mount location to a different folder, any folder, so it is not the one used in media in Home Assistant. Then boot into HAOS, look in media, and delete everything.
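A rough sketch of a harmless unmount/remount cycle from the HAOS host console; the share and mount point below are placeholders I made up, not values from this thread, so substitute your own:

```shell
# Sketch only - SHARE and MOUNTPOINT are example values.
SHARE="//192.168.1.10/frigate"                  # hypothetical NAS export
MOUNTPOINT="/mnt/data/supervisor/media/frigate" # hypothetical mount point

# 1. Unmount the share so the backup skips the NAS data
#    (do this while Frigate is stopped):
#      umount "$MOUNTPOINT"

# 2. Comment out the matching fstab line so a reboot mid-backup does not
#    remount it; edit a scratch copy first and inspect it before moving it back:
cp /etc/fstab /tmp/fstab.new
sed -i "\\|$SHARE|s|^|#|" /tmp/fstab.new
# then: mv /tmp/fstab.new /etc/fstab

# 3. After the backup, drop the leading '#' again and remount everything:
#      sed -i "\\|$SHARE|s|^#||" /etc/fstab && mount -a
```

The same sed address pattern toggles the comment marker in both directions, so nothing in the file needs to be retyped.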
I have 3TB of Frigate recordings on my NAS, and the uncompressed backup is 32GB.
My Home Assistant drive has been constantly 100% full. Before Frigate, my Home Assistant storage was always around 20% for the last year. It definitely has something to do with the Frigate mount, which I cannot figure out. Is there a way to find out where the files are accumulating? I deleted all the Frigate files from the NAS, but Home Assistant is still 100% full.
I know for sure the mount is writing to the NAS, but at the same time I think it is creating files on the Home Assistant drive too, and I don't know where or why. I have uninstalled Frigate but don't know where to find those files. Can somebody please advise or share if you have had the same experience?
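To answer the "where are the files accumulating" question generically: from the HAOS host shell, a depth-limited disk-usage scan usually finds the culprit quickly. The paths below assume a standard HAOS layout and are my assumptions, not values from this thread:

```shell
# List the biggest subdirectories (sizes in KB, largest first).
# /mnt/data/supervisor is the usual HAOS data location - an assumption here.
for dir in /mnt/data/supervisor/media /mnt/data/supervisor/addons /tmp; do
  [ -d "$dir" ] || continue                  # skip paths that don't exist
  echo "=== $dir ==="
  du -xk -d 2 "$dir" 2>/dev/null | sort -rn | head -n 10
done
```

The `-x` flag keeps `du` on one filesystem, so a directory that is really a NAS mount will not inflate the local numbers.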
Frigate is spitting an error that the cache is full:
[2022-09-05 10:22:28] frigate.record ERROR : [Errno 28] No space left on device: '/media/frigate/recordings/2022-09/05/10'
[2022-09-05 10:22:33] frigate.record ERROR : Error occurred when attempting to maintain recording cache
[2022-09-05 10:22:33] frigate.record ERROR : [Errno 28] No space left on device:
But that location is on my NAS, and I only have 150GB used out of the 1TB allocated there. I feel that it is caching elsewhere within the Home Assistant drive, even though the clips, recordings and snapshots are writing to the NAS.
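One hedged guess at the mechanism: if the NAS mount ever drops, or the recording path isn't actually backed by it, writes to that same path silently land on the local disk until it fills. A quick way to check what backs the path at any given moment (the path is an example, use whatever your Frigate config targets):

```shell
DIR="/media/frigate"   # example path - substitute your recording directory
if [ -d "$DIR" ]; then
  df -h "$DIR"         # shows which filesystem actually backs this directory
  if mount | grep -qF " $DIR "; then
    echo "$DIR is a mount point - writes should reach the NAS"
  else
    echo "$DIR is NOT its own mount - writes are landing on the local disk"
  fi
else
  echo "$DIR does not exist on this machine"
fi
```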
Hi all, quick update: I did a fresh install of Home Assistant. The only difference now is that I couldn't squash sda5 anymore. In the past I could do the two sets of steps advised in the instructions for both sda3 and sda5; now sda5 is returning an error. Anyway, I just ignored it and went ahead with my installation, and everything has seemed fine for a few days on the HAOS front. Can somebody advise whether there would be a future problem if only sda3 was squashed and mounted, given that sda3 and sda5 are identical?
Also, another issue now is that I cannot do a full backup. For this one I think I need to unmount the NAS, because the supervisor is saying that the .mp4 files cannot be backed up. Any thoughts on this issue would be greatly appreciated. Thanks, all.
Hello all, I still haven't resolved this issue; my progress is slow because it takes a few days before the Home Assistant folder gets full. I think I may have an idea of what's happening, I just don't know why it's happening, as I followed the guide in this topic to the letter. I think my clips and .jpg snapshots are going to a folder under /media/Local Media, whilst the 24/7 recordings go to the /NAS/frigate/recordings folder. I'd prefer all files to go to the NAS, but for some reason the Frigate installation chooses to write to local media. Also, when I squash the installation I can only squash sda3, not sda5 (I was getting errors such as "cannot find an end block" or something). I'd appreciate any assistance; thanks in advance.
I think you didn't mount the whole media folder, did you?
fstab → /mnt/data/supervisor/media
You mounted /mnt/data/supervisor/Frigate?
Regarding the sda5 error, my thought is that you might have a faulty disk. Are you using a VM or a Raspberry Pi?
And if you want to back up HASS, you just turn off the NAS, make the backup, then turn the NAS back on and reboot HASS.
Because with this solution, as I heard, when you back up you are taking the whole NAS…
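For reference, a whole-media-folder CIFS entry would look roughly like the line below; the server address, share name and credentials are placeholders I made up, not values from this thread:

```
# /etc/fstab - every value below is an example
//192.168.1.10/frigate  /mnt/data/supervisor/media  cifs  username=nasuser,password=naspass,uid=0,gid=0,nofail  0  0
```

Mounting at the media folder itself (rather than a subfolder like Frigate) is what makes everything an add-on writes under media land on the NAS.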
Trying to tag onto this thread even though it's closed. I'm new to the forum; should this have been a new thread instead?
I have created a CIFS file share using an Azure Storage Account and Azure Files that I would like to mount on my HAOS running on my Pi 3.
Azure gives an extensive description of how to mount the drive, shown below (I have changed identities and passwords to avoid finding spam in my drive).
I also removed the sudo commands; sudo doesn't look like it's supported/needed on HAOS (yes, I'm a newbie in HA).
I can connect fine to the file share from other devices.
But will this ever work on HAOS? I'm getting this error:
mount: mounting //xxxxtestingazurefiles.file.core.windows.net/test on /mnt/test failed: Permission denied
sudo mkdir /mnt/test
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/xxxxtestingazurefiles.cred" ]; then
sudo bash -c 'echo "username=xxxxtestingazurefiles" >> /etc/smbcredentials/xxxxtestingazurefiles.cred'
sudo bash -c 'echo "password=FU0vr6PlVK7DFFbP42PReK2sjiKpBWxzcHNfbNSCGeZ1g+MiHzBkZeypCUfV7XS6yRSjDMYHoNJ1+AStvWfNXQ==" >> /etc/smbcredentials/xxxxtestingazurefiles.cred'
fi
sudo chmod 600 /etc/smbcredentials/xxxxtestingazurefiles.cred
sudo bash -c 'echo "//xxxxtestingazurefiles.file.core.windows.net/test /mnt/test cifs nofail,credentials=/etc/smbcredentials/xxxxtestingazurefiles.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30" >> /etc/fstab'
sudo mount -t cifs //xxxxtestingazurefiles.file.core.windows.net/test /mnt/test -o credentials=/etc/smbcredentials/xxxxtestingazurefiles.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
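Two hedged guesses, since I can't test against HAOS directly: the `credentials=` option is expanded by the `mount.cifs` userspace helper, which the stripped-down HAOS `mount` may not ship, and Azure Files generally requires SMB 3.x with encryption when mounted from outside Azure. So building the options inline and pinning the protocol version may get further. The sketch below only prints the command to copy into the HAOS shell; the account name and password are placeholders:

```shell
# Untested sketch - all values are placeholders.
ACCOUNT="xxxxtestingazurefiles"
PASSWORD="REPLACE_WITH_STORAGE_KEY"
OPTS="vers=3.0,username=$ACCOUNT,password=$PASSWORD,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30"
# Print the commands rather than running them, so nothing mounts by accident:
echo "mkdir -p /mnt/test"
echo "mount -t cifs //$ACCOUNT.file.core.windows.net/test /mnt/test -o $OPTS"
```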
I am using a VM on Proxmox 7 running on a Dell server. I believe the disk is OK because I am encountering the sda5 error on a freshly installed HAOS on a new VM. It's weird, because when I did this the first time using my original HAOS VM, I could squash both sda3 and sda5. I was wondering if there has been a change.
I can confirm that I did mount to /mnt/data/supervisor/media, and I followed the instructions exactly as prescribed in this thread.
I tried again today on a fresh Home Assistant install and I'm still getting an error on sdb5 (sdb3 is fine).
I also tried the awesome GParted method using your image; same error.
When I continue, below is what happens:
Hi, I was attempting this process but cannot even get past the SSH and private key part. I did exactly as instructed but I still cannot SSH to the supervisor. Questions on the syntax below:
ssh root@homeassistant.local -p 22222 -i .ssh/homeassistant_rsa
Can "homeassistant.local" be replaced with the IP of the Home Assistant machine?
Also, is .ssh/homeassistant_rsa referring to the private key?
What do you mean when you say save the key pair? I can save the public key part on the USB, but where should the private key be saved, and should I be in that folder when I SSH? Is your "homeassistant_rsa" the private key? Thanks, I think your solution is cleaner; I just cannot get past the first part. Also, is there any change in the process to get to the 2022.0 version of Home Assistant?
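For what it's worth, here is how I understand the key-pair part, sketched with example filenames (the USB "CONFIG" import step is the usual HAOS debug procedure, my assumption rather than something stated in this thread):

```shell
# Generate the pair once; homeassistant_rsa is the PRIVATE key, the .pub is public.
KEY="$HOME/.ssh/homeassistant_rsa"
mkdir -p "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$KEY"

# "Save the key pair" = keep both files. The private key stays on the machine
# you SSH from (any folder works; you point at it with -i). Copy the CONTENTS
# of "$KEY.pub" into a file named exactly "authorized_keys" (no extension) on
# a USB stick labelled CONFIG, plug it into the HAOS box and reboot so it is
# imported (on some versions there is an "Import from USB" action instead).
#
# Then connect with the private key - an IP address works in place of the name:
#   ssh root@homeassistant.local -p 22222 -i "$KEY"
```

You do not need to be in any particular folder when you connect; `-i` takes a full or relative path to the private key.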
Interesting… Are you doing it through my image?
On line 30 there is an if statement that checks whether the etc/fstab under tmp/sda5 exists.
So you need to check whether it actually created the sda5 folder.
This might be a system problem, not the script, as I don't know if there is enough space left there.
Maybe you created a small disk for HASS?
I would need to check remotely, as there are a lot of things that would need to be looked at.
If sda3 is mounted correctly (which it is), then the problem must be storage space on the booted device where you are running the script. Or maybe RAM, as I don't know whether tmp is on RAM, since the live image cannot grow.
But now I can see that the script is not the prettiest. I think when I have time I will do it a different way: I will put the script in the config folder so anyone can edit it right away without re-squashing, and the config folder would be accessible through the file editor. What do you think?
I think that's a great idea. Yes, I am using your GParted image and running the script /home/execute.sh.
I tried increasing the memory of the GParted VM to 8GB and 2 vCPUs; I am encountering the same error. Do you think it has something to do with HAOS 9.0? When I had success with this squash method it was HAOS 8.5. I'm not sure if there has been a change in the filesystem of HAOS itself that no longer allows squashing sda5 (just assuming). By the way, I am doing all of this troubleshooting on a freshly installed HAOS with nothing in it, to isolate the issue and rule out other factors.
Your GParted image is awesome; it's just that when it's time to back up (e.g. the VM, a backup within HAOS, or the Samba backup add-on) things start to get funky. Also, when I install Frigate it somehow splits the folder between media in Home Assistant and media on the NAS.
Side question: what would the issue be if I don't squash sda5? I have attempts where I just squashed and fstab'd sda3 but not sda5, and I didn't see any observable issue. I understand they are like copies of each other, but what would the issue be if the script only took effect on sda3?
Do you have any tips on how to SSH to the supervisor? I have gone through many instructions online with no success. I even tried an add-on to allow SSH on 22222, but no luck either. I tried SSH via Ubuntu and via PuTTY with PuTTYgen; none of it is working. Thanks in advance for your advice.
See my recent attempt using your GParted image and running execute.sh.
It might be possible, as I am still on 2022.8.1.
I am scared to update now.
Never tried it, as it seems harder to me.
When I make a new image I will definitely update to OS 9 and check that.
Regarding sda3/sda5: if it's working for you then it's fine. When you boot HAOS you see, for a split second, a boot menu where you can choose to boot sda3 or sda5, if I understand correctly.
BTW, did you run it as sudo? Just asking; if not, can you check again?