So open a topic of your own and supply details of your issue, rather than just saying “it does not work”.
As somebody running Docker on a Raspberry Pi 4 who was looking to restore it with a backup from HA OS: installing in a container is considered one of the basic installation methods, not advanced. If you are going to consider it advanced, can you move where you list it and add the warning to it?
You can. There are links at the bottom of each document to submit edits to the page.
I think what’s clear from what I’m reading here is:
If backing up from HA OS, then only restore in HA OS
If backing up in Supervised, then only restore in Supervised.
And so on…
I also cannot restore from backup
I’m running HA on a Pi and installed an update, but somehow I got a “system unhealthy” error.
So I tried to restore a backup (Settings > System > Backups). I can select a backup file and I get the message:
“Are you sure you want to wipe your system and restore this backup?” Then I select Restore, and nothing happens after that…
I probably should start a new thread, but seeing as this one is at the top of the pile…
I am trying to move from SD to SSD on a Pi 4. I carried out a new install to the SSD, which seems to work fine. Then, whether I do the restore from the onboarding screen or create an account and do it from HA, it goes for hours without showing any update on the screen, and although the lights are flashing on the drive cable and the Pi, nothing seems to happen.
The backup size is about 160 MB. I have also tried with older backups, even ones from months ago; always the same. I’ve used the CLI screen but can’t see any errors in any logs.
I’d welcome any suggestions, but I have two queries:
- What is about the longest a restore should take (only because I’m waiting hours before retries)?
- Does it matter that the backup was taken while on one IP address which is not the same as the static address of the Pi? (It isn’t that the restore has completed but my browser is still on the other IP, btw!) UPDATE: I’m about to set the IP before I do the restore…
EDIT: Regarding question 2, I can confirm that the restore does not set the IP config to be the same as the source machine’s, as I just restored a small backup taken earlier today.
No that is not correct. Many people have successfully moved installation types using a backup.
It’s just that you have to manually restore the backup by copying files from the backup archive if you are moving from an install type with a Supervisor (HAOS and Supervised) to one without it. Likewise if restoring a backup for disaster recovery on a system without a Supervisor.
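For anyone attempting that manual restore, here is a rough sketch of the archive layout. The paths and backup name are examples only (this is not official tooling); a Supervisor backup is a plain tar whose `homeassistant.tar.gz` member holds the config folder under `data/`. The block builds a tiny stand-in backup so it runs end to end; with a real backup you would point `BACKUP` at the file in `/backup` instead:

```python
import os
import shutil
import tarfile
import tempfile

# --- Build a stand-in backup so the sketch is runnable end to end. ---
# With a real backup, skip this and set BACKUP to e.g. "/backup/<slug>.tar".
work = tempfile.mkdtemp()
os.makedirs(os.path.join(work, "data"))
with open(os.path.join(work, "data", "configuration.yaml"), "w") as f:
    f.write("homeassistant:\n")
inner = os.path.join(work, "homeassistant.tar.gz")
with tarfile.open(inner, "w:gz") as t:
    t.add(os.path.join(work, "data"), arcname="data")
BACKUP = os.path.join(work, "demo_backup.tar")
with tarfile.open(BACKUP, "w") as t:
    t.add(inner, arcname="homeassistant.tar.gz")

# --- The actual manual restore steps. ---
# 1) Unpack the outer tar (contains backup.json and per-part *.tar.gz files).
restore = tempfile.mkdtemp()
with tarfile.open(BACKUP) as t:
    t.extractall(restore)
# 2) Unpack the inner homeassistant.tar.gz; the config lives under data/.
with tarfile.open(os.path.join(restore, "homeassistant.tar.gz")) as t:
    t.extractall(restore)
# 3) Copy data/ into the new install's config folder
#    (a temp dir stands in for /config here).
config_dir = tempfile.mkdtemp()
shutil.copytree(os.path.join(restore, "data"), config_dir, dirs_exist_ok=True)
print(sorted(os.listdir(config_dir)))
```

On a real system, step 3 would target the new install’s config directory (e.g. `/config` inside the container, or `~/.homeassistant` for Core), and you would restart Home Assistant afterwards.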
Yea that’s exactly how I did it.
Yet, here we are a year later and my HAOS x86 backup(s) just hang at a fresh install.
I may be geeky enough to get this fixed, but I shouldn’t be in this situation.
This isn’t going to fly in production as the user base grows beyond the geeks.
While I don’t agree with his aggression, I do admire his effort in telling you to fix your attitude as well. While it may work for you, it sure doesn’t work here, and it sure doesn’t give any indication as to what is happening, or not.
It’s a spinning wheel that never ends and doesn’t give any indication as to what’s failed. The file uploads fine and gets detected as a backup, so at least it’s not a totally corrupt tar file.
People shouldn’t be expected to be a super user to make this work and I’m considering that more from a permissions perspective.
Would you like some assistance with your issue?
Nah, it would appear that the backups I have are only partial, and they also appear to be corrupt.
I’ve mounted and archived the data partition from the drive and am cherry-picking things back together. Both slots have gone corrupt and fail to boot: Slot A hangs at a blinking cursor, and Slot B hits a squashfs corruption issue, panics, and reboots. Logs and DBs got corrupted and filled the tiny 32 GB SSD, which I assume to be failing.
The drive in question filled itself and got corrupted along with the announcement; I can’t say if it’s related or not, so I’m renewing my smart things and such along the way.
A 512 GB drive is installed now with a fresh balenaEtcher x86 flash; it’s coming back together.
What I would suggest here is that the JSON file included in the backup contain a CRC/MD5 (or better) checksum of the included backup tar. The backup could then be checked against this value.
I’m able to unpack the hex-named .tar to an @PaxHeader, backup.json and homeassistant.tar.gz.
Unpacking that inner tar fails, which makes it seem like the backup failed at creation.
What I would like to see here is the backend unpacking the file and verifying the checksum before giving the option to restore, or even hinting that the file is restorable.
Verifying the backup tar at creation would be key here, to be sure you have a valid checksum and not just a checksum of an already-corrupt tar.
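A sketch of what that proposal could look like. Everything here is hypothetical: the `sha256` field does not exist in today’s `backup.json`, and this is not Supervisor code — it just illustrates “hash at creation, verify before restore”:

```python
import hashlib
import json
import os
import tarfile
import tempfile

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large backups don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

work = tempfile.mkdtemp()
inner = os.path.join(work, "homeassistant.tar.gz")
with tarfile.open(inner, "w:gz"):
    pass  # stand-in for the real inner archive

# At creation: record the checksum in the backup metadata.
# ("sha256" is a hypothetical key, not part of the real backup.json schema.)
meta_path = os.path.join(work, "backup.json")
with open(meta_path, "w") as f:
    json.dump({"name": "demo", "sha256": sha256_of(inner)}, f)

# Before restore: recompute and compare; refuse to offer a corrupt file.
with open(meta_path) as f:
    expected = json.load(f)["sha256"]
ok = sha256_of(inner) == expected
print("restorable" if ok else "corrupt - refuse to restore")
```

Verifying at creation (as the post suggests) matters because a checksum computed over an already-corrupt tar would validate successfully and defeat the whole point.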
Can confirm backups are broken.
Wow, years worth of lost data. Home Assistant, what an awesome platform.
New here. After two weeks I got HA running in containers, etc but decided to try a HAOS only install on a NUC. It worked (except for no Python to add my Bluetti AC300). I added quite a few add-ons and configured a lot of home systems. However it would not boot correctly and would hang up requiring manual intervention (type EXIT). OK so I decided to re-install. I did a full backup. Reinstalled. Opening screen showed option to restore from previous backup. Cool. Except no reaction. So I did the normal new customer login and figured I would just go to restore backup in Settings. Except that does not work since I cannot point it to any external directory where the file is located. Any help appreciated or is this just a bug that needs fixing?
It’s unbelievable that in 2023 there are still problems with backups. It’s been two weeks, with many hours a day, that I’ve been trying to migrate Home Assistant to my server. I’ve seen several tutorials, and I even asked for help on this forum: Hassio migration backup problems
I even made a VM with haos_ova-10.2.qcow2,
which is the exact same version as the one I have on the Raspberry Pi; the only difference is that one is aarch64 and the other x86_64. In all this time I couldn’t get anything working. Maybe the only way would be to take the SSD connected to my Raspberry Pi, where Hass.io lives, connect it directly to my server, and with some command make it boot directly inside the VM where I have haos_ova-10.2.qcow2?
Is there at least one command to run in the CLI interface to boot it from the SSD, or to import all the files inside it, or not?
I understand your pain…
The best solution I found was to simply install HA on your new device from scratch, then add all your integrations. After that, you can try to copy/paste all your YAML files into your new configuration. At that point, some weirdness started to manifest. I then spent time commenting out lines within the YAML files associated with any entities/integrations that were not working right. I have no idea why lines of code which worked well in a previous install would not work in a new one. What I believe resolved it was completely removing the affected integration and re-installing it from scratch, allowing it to rebuild itself. Perhaps something glitched out in the process of copying over the YAML/configuration files themselves.
I hope this helps!
I think this post should not be flagged as inappropriate. Instead it should be a warning shot to anyone who does not have the skill or the money to get HA properly deployed, for deployments which are critical to function to commercial/industrial standards.
@Talk2Giuseppe instead of blaming HA for your inadequate deployment, you should blame yourself IMHO. Why? Because if you have a critical system the least you have to do is have a proper recent backup.
Your problem cannot be resolved by going down another route. Why? Because in any other controller-based system, if you lose the controller you lose your entire system; and if you do not have a backup, you are royally snookered.
At least with HA we can deploy it in a high availability environment where you can get up to tier-4 datacentre protection (SLA). HA can be deployed with no single point of failure, even in scenarios where disaster recovery and business continuity are required.
No other domotics controller can provide that, with the exception of Node-RED. So, with your present mindset, your problem cannot be solved by any other platform, regardless of how hard you try.
The point is, there is an option to
“Backup”
But no option to
“Restore”
If you cannot restore, don’t provide the option to backup.
I am long past this thread as OP, but to be honest, it’s pretty weak. It just makes it more confusing for those who struggle more.
There is an option to restore. Copy the config folder to the new system.
Also please make sure you copy your backups off your server. There are many automated ways to do this. Common tasks - Operating System - Home Assistant
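One minimal way to do that by hand, if you don’t want an add-on: periodically copy anything new from the backup folder to an off-box destination. The directories below are stand-ins (in practice the source would be `/backup` on the HA machine and the destination a mounted NAS share or USB drive); the linked docs cover the automated options:

```python
import os
import shutil
import tempfile

# Stand-in directories so the sketch is runnable; replace with the real
# /backup folder and a mounted off-box destination in practice.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
open(os.path.join(src, "example_backup.tar"), "wb").close()  # fake backup

# Copy any backup tar not already present at the destination.
for name in os.listdir(src):
    if name.endswith(".tar") and not os.path.exists(os.path.join(dst, name)):
        shutil.copy2(os.path.join(src, name), os.path.join(dst, name))

print(sorted(os.listdir(dst)))
```

`copy2` preserves timestamps, so you can tell at a glance when each backup was made; run it on a schedule (cron, or an HA automation calling a shell command) and your backups survive the server dying.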