Awesome, this has a lot more detail. Much appreciated.
Exactly what I wanted
I’ve released version 0.96. This release includes a lot of options to help resolve DNS resolution issues. Please see the instructions I’ve posted in this issue if you’re having that problem.
Refresh the add-on store page to make sure it sees the update!
- Adds support for reading the snapshot password from your secrets file. Try setting the snapshot password to “!secret secret_file_entry” (see the sketch after this list).
- Adds better help messaging when it gets errors deleting old snapshots.
- Adds a bunch of debugging options for some users having DNS resolution problems.
- Fixes some out-of-date info in the installation readme.
- Fixes an issue causing the add-on to use cached (old) version numbers for manual snapshots.
- Fixes bugs while parsing DNS record entries.
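For anyone unsure how that looks in practice, here’s a minimal sketch (the secret name and password value below are placeholders I made up):
#secrets.yaml
my_snapshot_password: "something-long-and-random"
Then set the add-on’s snapshot password option to “!secret my_snapshot_password”.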
With this add-on working so great, I got to thinking…
What’s the best way to do a full restore if, say, the SD card in my RPi totally dies?
When I first set up HA, I jumped through quite a few hoops, getting the WiFi running, installing SAMBA and SSH, and generally building a base I could work from, before I even got to using HA.
I’m thinking I’d re-install Hass.io on a new SD card in the RPi, remove the SD card, copy the “backup” folder from Google Drive to that card using another computer, then re-boot the RPi off that card and restore the backup using either the Hass.io page in HA or this add-on.
At that point would I be all back to normal? Even my WiFi network configuration, add-ons, etc.?
You just burn a new SD card with the latest image and let HassIO install, then copy the snapshot across to it (the SD card won’t be readable in a PC once HassIO is on there), so use SSH / Samba / whatever to copy it across over the network. Then just do a ‘restore from image’ in the Snapshots menu.
thanks, works without issue!
Thanks! I’m still new at this so let me break that down. Burn a new SD card with Hass.io and let it install. Configure it to work on my WiFi network (I remember there were a bunch of steps there!). Install/configure Samba and/or SSH. Copy the files to the “backup” directory. Then just do the ‘restore from image.’
What about any other add-ons? Do I need to re-install or will the restore pick them up, too?
Do I have to pair devices again (in my case, mostly Zigbee) which were previously paired?
I guess I’m just not clear on what is, and what is not, included in the snapshot.
Once you restore, it will all be back to normal, including add-ons and devices etc.
Restoring a snapshot will restore all your add-ons (unless you’re making partial snapshots), their configuration, and anything in your /config directory (eg configuration.yaml). I’m not sure where the Zigbee config is stored, but I’d be surprised if it isn’t included too.
Most people (I think) restore by doing:
- Burn HA to a new SD card with etcher (or whatever)
- Configure the network on the SD card (which is surprisingly complicated).
- Insert the SD card, boot it, wait for the install, then log into the new instance.
- Install the samba add-on
- Copy the snapshot to the /backup directory through samba
- Refresh the Hass.io snapshots page.
- Restore the snapshot that should now show up
Instead of using the samba add-on, I’ve also added a workflow to the Hassio Google Drive Backup add-on that lets you upload a snapshot directly from Google Drive (eg instead of installing samba). I always run into trouble trying to get samba to work.
This add-on is great. I got wind of it when Dr Zzs posted a video on YouTube about it.
https://www.youtube.com/watch?v=WcErD5PvIYw
As he pointed out, it would be nice if you had a “Buy me a coffee” link.
Just a thought.
@sabeechen thank you for helping me out that day, I was able to get it sorted the way I want it. Tested it for three days now and confirmed success.
I’m gonna leave this here just in case someone’s interested.
For anyone looking to get the state of the addon processing the backup, here are the steps.
I have made this to help me see the status of upload and success on an LED ticker.
*THIS REQUIRES NODERED
This has three states, as seen in the Web-UI, which can be collected from the JSON at http://hassio.local:1627/status
- Pending - Creating Backup File
- Uploading - Upload progress with percentage
- Backed Up - Backup Successful
First we need to parse the JSON using a REST sensor to get the snapshot status data.
Sensors.yaml
#Google Drive Backup status
- platform: rest
  name: Google Backup State
  username: !secret ha_user
  password: !secret ha_pass
  resource: http://hassio.local:1627/getstatus
  value_template: '{{ value_json.snapshots[2].status }}'
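If you need the ticker to react faster, a REST sensor polls every 30 seconds by default (as far as I know); you can add a shorter interval to the sensor above, for example:
  scan_interval: 10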
For PHONE Notifications.
Automation.yaml
- alias: Snapshots uploaded
  id: 'snapshots_uploaded'
  trigger:
    - platform: state
      entity_id: sensor.google_backup_state
      to: 'unknown'
  condition:
    - condition: state
      entity_id: sensor.snapshot_backup
      state: 'backed_up'
  action:
    - service: notify.iphones
      data_template:
        title: Snapshots Upload Success
        message: Backup Successful today {{ now().strftime('%Y-%m-%d %H:%M') }}
#Note that the success message will show 'unknown' from sensor.google_backup_state, but not to worry, as the condition is checked against the state of sensor.snapshot_backup to confirm the upload.
For LED TICKER
Download Flow
#The result.
Additionally, these messages are also shown on the LED.
Happy Tinkering.
Is anyone using this add-on with Caddy? My question may be more Caddy-related but I thought I’d check here first. I tried adding a section in my Caddyfile for the 1627 port used by default here, but I wasn’t able to access the web UI from https://my-url.duckdns.org:1627. I did have the add-on configured with “use ssl”: true and the correct paths to my ssl keys. I’m able to access the web UI from http://ipaddress:1627 so the add-on is loading correctly, I just can’t get ssl set up.
It’s been a few weeks since the last release; today I released v0.97. Click the refresh icon on the add-on store page to make it see the update if it doesn’t show up for you.
It took a while to get this out because I rewrote the majority of the add-on to get pretty close to 100% unit test coverage. The code base was getting large and patchy enough that I didn’t really trust my manual testing to prevent future regressions. In addition this release includes:
- Added a dialog that halts backups if the add-on ever attempts to delete more than one snapshot at a time. This helps prevent accidental nuking of your snapshot history if, for example, you reinstall the add-on with default settings (happened to at least one user).
- Added failover to known-good (but still configurable) DNS servers if the add-on runs into trouble resolving Google Drive’s IP address. A number of users have had trouble relying on the OS’s built-in DNS, which seems to be some problem with Docker.
- Added better error messaging all over the place.
- Fixes a misconfiguration that caused settings to get reverted to default when using snapshot_name.
- Fixes a number of race conditions. The add-on will now ask you to wait for a sync to finish if you try to delete a snapshot while it’s syncing.
- Fixes an issue where the snapshot state sensor sometimes wouldn’t show up for up to an hour after HA is restarted.
- Fixes a bug in generational backups that could cause the add-on to continuously create and delete snapshots if generational_days==0 (there’s a sketch of these options after this list).
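For anyone wondering what those options look like, here’s an illustrative sketch of the relevant add-on configuration (the values are made up; only the option names snapshot_name and generational_days come from the notes above, and {isotime} is a template value mentioned further down this thread):
snapshot_name: "Snapshot {isotime}"
generational_days: 3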
After updating to 0.97.1 I get the same error message as before concerning DNS
Have drive_ipv4 set to 172.217.1.202
There was a regression that made it read drive_ipv4 incorrectly, I’ve released a fix in 0.97.3 just now.
Kudos to you sabeechen! Installation was flawless and appears to be working fantastic. Much appreciated!
@sabeechen thank you for an easy way to backup, really works so well. I have one request if possible: the ESPHome-created yamls, is there a way to add these to the backup? I tried just restoring ESPHome to see but didn’t get my yaml files back. Thank you kindly
ESPHome keeps its yaml files in the /config directory, which gets backed up with snapshots like everything else. I use ESPHome too, and my configs get restored. Maybe you’re making partial snapshots and excluded the /config directory?
@sabeechen I’m trying to figure out why, since I’ve moved from an RPi running HassOS to a NUC running Hassio in Docker on Ubuntu, the time of my snapshots is off by an hour. The settings in the add-on say 0600, but they are happening at 0700. I’m still not sure why this is happening, but while investigating, I noticed a couple of typos on the Settings page:
Require SSL to acces this web interface. You must already have SSL keys configured to use this. Note that once this setting is chnaged, you’ll need to navigate to the new url to continue using this webpage.
Access needs the second s, and changed has letters reversed.
Thanks for noticing, I’ll fix the typos in the next release. I have the kind of brain that just can’t see typos until someone points them out to me.
That sounds like either a timezone or clock problem; the add-on gets both of these settings from Home Assistant. You can tell Home Assistant which timezone to use by following these directions. You may have it configured for a neighboring timezone that observes DST while you live in one that does not (or vice versa).
To make sure your clock is right, you can try adding {isotime} to your snapshot name in the settings and then making a new snapshot. This will create a snapshot with the current UTC time in its name without any timezone or DST logic, which will tell you what time Home Assistant gave the add-on.
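For reference, the timezone is set in configuration.yaml roughly like this (the timezone value here is just an example, use your own):
homeassistant:
  time_zone: America/Denver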
If all of that looks right, you probably found a bug and I’ll need to dig into it more.
Thanks for the reply.
I have the kind of brain that notices typos. People often ask me to proofread for them.
I think the trouble is likely to do with something in my new Ubuntu setup. My time zone is correctly set in HA. I’m really tired and need to go to bed - will investigate further when my brain is functional!