Add-on: Home Assistant Google Drive Backup

I have set up a new HA instance (on Proxmox) and now I would like to restore from a snapshot. The issue is that I cannot “upload” the image from Google Drive to HA. I always get a 504 error at 36% and then it stops at 37%.

My snapshots are more than 4 GB.

Did you find a solution for this? I just installed this add-on and I’m also using a Synology.

1 Like

I did but you probably won’t like it…

How are you running on Synology? Using the native package from the community? If so, it’s been pulled and doesn’t work any longer.

I ended up taking a backup and migrating to a VM on Synology using the HassOS image. It was super easy to cut over to, and it’s running perfectly now (also no more unhealthy state).

1 Like

Thank you for this addon. It’s very useful!

1 Like

How can I turn off this feature: “Uploads snapshot to Drive, even the ones it didn’t create.”?
I want to make a snapshot every week or so on my NAS, and only sometimes on Google Drive. Is this possible? Thank you.

1 Like

Can’t find the update. I did the Reload thing, but there’s no new version.

Latest should be 0.103.1. If you’re sure it’s not seeing that version, do the reload again (Supervisor > Add-on Store > "..." > Reload), then check the supervisor logs (Supervisor > System). If it had trouble checking for a new version, it’s most likely a problem in the supervisor and there should be some error message there.
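If you have SSH or console access, a rough sketch of the same reload and log check from the command line (assuming the stock ha CLI commands):

$ ha addons reload      # same as Supervisor > Add-on Store > "..." > Reload
$ ha supervisor logs    # errors from the version check should show up here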

@Uzirox It’s not possible, but I’ll consider adding it as a feature in a new release.

2 Likes

Hi all,

backup_directory_path: ''

Can someone tell me what I can do with this?

The reason I ask is that when I go into my Nginx Proxy Manager Docker container, I see the path:

/backup

Inside this I see the snapshots that are created…
But it is strange that those files are stored inside the NPM Docker container and not inside the Home Assistant Docker container.

Take a look at the volume bindings for your containers using docker inspect, e.g.:

$ docker inspect -f '{{.HostConfig.Binds}}' homeassistant
$ docker inspect -f '{{.HostConfig.Binds}}' addon_cebe7a76_hassio_google_drive_backup

You will probably see that, just like the HA and Google Drive Backup add-on containers, the nginx proxy manager container also has a volume binding for /backup, very likely to the same host path as the other two containers. That means the very same host path is available within each container as /backup.

I see /backup in HA as /share/backup and I can see the snapshots there.
When I go into NPM via Portainer and create a folder, I then see the same folder in HA under /share/backup.

I think I made a mistake in how I understood this works…

The /share/ folder is also visible in the NPM Docker container.

There is nothing wrong or strange… sorry for the confusion.

root@ha:/usr/share/hassio/tmp# docker inspect -f '{{.HostConfig.Binds}}' addon_cebe7a76_hassio_google_drive_backup
[/dev:/dev:ro 
/usr/share/hassio/addons/data/cebe7a76_hassio_google_drive_backup:/data:rw 
/usr/share/hassio/homeassistant:/config:ro 
/usr/share/hassio/ssl:/ssl:ro 
/usr/share/hassio/backup:/backup:rw]

root@ha:/usr/share/hassio/tmp# docker inspect -f '{{.HostConfig.Binds}}' addon_a0d7b954_nginxproxymanager
[/dev:/dev:ro 
/usr/share/hassio/addons/data/a0d7b954_nginxproxymanager:/data:rw 
/usr/share/hassio/ssl:/ssl:rw 
/usr/share/hassio/backup:/backup:rw]

root@ha:/usr/share/hassio/tmp# docker inspect -f '{{.HostConfig.Binds}}' homeassistant
[/dev:/dev:ro 
/run/dbus:/run/dbus:ro 
/run/udev:/run/udev:ro 
/usr/share/hassio/homeassistant:/config:rw 
/usr/share/hassio/ssl:/ssl:ro 
/usr/share/hassio/share:/share:rw 
/usr/share/hassio/media:/media:rw 
/etc/machine-id:/etc/machine-id:ro 
/usr/share/hassio/tmp/homeassistant_pulse:/etc/pulse/client.conf:ro 
/usr/share/hassio/audio/external:/run/audio:ro 
/usr/share/hassio/audio/asound:/etc/asound.conf:ro]

backup_directory_path is a configuration option I use for development; it’s the file path to the backup folder. Setting it to anything will override the default and can only make the addon stop working properly. You’re seeing it because a recent supervisor change fills all of the unspecified addon config options with default values like the empty string or 0. I have strong opinions on why this is bad, but there isn’t much I can do about it except complain at the people who made it. To be honest, it’s kind of a sore spot for me at this point.

I’m releasing an update that removes a lot of the more confusing options that you guys shouldn’t see, but in general I’d recommend using the settings UI inside the addon to make settings changes instead of the interface in the supervisor. If there is a configuration option that does something useful I’ve included it in that UI. I don’t hide functionality through obscurity.

Hi. I’m having some trouble with the addon.
I keep getting a Connection Timeout.
The only thing I have changed is that I started using Nabu Casa. Could that be the problem?
I tried googling, but couldn’t find an answer/solution.

I tried to search in the thread, but it’s very long and I didn’t see this mentioned, so apologies if it has been addressed… I have a hassio install on a Pi, on the latest version, and when I install the add-on and then try to start it, I get the following error. I tried removing it and installing it again, with the same issue. I haven’t even gotten to the point where I create a configuration.

21-04-01 22:22:09 ERROR (MainThread) [aiohttp.server] Error handling request
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/aiohttp/web_protocol.py", line 422, in _handle_request
    resp = await self._request_handler(request)
  File "/usr/local/lib/python3.8/site-packages/sentry_sdk/integrations/aiohttp.py", line 123, in sentry_app_handle
    reraise(*_capture_exception(hub))
  File "/usr/local/lib/python3.8/site-packages/sentry_sdk/_compat.py", line 54, in reraise
    raise value
  File "/usr/local/lib/python3.8/site-packages/sentry_sdk/integrations/aiohttp.py", line 113, in sentry_app_handle
    response = await old_handle(self, request)
  File "/usr/local/lib/python3.8/site-packages/aiohttp/web_app.py", line 499, in _handle
    resp = await handler(request)
  File "/usr/local/lib/python3.8/site-packages/aiohttp/web_middlewares.py", line 119, in impl
    return await handler(request)
  File "/usr/src/supervisor/supervisor/api/security.py", line 135, in system_validation
    return await handler(request)
  File "/usr/src/supervisor/supervisor/api/security.py", line 197, in token_validation
    return await handler(request)
  File "/usr/src/supervisor/supervisor/api/utils.py", line 65, in wrap_api
    answer = await method(api, *args, **kwargs)
  File "/usr/src/supervisor/supervisor/addons/addon.py", line 627, in start
    await self.instance.run()
  File "/usr/src/supervisor/supervisor/utils/__init__.py", line 32, in wrap_api
    return await method(api, *args, **kwargs)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/src/supervisor/supervisor/docker/addon.py", line 467, in _run
    "Starting Docker add-on %s with version %s", self.image, self.version
  File "/usr/src/supervisor/supervisor/docker/addon.py", line 58, in image
    return self.addon.image
  File "/usr/src/supervisor/supervisor/addons/addon.py", line 394, in image
    return self.persist.get(ATTR_IMAGE)
  File "/usr/src/supervisor/supervisor/addons/addon.py", line 159, in persist
    return self.sys_addons.data.user[self.slug]
KeyError: 'cebe7a76_hassio_google_drive_backup'
21-04-01 22:22:09 INFO (SyncWorker_0) 

@calisro what you’re seeing is definitely a supervisor bug. There are many problems the supervisor can run into and they’re all very mysterious. There are a lot of things it caches and … things can just go wrong all over the place in ways that look random if the docker environment it creates isn’t just perfect. What I’d try doing first is:

  • Make sure the supervisor/HA is up to date
  • Uninstall the addon
  • Remove the addon’s repository
  • Restart the host machine
  • Add the repository back and reinstall the addon
  • Stop here and check the supervisor logs; there might be a more helpful error message there from when the supervisor was starting up or during the addon’s installation.
  • Start the addon

This solves, like, 90% of the problems people run into with the supervisor. A rough CLI version of those steps is sketched below.
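For what it’s worth, if you have SSH or console access, most of those steps can also be done with the ha CLI. This is just a sketch (the add-on slug is taken from the container name in the traceback above, and the repository remove/re-add steps still happen in the UI):

$ ha core update                                            # make sure HA is up to date
$ ha addons uninstall cebe7a76_hassio_google_drive_backup   # uninstall the addon
# remove the addon's repository in the UI, then restart the host:
$ ha host reboot
# once it's back up, re-add the repository in the UI and reinstall:
$ ha addons install cebe7a76_hassio_google_drive_backup
$ ha supervisor logs                                        # check for errors from startup or the install
$ ha addons start cebe7a76_hassio_google_drive_backup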

1 Like

Amazing

Quick and painless install - works perfectly.
Thank you for your efforts.

Amazing Add-on!!
Could someone help me with an automation that notifies my phone every time a new backup is made?
I only managed to make one that tells me when there is an error with the backup.

First, I’d advise against having a notification when a snapshot is created because notifying when things are working correctly contributes to alarm fatigue.

But if you still insist:

  • First create a template sensor in configuration.yaml that stores the name of the latest backed up snapshot:
    sensor:
    - platform: template
      sensors:
        latest_backup:
          value_template: >-
            {% set found=namespace(snapshot=None,time=0) %}
            {% for snapshot in state_attr('sensor.snapshot_backup', 'snapshots') %}
              {% if as_timestamp(snapshot.date) > found.time and snapshot.state == "Backed Up" %}
                {% set found.time = as_timestamp(snapshot.date) %}
                {% set found.snapshot = snapshot %}
              {% endif %}
            {% endfor %}
            {{ found.snapshot.name }} ({{ found.snapshot.size}})
    
  • Then make an automation that sends you a notification when the name of the most recently backed-up snapshot changes, ignoring the case where the state changes from an unknown value (when HA starts up initially).
    - alias: Notify for a new backup
      trigger:
      - platform: state
        entity_id: sensor.latest_backup
      condition:
      - condition: template
        value_template: >-
          {{ trigger.from_state.state != None
          and trigger.from_state.state != ''
          and trigger.from_state.state != 'unavailable' }}
      action:
      - service: notify.your_notification_target
        data:
          title: New Snapshot
          message: Created and backed up {{states('sensor.latest_backup')}}
    

Note that this will also trigger an erroneous notification if you delete the latest snapshot. It would also be possible to build this using only an automation, without the template sensor, but the logic for handling those cases would be pretty complicated.

2 Likes

Hello.
With a new installation (Home Assistant OS 5.13, supervisor-2021.04.0, core-2021.4.4), I can’t install this add-on.
The message: 21-04-14 15:06:43 ERROR (SyncWorker_3) [supervisor.docker.interface] Can’t install sabeechen/hassio-google-drive-backup-aarch64:0.103.1 → 500 Server Error for http+docker://localhost/v1.40/images/sabeechen/hassio-google-drive-backup-aarch64:0.103.1/json: Internal Server Error (“layer does not exist”).
This add-on worked fine with my previous installation.
Help, please!