Hey everyone,
I'm working on a Hassio add-on that allows you to upload your /backup directory to Dropbox. See the repository and README for more details.
The installation repository to add to your hassio instance is https://github.com/danielwelch/hassio-addons
I've successfully tested the add-on locally in a vagrant-ized hassio instance, but this is my first published add-on, so feedback and error reports are welcome.
Currently the upload requires manually starting the add-on, but in the future I'd like to move toward a more automated solution that, for example, checks whether an upload is needed every X hours. This, combined with automated snapshot creation, should provide a nice automated way to keep hassio backed up safely and off-device.
Nice!
I set up automatic backups and installed/configured your add-on.
Once the add-on is started, will it keep uploading new backups, or do I need to restart it? It seems to stay running once started, so will it automatically upload new backups while it is running?
Ah, never mind - it stops after the sync.
Would be nice if it kept running and auto synced new backups.
It also seems to re-upload old backups even though they have not changed, instead of seeing that they already exist and skipping them. I also got an error saying it can't create the /backup folder - because it already exists after the first run.
Great job.
starting version 3.2.4
> Creating Directory "/backup"... FAILED
> Uploading "/backup/120a8939.tar" to "/backup/120a8939.tar"... DONE
> Uploading "/backup/4008cba4.tar" to "/backup/4008cba4.tar"... DONE
> Uploading "/backup/7cfa9a9b.tar" to "/backup/7cfa9a9b.tar"... DONE
> Uploading "/backup/e39ffe3d.tar" to "/backup/e39ffe3d.tar"... DONE
Some error occured. Please check the log.
Rather than adding extra code for backups, presumably you could set up an automation and just trigger the service from there to do backups at whatever frequency is required?
Sorry, what I meant was: if Daniel's add-on was exposed as a service, you could use an automation to trigger the hassio snapshot, then trigger the Dropbox copy. No need to set timers etc. within the add-on; the frequency is all controlled within the automation.
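Something along these lines, perhaps - completely untested, and it assumes the add-on ends up accepting an upload command over stdin; the addon identifier and the input payload here are both made up:

automation:
  - alias: Nightly snapshot then Dropbox upload
    trigger:
      platform: time
      at: '03:00:00'
    action:
      # take a full snapshot first
      - service: hassio.snapshot_full
      # allow some time for the snapshot to finish before uploading
      - delay: '00:30:00'
      # then ask the add-on to upload whatever is in /backup
      - service: hassio.addon_stdin
        data:
          addon: dropbox_sync      # made up - whatever identifier hassio expects
          input:
            command: upload        # made up - whatever command the add-on defines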
Having had a quick play, it would be good to have some sort of status/timestamp showing when the last successful backup was completed.
Good work.
Gotcha. The service hassio.addon_start would be great to use if the add-on name was recognised.
Tried {"addon": "dropbox-sync"}, and also tried {"addon": "dropbox_sync"} and {"addon": "dropboxsync"}, but none are recognised.
This is a great idea, and it's probably the direction I'll take the add-on. I took a look at the tellstick module, which has this sort of functionality, and I think I'll take inspiration from that and use the hassio.addon_stdin service call to implement this early next week. @DavidFW1960 this will take care of any concern about starting/stopping the add-on. The add-on can just start up, stay on, and listen for messages via a service call.
I've just redesigned the add-on along the lines of our discussion here and pushed version 1.0.0.
See the README for full details, but some highlights:
Uploads are triggered via service calls to hassio.addon_stdin, allowing for automations and scripting of Dropbox uploads. Thanks @jchasey for the suggestion.
Uploads all .tar files in the backup directory to a specified output path, skipping files that have already been uploaded (it no longer re-uploads the entire directory).
Start the add-on once and leave it on. It will listen for messages via service calls, as detailed in the README.
Note: this means no upload is triggered when the add-on is started, only when a service call is made.
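For example, a call like this from dev tools or an automation triggers an upload (the hash in the add-on identifier is a placeholder - see below for how hassio derives the real one):

service: hassio.addon_stdin
data:
  addon: abcd1234_dropbox_sync   # placeholder - use {REPO_HASH}_{ADDON_SLUG} for your install
  input:
    command: upload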
I've tested this locally and it's working great. In the process of this redesign, I had to do a deep-dive into how hassio identifies add-ons for service calls such as stdin, start, stop, etc. I share some details about this at the bottom of the README. Short version: hassio hashes the installed repository name and uses that as an identifier in the format {REPO_HASH}_{ADDON_SLUG}. @DavidFW1960 I guess this is why you were having trouble starting/stopping the add-on previously.
I think this is a great way to come up with a unique identifier, but I don't think it's great that I had to figure it out on my own, manually hash my GitHub URL, and now have to document that hashed result for anyone to be able to make a service call involving this (or any) add-on.
Since understanding how this hash is generated, and being able to produce it for your own repository, is required if you're going to distribute an add-on that uses service calls, it should probably be documented.
EDIT: after more reading, you can actually get this repo identifier from a call to the /addons endpoint of the hassio API. I think that would be good to document here.
Additionally, I think it'd be great if, through the Home Assistant UI, some sort of mapping of add-on name to identifier could be displayed. Off the top of my head, it seems like it'd be nice to show it as additional service information/documentation below the service selection field in developer tools, as there's nothing there currently. But I'm unfamiliar with how these are generated and don't know if it'd be possible.
All this is to say: I haven't tested it with the hashed slug yet (the repo_id is local when using a local repository), so please let me know if there are problems with service calls.
First, the add-on did not detect that there was an update available, but simply uninstalling the old version, going back to the top level of hassio, and re-selecting the add-on for install allowed 1.0.0 to be installed.
Secondly, when trying to test with a service call, I got the following error:
2018-02-19 19:29:18 ERROR (MainThread) [homeassistant.components.hassio] Error on Hass.io API: STDIN not supported by addon
I tried using Start instead of Stdin just to check and it threw this error:
2018-02-19 19:30:52 ERROR (MainThread) [homeassistant.core] Invalid service data for hassio.addon_start: extra keys not allowed @ data['input']. Got {'command': 'upload'}
Looks like I didn't update the config stub used in the repository to align with the actual config in the dropbox-sync repository. I've updated it and specifically added "stdin": true to the config, which should fix this. I didn't version bump because I'm only changing the repository data, and to be honest I don't know how hassio handles updating that information. The quickest solution would probably be removing and re-adding the repository.
That second error should be expected, since you're sending invalid input to a service.
I don't know how hassio determines updates are available, so not sure why. I know in the past, for me, it's taken some time for hassio to display an 'update available' dialog in the UI.
I just went into the add-on and an update was showing as available. I updated and started the service no problem. Now I just need to work out how to trigger it…
Ah, it's also now not creating a backup folder underneath the folder you specify…
So if I look in my DropBox, I have:
Apps
Apps\Home-Assistant-Backups
Apps\Home-Assistant-Backups\backup - this is where my backups ended up with the original Add-on
Apps\Home-Assistant-Backups\home-assistant-backups - this is where my backups are now going with the new add-on.
My Configuration looks like this:
{
  "oauth_access_token": "xxxxxxxx",
  "output": "/home-assistant-backups/"
}
So I think the changed behaviour actually makes sense now.
It also worked after removing the repo and adding it in again. Nice.
One further thing - an enhancement: would it be possible to configure how many backups it keeps in the source? So everything could be backed up, but then it would delete the oldest backups so you are only keeping 'x' number of backups locally? That would be great.
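In the meantime, a shell_command could probably approximate this - a rough, untested sketch, assuming the snapshots live in /backup as plain .tar files and that directory is visible to Home Assistant:

# rough sketch: keep only the 5 newest .tar files in /backup
# (assumes GNU-style ls/tail/xargs are available in the container)
shell_command:
  prune_snapshots: "ls -1t /backup/*.tar | tail -n +6 | xargs -r rm --"

An automation could then call shell_command.prune_snapshots after each successful upload.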