I’ve created an addon that automatically uploads Home Assistant snapshots to Amazon S3 for offsite storage. The addon monitors the /backup directory, and as soon as a snapshot is created, it uploads the file.
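In spirit, the monitoring loop boils down to something like this stdlib-only polling sketch (this is a simplified illustration, not the addon’s actual code — the addon reacts to filesystem events via the watchdog library and uploads with boto3; `upload` here is just a placeholder callback):

```python
import time
from pathlib import Path


def find_new_snapshots(backup_dir: Path, seen: set) -> list:
    """Return snapshot archives in backup_dir that have not been seen yet."""
    new_files = [p for p in sorted(backup_dir.glob("*.tar")) if p not in seen]
    seen.update(new_files)
    return new_files


def monitor(backup_dir: Path, upload, interval: float = 10.0) -> None:
    """Poll backup_dir forever, handing each new snapshot to the upload callback."""
    seen = set()
    find_new_snapshots(backup_dir, seen)  # skip snapshots that already exist
    while True:
        for snapshot in find_new_snapshots(backup_dir, seen):
            upload(snapshot)  # real addon: S3 upload plus a Supervisor API metadata lookup
        time.sleep(interval)
```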
I created this addon because I’m a big proponent of Amazon Web Services (AWS) and I prefer to use their services when possible. Use of this addon requires knowledge of AWS services (IAM and S3) and is tailored to individuals with that experience. If you don’t have experience with AWS I would recommend using one of the other addons for offsite snapshot backups.
Use Home Assistant automations to trigger snapshot creation on whatever schedule you like. S3 lifecycle policies can be used to limit the amount of storage used in S3.
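For example, an automation along these lines creates a nightly snapshot that the addon then picks up (the service name and schedule are illustrative — check the Supervisor integration docs for your Home Assistant version):

```yaml
# Illustrative automations.yaml entry -- adjust the time and name to taste.
- alias: "Nightly full snapshot"
  trigger:
    - platform: time
      at: "03:00:00"
  action:
    - service: hassio.snapshot_full
      data:
        name: "Automated {{ now().strftime('%Y-%m-%d') }}"
```

On the S3 side, a lifecycle rule that expires or transitions objects after some number of days keeps the bucket from growing without bound.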
If you’d like to give it a try, just add my repository to your Home Assistant addons at https://github.com/gdrapp/hass-addons and follow the instructions to install Amazon S3 Backup. If the addon doesn’t show up, you first need to enable Advanced mode in your Home Assistant user profile.
I’d love any feedback and am happy to accept pull requests with changes. Thanks.
Hi gdrapp,
Thanks for creating this!
It works when I run it manually: it uploaded all my snapshots to my S3 bucket. Unfortunately, when I create a new snapshot and wait for the file to be uploaded, the following appears in the log:
Add-on version: 1.1
You are running the latest version of this add-on.
parse error: Expected string key before ':' at line 1, column 4
[22:42:53] ERROR: Unknown HTTP error occured
System: (armv7 / raspberrypi4)
Home Assistant Core: 0.117.5
Home Assistant Supervisor: 2020.11.0
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
[cont-init.d] 00-banner.sh: exited 0.
[cont-init.d] 01-log-level.sh: executing...
Log level is set to INFO
[cont-init.d] 01-log-level.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[22:42:54] INFO: Starting Amazon S3 Backup...
WARNING:__main__:Local file /backup/158fcdc4.tar not found in S3
INFO:__main__:Monitoring path /backup for new snapshots
INFO:__main__:Processing new file /backup/ea1dcf58.tar
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/site-packages/watchdog/observers/api.py", line 199, in run
    self.dispatch_events(self.event_queue, self.timeout)
  File "/usr/lib/python3.8/site-packages/watchdog/observers/api.py", line 368, in dispatch_events
    handler.dispatch(event)
  File "/usr/lib/python3.8/site-packages/watchdog/events.py", line 537, in dispatch
    _method_map[event_type](event)
  File "/usr/bin/amazon-s3-backup/amazon-s3-backup.py", line 34, in on_created
    self.process(event)
  File "/usr/bin/amazon-s3-backup/amazon-s3-backup.py", line 53, in process
    upload_file(Path(event.src_path),
  File "/usr/bin/amazon-s3-backup/amazon-s3-backup.py", line 145, in upload_file
    snapshot_detail = supervisor_api.get_snapshot(slug)
  File "/usr/bin/amazon-s3-backup/supervisorapi.py", line 87, in get_snapshot
    return response.get("data")
AttributeError: 'NoneType' object has no attribute 'get'
___________________
What’s wrong?
Thanks for the help!
I was looking for a tool like this. Like you, I enjoy using AWS and would prefer to back up there. Thank you for making the tool!
Firstly, a suggested feature:
Scheduling - i.e. check every X minutes/hours/days
At the moment it appears to run constantly, which seems like an unnecessary drain on resources (please correct me if I’m wrong).
Secondly, I am having an issue: no files seem to be uploading. I created a new IAM user with permission to list all buckets but full access to only one bucket. I have used the same policy with other, similar tools on Windows and it works fine, so I don’t think this is the issue, but I’m not sure what else it could be. The log mentions something about “boto3” not being found.
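For reference, the policy is shaped along these lines (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-hass-backups",
        "arn:aws:s3:::my-hass-backups/*"
      ]
    }
  ]
}
```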
Here is the information from the log:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-banner.sh: executing...
-----------------------------------------------------------
Add-on: Amazon S3 Backup
Automatically backup Home Assistant snapshots to Amazon S3
-----------------------------------------------------------
Add-on version: 1.1
You are running the latest version of this add-on.
System: Home Assistant OS 7.4 (aarch64 / raspberrypi3-64)
Home Assistant Core: 2022.2.9
Home Assistant Supervisor: 2022.01.1
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
[cont-init.d] 00-banner.sh: exited 0.
[cont-init.d] 01-log-level.sh: executing...
Log level is set to TRACE
[cont-init.d] 01-log-level.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[11:12:40] INFO: Starting Amazon S3 Backup...
[11:12:40] TRACE: bashio::config: aws_access_key
[11:12:40] TRACE: bashio::jq: /data/options.json if (.aws_access_key == null) then
null
elif (.aws_access_key | type == "string") then
.aws_access_key // empty
elif (.aws_access_key | type == "boolean") then
.aws_access_key // false
elif (.aws_access_key | type == "array") then
if (.aws_access_key == []) then
empty
else
.aws_access_key[]
end
elif (.aws_access_key | type == "object") then
if (.aws_access_key == {}) then
empty
else
.aws_access_key
end
else
.aws_access_key
end
[11:12:42] TRACE: bashio::jq: /data/options.json if (.bucket_region == null) then
null
elif (.bucket_region | type == "string") then
.bucket_region // empty
elif (.bucket_region | type == "boolean") then
.bucket_region // false
elif (.bucket_region | type == "array") then
if (.bucket_region == []) then
empty
else
.bucket_region[]
end
elif (.bucket_region | type == "object") then
if (.bucket_region == {}) then
empty
else
.bucket_region
end
else
.bucket_region
end
[11:12:42] TRACE: bashio::config: storage_class
[11:12:42] TRACE: bashio::jq: /data/options.json if (.storage_class == null) then
null
elif (.storage_class | type == "string") then
.storage_class // empty
elif (.storage_class | type == "boolean") then
.storage_class // false
elif (.storage_class | type == "array") then
if (.storage_class == []) then
empty
else
.storage_class[]
end
elif (.storage_class | type == "object") then
if (.storage_class == {}) then
empty
else
.storage_class
end
else
.storage_class
end
[11:12:42] TRACE: bashio::config: upload_missing_files
[11:12:42] TRACE: bashio::jq: /data/options.json if (.upload_missing_files == null) then
null
elif (.upload_missing_files | type == "string") then
.upload_missing_files // empty
elif (.upload_missing_files | type == "boolean") then
.upload_missing_files // false
elif (.upload_missing_files | type == "array") then
if (.upload_missing_files == []) then
empty
else
.upload_missing_files[]
end
elif (.upload_missing_files | type == "object") then
if (.upload_missing_files == {}) then
empty
else
.upload_missing_files
end
else
.upload_missing_files
end
[11:12:43] TRACE: bashio::config: keep_local_snapshots
[11:12:43] TRACE: bashio::jq: /data/options.json if (.keep_local_snapshots == null) then
null
elif (.keep_local_snapshots | type == "string") then
.keep_local_snapshots // empty
elif (.keep_local_snapshots | type == "boolean") then
.keep_local_snapshots // false
elif (.keep_local_snapshots | type == "array") then
if (.keep_local_snapshots == []) then
empty
else
.keep_local_snapshots[]
end
elif (.keep_local_snapshots | type == "object") then
if (.keep_local_snapshots == {}) then
empty
else
.keep_local_snapshots
end
else
.keep_local_snapshots
end
Traceback (most recent call last):
  File "/usr/bin/amazon-s3-backup/amazon-s3-backup.py", line 12, in <module>
    from s3bucket import S3Bucket, S3BucketError
  File "/usr/bin/amazon-s3-backup/s3bucket.py", line 3, in <module>
    import boto3
ModuleNotFoundError: No module named 'boto3'
[cont-finish.d] executing container finish scripts...
[cont-finish.d] 99-message.sh: executing...
[cont-finish.d] 99-message.sh: exited 0.
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.