I was looking for a way to back up my HomeAssistant to a remote place, and this was super easy to set up. Thanks!
I specifically wanted something automated, so I added a couple of automations via Node-RED:
First, I have an inject node with no payload that repeats at 2am every day. This triggers a call service node with a domain of hassio and a service of snapshot_full (because I want a full snapshot, but you could make this a partial if you wanted).
Next I have another inject node with no payload that repeats at 2:30am every day. This triggers a call service node with a domain of rest_command and a service of google_backup to kick off the sync job.
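For anyone who'd rather do this without Node-RED, here's roughly the same setup as plain Home Assistant automations. This is just a sketch: it assumes rest_command.google_backup is already defined per the add-on's instructions, and the alias names are made up.

```yaml
automation:
  # 2:00am every day: take a full snapshot
  - alias: "Nightly full snapshot"
    trigger:
      - platform: time
        at: "02:00:00"
    action:
      - service: hassio.snapshot_full

  # 2:30am every day: kick off the Google Drive sync
  - alias: "Nightly Google Drive sync"
    trigger:
      - platform: time
        at: "02:30:00"
    action:
      - service: rest_command.google_backup
```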
I could probably chain the sync job right off the snapshot job, but I'm not super savvy with Node-RED, and honestly this works great as it is, so I'll probably just leave it like this.
Edit: Okay, I just tried it, and if you trigger the rest_command service directly off the snapshot_full call, it fires right away without waiting for the snapshot to complete, so that doesn't work. I'm sure there's something I could do to wait for completion, but like I said, it's working just fine with the timers…
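If you do want to chain them in a single automation, the closest thing I can think of (untested, and the delay length is a guess) is to pad with a fixed delay, since the snapshot_full call returns before the snapshot is actually finished:

```yaml
- alias: "Snapshot, wait, then sync"
  trigger:
    - platform: time
      at: "02:00:00"
  action:
    - service: hassio.snapshot_full
    # the service call returns immediately, so give the snapshot
    # time to finish; size this to how long yours actually take
    - delay: "00:30:00"
    - service: rest_command.google_backup
```

In Node-RED terms that's just a delay node wired between the two call service nodes.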
I have been using this add-on for a while with lots of success, and last month it quit working. Looks like, from the debugging, that it's timing out. I have raised the timeout to 500 but it is still doing it. Any ideas?
[2020-02-20 12:20:32 -0500] [51] [INFO] Booting worker with pid: 51
INFO:root:No local_settings to import
DEBUG:root:backup fromPattern: /backup/*.tar
DEBUG:root:backup backupDirID: 1lHfpVa3sD881FZF3f9YKVR4gOpnlgLOR
DEBUG:root:backup user_agent: http://192.168.2.27:8055/
INFO:oauth2client.client:Refreshing access_token
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
WARNING:googleapiclient._helpers:build() takes at most 2 positional arguments (3 given)
INFO:googleapiclient.discovery:URL being requested: GET https://www.googleapis.com/discovery/v1/apis/drive/v3/rest
INFO:googleapiclient.discovery:URL being requested: GET https://www.googleapis.com/drive/v3/files?q=name%3D%27a7821344.tar%27+and+%271lHfpVa3sD881FZF3f9YKVR4gOpnlgLOR%27+in+parents+and+trashed+%3D+false&spaces=drive&fields=files%28id%2C+name%29&alt=json
INFO:root:Backing up /backup/a7821344.tar to 1lHfpVa3sD881FZF3f9YKVR4gOpnlgLOR
DEBUG:root:drive_service = <googleapiclient.discovery.Resource object at 0x7fc8f3324990>
DEBUG:root:MIMETYPE = application/tar
DEBUG:root:TITLE = Hassio Snapshot
DEBUG:root:DESCRIPTION = Hassio Snapshot backup copy
DEBUG:root:media_body: <googleapiclient.http.MediaFileUpload object at 0x7fc8f350f110>
INFO:googleapiclient.discovery:URL being requested: POST https://www.googleapis.com/upload/drive/v3/files?alt=json&uploadType=resumable
[2020-02-20 12:21:13 -0500] [19] [CRITICAL] WORKER TIMEOUT (pid:51)
gb_debug = true
Setting up debug logging
[2020-02-20 17:21:15 +0000] [51] [INFO] Worker exiting (pid: 51)
[2020-02-20 12:21:15 -0500] [53] [INFO] Booting worker with pid: 53
INFO:root:No local_settings to import
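For what it's worth, that CRITICAL WORKER TIMEOUT line comes from gunicorn, whose worker timeout defaults to 30 seconds, and it fires about 40 seconds after the worker boots, right after the resumable-upload POST starts. That suggests the upload is running inside the request handler and outliving the worker. If the timeout you raised to 500 isn't the one gunicorn actually uses (its setting is timeout, or --timeout on the command line), that would explain why raising it changed nothing, but that's a guess without seeing how the add-on wires its settings through.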