As for the supervisor updates sensor, I needed to adjust it to:
{% for addon in state_attr('sensor.supervisor_updates', 'addons') %}
{{ addon.name }} in '{{ addon.state }}' state and
{%- if addon.update_available == false %} Up to date: {{ addon.version }}
{%- else %} Update available: ({{ addon.version }} -> {{ addon.version_latest }})
{%- endif %}
{%- endfor %}
using version and version_latest, to pick up any changes. Strange how this seems to vary across the various install methods.
Home Assistant OS 117.0b4 here.
Yep, you're correct. Looks like there's actually an easier way to do it now, at least, since they now just return a boolean update_available field. The curl command needs to either be updated as you describe or changed to this; the old one won't work anymore:
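Something along these lines (a sketch rather than the exact package code; the sensor name and the jq output shape are assumptions that just need to match what the template above expects):

sensor:
  - platform: command_line
    name: Supervisor updates
    # Query the supervisor API and reshape the JSON so the sensor exposes
    # the supervisor's own update info plus the full addons list.
    command: >-
      curl --silent http://supervisor/supervisor/info
      -H "Authorization: Bearer $(printenv SUPERVISOR_TOKEN)"
      | jq '{"update_available": .data.update_available, "version": .data.version, "version_latest": .data.version_latest, "addons": .data.addons}'
    # State = number of addons currently reporting an available update
    value_template: "{{ value_json.addons | selectattr('update_available') | list | count }}"
    json_attributes:
      - update_available
      - version
      - version_latest
      - addons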
Will update the main post in a sec. Since the data model for the addons changed, that also means other things looking at that sensor need to be updated (alerts, Node-RED flows, etc.). Going to take me a sec to update those.
I know this post is really old at this point, sorry, it has been a busy fall.
But FYI, it actually doesn't seem like updates can be tracked for this new container. When the other containers were introduced there was a matching addition to the CLI and API that could be used to find out info about them. There doesn't seem to be anything about "observer" in the CLI or API though. So either its updates are not independently trackable or there's a new way to track them that I'm not aware of.
OK phew, everything is updated. I actually made a number of changes besides just the bug fix, though all the others are minor. Locally I converted to using Threshold sensors instead of Template Binary Sensors where appropriate, so the package is now updated for that as well. Also updated it to leverage the new update_available field instead of comparing version identifiers, as that seems preferable.
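For example, the pattern looks roughly like this (a sketch; the entity names are placeholders):

sensor:
  - platform: template
    sensors:
      supervisor_addon_updates_pending:
        friendly_name: Supervisor addon updates pending
        # Count addons whose update_available flag is true
        value_template: "{{ (state_attr('sensor.supervisor_updates', 'addons') or []) | selectattr('update_available') | list | count }}"

binary_sensor:
  # Threshold sensor turns on whenever the count rises above 0
  - platform: threshold
    entity_id: sensor.supervisor_addon_updates_pending
    name: Supervisor addon updates available
    upper: 0.5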
@pergola.fabio pinging you since we were discussing on that other post. You should be able to pull the new package or just tweak the curl command in that one sensor and be all set.
Ah perfect, thanks! And I see there is an API for observer in there, so I added a sensor for that now as well @DavidFW1960
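Something along these lines, assuming /observer/info returns the same kind of info as the endpoints for the other containers (a rough sketch, not the exact sensor):

sensor:
  - platform: command_line
    name: Observer updates
    command: >-
      curl --silent http://supervisor/observer/info
      -H "Authorization: Bearer $(printenv SUPERVISOR_TOKEN)"
      | jq '.data'
    # True when the observer container has an update pending
    value_template: "{{ value_json.update_available }}"
    json_attributes:
      - version
      - version_latest
      - update_available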
Hi, I'm finding that only the 'HACS' sensor provides me with an update. For a while now (I don't know exactly when it started, but it has been quite some time) I don't seem to get a notification for any of the other sensors…
Is this by design in the recent update of Home Assistant? Are others experiencing this?
I'm trying to get the snapshots list from the supervisor and would like to get the latest slug so I could restore it automatically with an automation if necessary. I'm unable to even get the parsed listing into HA.
I'm not really sure what to make of your 'found_snapshots' file, not really understanding how that's connected to your use case. It sounds like you want to find the latest snapshot and store its slug in a sensor. Based on that, the command I would use is this:
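Roughly the following (a sketch, assuming the supervisor's /snapshots endpoint and that jq is available):

curl --silent http://supervisor/snapshots \
  -H "Authorization: Bearer $(printenv SUPERVISOR_TOKEN)" \
  | jq '.data.snapshots | sort_by(.date) | last'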
This gets the list of snapshots, sorts the list by date ascending, then just keeps the last one (which is the newest by date). You can then leverage that in a sensor like this:
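Again just a sketch; the sensor name and scan interval are placeholders:

sensor:
  - platform: command_line
    name: Latest snapshot
    command: >-
      curl --silent http://supervisor/snapshots
      -H "Authorization: Bearer $(printenv SUPERVISOR_TOKEN)"
      | jq '.data.snapshots | sort_by(.date) | last'
    # State = slug of the newest snapshot; name and date kept as attributes
    value_template: "{{ value_json.slug }}"
    json_attributes:
      - name
      - date
    scan_interval: 3600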
Seems like that gets you everything you need to do what I believe you're trying to do.
Is there some reason you're capturing a list of snapshot slugs in a text file and using it to filter the list? If so, you'll have to explain why so I can try to figure out what's going wrong. It doesn't seem necessary for getting the slug of the latest snapshot though, since the data for each snapshot includes a date.
Thank you for your effort and this did indeed work.
A little bit more background on this: I'm trying to build a redundant/highly available HA backup server which would launch automatically if the main server drops offline. Otherwise, the backup server's overlapping automations and processes are kept turned off by an automation. Keeping the configs of two servers updated is quite a hindrance, so I was looking into this by trying to restore the latest snapshot automatically on the backup server.
After a total of 24 hours trying to solve this, I've given up for now. Some sources indicate that calling an SSH command could bypass the limitations, but I see so many possibilities for breakage here that I have decided, for now, to do the 'cloning' of the main server to the backup server by hand.
My daily Samba backup goes straight to the backup server, so this is quite a good emergency solution for now. Maybe this way I have more control over the backup server, I can test and experiment with it, and I don't have to think about the automatic backup system running over my unfinished tinkering.
On my phone at the moment so I can't test this myself, but I notice the URL you are using to restore seems to be incorrect. Here is the API for interacting with snapshots via the supervisor. There are two endpoints you can use to restore a snapshot via the API:
/snapshots/<snapshot>/restore/full
/snapshots/<snapshot>/restore/partial
The slug goes in the URL as a resource path parameter, not the body. The body is just the password for a full restore or the details of what to restore for a partial restore (and the password).
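For example, a full restore would look roughly like this (the slug and password are placeholders):

# Full restore of a specific snapshot; include the password body only if
# the snapshot is protected.
curl -X POST http://supervisor/snapshots/<snapshot_slug>/restore/full \
  -H "Authorization: Bearer $(printenv SUPERVISOR_TOKEN)" \
  -H "Content-Type: application/json" \
  -d '{"password": "your-snapshot-password"}'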
If you're trying to restore on another system, you might have to use the upload snapshot API to get the snapshot into the correct place first, unless the two share a folder for backups.
Also just an FYI, to get the value of the latest_snapshot sensor your template should be {{ states('sensor.latest_snapshot') }} not {{ latest_snapshot }}.
Cool idea though! If you get it all working you should make a thread or guide about your setup. Would be great to have things auto fail over to a backup replica.