In the binary_sensor, even if is_state always returns true or false, the template sensor itself is not always in one of those states.

Given the initial logs from the UI, timeout/connection errors in general on several templates/services etc. … I would interpret this as the cause: your template sensor can’t receive the status of binary_sensor.smoke_1, so you get your “unknown”.

EDIT: Yes, I’m only guessing :slight_smile: … and I would have jumped directly to my automations and made sure that they either ignore this state or throw a message/warning, as it could be due to the fact that the binary_sensor.smoke_1 was smoked! :wink:

I did, but then I thought maybe I could solve some of them, and as a computer scientist I don’t like warnings either.
Old habits.

If it’s used in an automation, then what’s wrong with triggering from off to on? This ignores “unknown” or “unavailable” states.

Here is how I detect if a door switch is on for one second:
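A minimal sketch of such a trigger (assuming a hypothetical `binary_sensor.door_switch` entity; the exact entity ID and details were in the original configuration) could look like:

```yaml
# Sketch: fire only after the door switch has been "on" for a full second.
# With explicit from/to values, "unknown" and "unavailable" never match.
trigger:
  - platform: state
    entity_id: binary_sensor.door_switch
    from: "off"
    to: "on"
    for:
      seconds: 1
```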

Because on to off, on to unknown, and on to unavailable are all the same for alerting.
But on to “something other than off” for the smoke detector itself is a different story.
Moreover, as the off comes from the last will in MQTT, it can arrive before or after the unavailable, and I wanted to trigger on the first of the two, and only once.
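A trigger of that shape can be sketched with `not_to` (assuming the `binary_sensor.smoke_1` entity from earlier; this is an illustration, not the author’s actual configuration):

```yaml
# Sketch: fires once, on the first transition out of "on" that is not
# "on" -> "off" — e.g. "on" -> "unknown" or "on" -> "unavailable".
trigger:
  - platform: state
    entity_id: binary_sensor.smoke_1
    from: "on"
    not_to: "off"
```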

That is what I’m trying to achieve with a timer, knowing that my D1 mini sends an on state instantly and then every 10 seconds, and knowing that the D1 mini can burn, or the detector but not the D1 mini (yet), or, or, or…

And that is the reason why I wanted to “externalise” the alert mechanism outside the “basic” AI that I’m building around my sensors (last_updated, ping sensors and other things) and simplify it to the maximum:

on: my house is burning
off: stop sounding; in any case, either the house is just a pile of ashes or it was a false alarm. But it is too late anyway.

I think you should move to an automation then. If you have the template you were trying to use, I can help you tailor an automation (with a template that combines the result) that turns an input_boolean on/off and that will always be available and known. I do this in one case with my washing machine because it’s finicky and I don’t want unknown values mucking with my wash-cycle detection.

Apparently during reload templates are unavailable not unknown, during startup they are unknown.

Thanks a lot!
All these things about sensors are sometimes a bit complicated for me…

Thanks, but in the end I got a trigger sensor from Taras and it is working well.
I marked that topic as solved, but it was not really a solution, more a workaround.

I did open a new one, this one, to get to the bottom of why it was not always true or false.

Even if I don’t have the end of the story yet, I’d assume it is about the performance of my RPi, and I will see how it behaves the day I change it. It will be fairly easy to assess, as today this sensor goes unknown at least 2 or 3 times per day.

P.S.: For whoever is interested in the trigger sensor, here is the link to the topic; look at the post marked as the solution.

If I may ask a follow-up on the topic of unknown states, something I’ve wondered about but never asked: some sensors have their states restored. Under what circumstances is that the case, and are template sensors part of that? I think the answer to the latter is yes, since I have a trigger-based binary sensor that does, and I remember the release that brought that change. If states can be restored, why, in general, wouldn’t the last known state be kept instead of becoming unknown? It would then basically behave like an MQTT sensor with a retain flag. Or is it the case that it is restored, but restoring itself takes time, and during that time it would be unknown?


The unknown comes from the time when the integration is booting up; depending on the speed of your machine, you may or may not see this state. My trigger-based template sensors usually don’t have these state changes in the logs or database. But CommandCentral from the dev team confirmed that this unknown-to-known state change does not trigger automations.

Unknown or unavailable. I have never seen a sensor go to “Unknown” unless I physically remove it (i.e., unplug it).

With this, do you mean including to restore states? Are states restored by the integrations or by the core?

The current code leaves this decision up to integrations. HA core handles state restore but does not decide which entities to capture and restore state for. The Python classes for entities which restore state inherit from RestoreEntity to make this happen.

Trigger template entities do restore state. This is because their state can’t be easily reconstituted otherwise. Their state isn’t updated until the trigger fires and that could be a long time after startup.

Non-trigger template entities do not restore state. Their state can be easily reconstituted by simply evaluating the template. Which is what HA does on startup already for all non-trigger template entities.

Because HA is already doing enough writes to the disk, it does not want to add more unless absolutely necessary. The way restore state works is every time the state of an entity which needs to be restored is updated HA writes its state to .storage/core.restore_state. So basically every state change of that entity is written to disk twice - once to history in the DB and once to this JSON file.
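The mechanics described above can be sketched in plain Python. This is a simplified illustration of the pattern, not HA’s actual implementation; the file name and entity ID are stand-ins:

```python
import json
from pathlib import Path

# Stand-in for .storage/core.restore_state
STORE = Path("core.restore_state.json")

def save_state(entity_id: str, state: str) -> None:
    """Called on every state change of a restore-enabled entity:
    the latest state is persisted to a JSON file on disk
    (in addition to the normal history write in the DB)."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data[entity_id] = state
    STORE.write_text(json.dumps(data))

def restore_state(entity_id: str):
    """Called once at startup: return the last persisted state, if any."""
    if not STORE.exists():
        return None
    return json.loads(STORE.read_text()).get(entity_id)

save_state("binary_sensor.smoke_1", "off")
print(restore_state("binary_sensor.smoke_1"))  # off
```

This shows why every state change costs an extra disk write: the JSON file is rewritten each time, on top of the DB insert.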

There’s no reason to do this if the state of the entity can be easily reconstituted as part of integration load. By re-running a template, contacting a service/device to ask for its current state, etc.

Even if the state can’t be easily reconstituted, sometimes the integration author decides that it shouldn’t restore state. This can happen if the entity has very frequent updates: that would be a lot of disk noise for very little value, since the integration will receive a state update shortly after startup anyway.

Or perhaps the restore state feature really can’t work for a particular integration. Like the filter integration for example, which calculates the state based on a collection of recent state updates. Even if it kept the last known state through a restart that wouldn’t be helpful because it would’ve lost the collection of state updates it used to figure out that state. Now it becomes just one data point when before it was based on a whole bunch.

There are other examples. The point is it’s a bit complicated, and as a result the integration has to decide what to do. Feel free to submit a feature request for a particular integration if you think it should be restoring state and isn’t, though.


From dev to dev: thanks for that in-depth answer. I can see that various options are needed and there’s no single default behaviour. I have no specific issue with an integration; it was about understanding the general behaviour better — which, it turns out, comes in the plural.


And that’s the tricky part. According to the logbook, it is not switching to unknown. Nevertheless, it is switching to off (from off if you look at logbook, from unknown if you look at history).

And that transition is triggering the automation “to off”.

Weirdly enough (or logically, I don’t know), if I do the automation “to unknown” or “to unavailable”, that one is not triggered.

So it seems the automation behaves more like the logbook: it does not detect the transition to unknown, but it does detect the one to off from “I don’t know”, since the previous state, according to the logbook, was already off.

That’s so confusing.

Thank you for that explanation; I was wondering why the trigger template given by Taras was working better than the non-trigger template I wrote. Now I know. As the smoke detector is hopefully expected never to trigger, it is definitely something to implement as a trigger template.

So even if HA was switched off for 24 h, after a restart it has the value from before, and the history shows no unavailable or unknown gap for those 24 h, showing the same old state for that period as well?

Why is the DB value not taken, and why is this double store needed?

I’d guess there’s a gap but idk, haven’t really tried that. Give it a shot I suppose.

Well for one, it may not be in the DB. Users can control what is or isn’t recorded to the DB via the options in recorder. If the DB were used for restore state, then excluding an entity there would prevent restore from working, as its state could no longer be restored.

Plus, lots of users use external services like MariaDB and PostgreSQL for their DBs. If HA depended on the DB for restore state, then a poorly timed restart or a connection issue with that service (during HA startup) would completely hose HA. Its window to restore state would be missed, and many things would just be in the wrong state with no way to recover what they’re supposed to be.

I also think in general the HA team does not want anything in the DB that is integral to getting HA into a working state. That means no config, no auth info, and (it would seem) no state restore info. By exclusively using the DB for history, the worst a connection issue or downtime can cause is missing a few events and the history UI not loading for a brief window. HA won’t break if the DB goes missing, no matter when that happens.

Perhaps if sqlite3 were the only DB option some of this could change. Then HA would no longer have to worry about connection issues or downtime with the DB service, since it’s fully internal and local. But the external DB service option means HA has to treat it as a service and assume there will be issues to handle.


I experience something similar with a template sensor based on an attribute. I’ve got an automation with a trigger on updates of that sensor’s value. When I reload my template entities, the automation is triggered: in the trace of the automation I see the value change to null and then back to the original value. I tried to catch that by adding empty not_from and not_to options to the trigger, but that didn’t work. I ended up adding a template condition: {{ trigger.from_state is not none and trigger.to_state is not none }}. But I expected the not_from/not_to approach to work according to the documentation of the state trigger.
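For reference, that workaround could be sketched like this (with a hypothetical `sensor.example` entity standing in for the actual sensor):

```yaml
# Sketch: guard against the spurious null -> value round trip
# seen during a template reload.
trigger:
  - platform: state
    entity_id: sensor.example
condition:
  - condition: template
    value_template: >
      {{ trigger.from_state is not none and trigger.to_state is not none }}
```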