I understand that, and appreciate the quick response. I drilled down on the 62 entities that were showing as unavailable or unknown, and it turns out most of them weren’t sensors I wanted this script to monitor (for example, I have several Dark Sky sensors that were reporting as unavailable). So I added all of these to the ignored group, restarted Home Assistant, and narrowed it down to 4 sensors (all of which genuinely should have been unavailable).
So now that piece of the puzzle makes sense. I just can’t figure out why the automation isn’t triggering a notification?
automation:
  - id: sensor_unavailable_notification
    alias: "Sensor Unavailable Notification"
    description: "Send notification when sensor goes offline."
    trigger:
      # run whenever the unavailable entities sensor's state changes
      - platform: state
        entity_id: sensor.unavailable_entities
    condition:
      # only run if the number of unavailable sensors has gone up
      - condition: template
        value_template: "{{ trigger.to_state.state | int > trigger.from_state.state | int }}"
    action:
      # wait 30 seconds before rechecking the sensor state
      - delay:
          seconds: 30
      # make sure the sensor is updated before we check the state
      - service: homeassistant.update_entity
        entity_id: sensor.unavailable_entities
      # only continue if the current number of sensors is equal to or more than the number when triggered
      - condition: template
        value_template: "{{ states('sensor.unavailable_entities') | int >= trigger.to_state.state | int }}"
      # create a persistent notification
      - service: persistent_notification.create
        data_template:
          title: "Sensor Unavailable"
          message: "### Unavailable Sensors: {{ '\n' + state_attr('sensor.unavailable_entities','sensor_names').split(', ') | join('\n') }}"
          notification_id: 'sensor_alert'
      # also send a pushover notification
      - service: notify.pushover
        data_template:
          message: "### Unavailable Sensors: {{ '\n' + state_attr('sensor.unavailable_entities','sensor_names').split(', ') | join('\n') }}"
OK… I’ve finally got everything working with some tweaks. Now I’m down to just one last issue: I have 4 group sensors that get flagged as ‘unavailable’ even though I have them listed on the blacklist. Is there any coding I can add to just ignore group sensors altogether?
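I haven’t tested this against your config, but the usual trick is to reject the whole group domain before the blacklist check, something along these lines in the value_template (and the same line added to the entities attribute template):

value_template: >
  {{ states
     | selectattr('state', 'in', ['unavailable', 'unknown', 'none'])
     | rejectattr('domain', 'eq', 'group')
     | reject('in', expand('group.entity_blacklist'))
     | list | length }}

Here group.entity_blacklist just stands in for whatever you’re using as your ignore group.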
I’m glad I found this thread. I got everything to work, but it took about 2 hours for a Zigbee sensor to report as unavailable after I pulled the battery out to test. Any suggestions on how to make that wait a little shorter?
The problem with battery-operated sensors is that they don’t keep a constant connection (which is what extends the battery life).
You have to wait for whatever you’re using as a controller to ‘notice’ that the device hasn’t reported in too long and then mark it as unavailable. If that process takes 2 hours, there’s probably not much you can do on the controller side.
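If you want to catch a dead battery sooner than the coordinator does, one workaround (just an untested sketch; sensor.bedroom_temperature is a placeholder for one of your Zigbee sensors) is a template binary sensor that flags the device as stale when its state hasn’t changed for an hour:

binary_sensor:
  - platform: template
    sensors:
      bedroom_temperature_stale:
        friendly_name: Bedroom Temperature Stale
        # piggyback on sensor.time so the template re-evaluates every minute
        entity_id: sensor.time
        value_template: >
          {{ (now() - states.sensor.bedroom_temperature.last_changed).total_seconds() > 3600 }}

This works best for sensors that report regularly (temperature, humidity, battery level); a motion or door sensor can legitimately go hours without a state change.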
Could you possibly share your full working code? I’ve got this deployed, but I’m not getting persistent or app notifications (the badge is working). The thread is so fragmented now that I don’t know which bits to include.
@mboarman did you figure out why your messaging wasn’t working? My notifications aren’t working either, and it looks like the problem is in the data_template message. When I use the one listed here:
After a lot of tweaking, here is the package code I managed to get working. It seems to be pretty reliable on the latest builds. Note: I’m no YAML expert; this is a lot of trial and error with bits and pieces from this thread. Good luck!
sensor:
  - platform: template
    sensors:
      unavailable_entities:
        entity_id: sensor.time
        friendly_name: Unavailable Entities
        unit_of_measurement: items
        icon_template: >
          {% if states('sensor.unavailable_entities')|int == 0 %} mdi:check-circle
          {% else %} mdi:alert-circle
          {% endif %}
        value_template: >
          {{states|selectattr('state', 'in', ['unavailable','unknown','none'])
          |reject('in', expand('group.entity_blacklist'))
          |reject('eq', states.group.entity_blacklist)
          |reject('eq', states.group.alexamailtts)
          |reject('eq', states.group.hsfangroup)
          |reject('eq', states.group.hslightgroup)
          |list|length}}
        attribute_templates:
          entities: >
            {{states|selectattr('state', 'in', ['unavailable','unknown','none'])
            |reject('in', expand('group.entity_blacklist'))
            |reject('eq', states.group.entity_blacklist)
            |reject('eq', states.group.alexamailtts)
            |reject('eq', states.group.hsfangroup)
            |reject('eq', states.group.hslightgroup)
            |map(attribute='entity_id')|list|join(', ')}}

group:
  entity_blacklist:
    entities:
      - sensor.ssl_certificate_expiry
      - binary_sensor.securifi_ltd_unk_model_3257780a_on_off
      - switch.dining_room_shuffle_switch
      - switch.echo_dot_bedroom_repeat_switch
      - switch.echo_dot_bedroom_shuffle_switch
      - sensor.dark_sky_icon
      - sensor.dark_sky_nearest_storm_distance
      - sensor.dark_sky_precip_probability
      - sensor.dark_sky_wind_bearing
      - sensor.dark_sky_wind_speed
      - weather.dark_sky

automation:
  - id: sensor_unavailable_notification
    alias: "Sensor Unavailable Notification"
    description: "Send notification when sensor goes offline."
    trigger:
      # run whenever the unavailable entities sensor's state changes
      - platform: state
        entity_id: sensor.unavailable_entities
    condition:
      # only run if the number of unavailable sensors has gone up
      - condition: template
        value_template: "{{ trigger.to_state.state | int > trigger.from_state.state | int }}"
    action:
      # wait 30 seconds before rechecking the sensor state
      - delay:
          seconds: 30
      # make sure the sensor is updated before we check the state
      - service: homeassistant.update_entity
        entity_id: sensor.unavailable_entities
      # only continue if the current number of sensors is equal to or more than the number when triggered
      - condition: template
        value_template: "{{ states('sensor.unavailable_entities') | int >= trigger.to_state.state | int }}"
      # create a pushover notification
      - service: notify.pushover
        data_template:
          message: "Unavailable Sensors: {{ state_attr('sensor.unavailable_entities','entities').split(', ') | join('\n') }}"
Thanks @mboarman! I did (finally) figure it out, as well. I had to change the “states.sensor” to “states”, which did the trick. Guess I should have read the Templates documentation, since it clearly discourages that.
From (GitHub):
value_template: >
  {% set ignored_sensors = state_attr('group.ignored_sensors', 'entity_id') %}
  {% set unavail = states.sensor | selectattr('state', 'eq', 'unavailable')
     | rejectattr('entity_id', 'in', ignored_sensors)
     | map(attribute='name')
     | list
     | length %}
To:
value_template: >
  {% set ignored_sensors = state_attr('group.ignored_sensors', 'entity_id') %}
  {% set unavail = states | selectattr('state', 'eq', 'unavailable')
     | rejectattr('entity_id', 'in', ignored_sensors)
     | map(attribute='name')
     | list
     | length %}
Question about integration… how do I save this in my system? I have it saved as unavailable_sensors.yaml. I presently don’t have a separate sensors.yaml, but if I do name it as such, I don’t think I can start it with sensor:, correct?
When using it as a trigger in an automation, what would my entity_id be?
It’s not an integration. It’s just a simple (kind of…) sensor.
I’m confused about what you are asking.
If you want to you can just put it into your configuration.yaml file under the “sensor:” section just like all of your other sensors.
If you do want to split it out using an !include, then the text in the file can’t start with “sensor:”.
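Roughly like this (the file names are just the conventional ones; the sensor body is the same one posted earlier in the thread):

# configuration.yaml
sensor: !include sensors.yaml

# sensors.yaml starts with the list item itself, not with "sensor:"
- platform: template
  sensors:
    unavailable_entities:
      # ...rest of the template sensor config from above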
Are you asking how to use the sensor in an automation?
The entity_id will be whatever you called it in the sensor config. Most likely, if you just copied the code above, it will be “sensor.unavailable_entities”.
If you are asking something else then you need to clarify what you mean.
I renamed my YAML file to sensors.yaml and made the appropriate change in my config file.
In the sensors.yaml file, I started the entry as…
platform: template
sensors:
  unavailable_entities:
Here’s my Automation…
- id: '1595647471457'
  alias: Unavailable Sensor Alerts
  description: ''
  trigger:
    - above: '0'
      entity_id: sensor.unavailable_entities
      platform: numeric_state
  condition: []
  action:
    - data:
        message: "{%- for item in states -%}\n {%- if is_state('item', 'unavailable') -%} {{ item.attributes.friendly_name }} unavailable {%- endif -%} A\n{%- endfor -%}\n"
      service: notify.mobile_app_snote8
  mode: single
I’ll need to test this out.
Edit: I do receive the following logged error when restarting HA…
Error loading /config/configuration.yaml: in "/config/configuration.yaml", line 11, column 9: Unable to read file /config/sensors.yaml.
Line 11: sensor: !include sensors.yaml
I also have the following 2 ‘Property X is not allowed’ errors in the sensors.yaml file…
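One thing worth double-checking in that message template before you test: is_state('item', 'unavailable') looks up an entity literally named item, so the loop will never match anything (and the stray “A” before the newline will end up in the message). Untested, but either switching to is_state(item.entity_id, 'unavailable') or reusing the filter pattern from earlier in the thread should work, e.g.:

- service: notify.mobile_app_snote8
  data_template:
    message: >
      Unavailable: {{ states | selectattr('state', 'eq', 'unavailable')
                             | map(attribute='name') | join(', ') }}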
Create a new file in the packages subdirectory. Copy the entire contents of the file in this updated gist (make sure you copy the RAW text; look for the button) into your new file (I just posted an updated version). The file name does not matter, but I would go with something like package_unavailable_entities_sensor.yaml.
Check your config and restart Home Assistant. You should now have a working sensor named sensor.unavailable_entities that you can add to your frontend, plus a couple of working sample automations. This is a good base for creating your own automations to work with this sensor.
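In case packages are new to you: they have to be enabled once in configuration.yaml before anything in the packages directory is picked up (the directory name is whatever you point it at; packages is just the convention):

homeassistant:
  packages: !include_dir_named packages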
I’m curious what you mean by this. The only difference between the two examples in your code is that the first sensor will only look in the sensor domain for unavailable states, while the second will look through every domain. That is perfectly acceptable and is actually documented on the Templating docs page.
This is not the same as the issue discussed in the warning box where the syntax used may result in an unknown state for the sensor. This sensor should always resolve to zero.
Personally, I’d like to know about low batteries before they die, so here’s my solution for that. It isn’t anything I’ve really put out there; it’s pretty tailored to my config, but it might give you some ideas on how to handle it for yours. It creates a persistent notification and an alert push notification.
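A bare-bones sketch of that idea, for anyone who just wants the general shape of it (sensor.front_door_battery and the 20% threshold are placeholders, not the actual config):

automation:
  - id: low_battery_notification
    alias: "Low Battery Notification"
    trigger:
      # placeholder entity; list your battery sensors here
      - platform: numeric_state
        entity_id: sensor.front_door_battery
        below: 20
    action:
      # persistent notification in the frontend
      - service: persistent_notification.create
        data_template:
          title: "Low Battery"
          message: "{{ trigger.to_state.name }} battery is at {{ trigger.to_state.state }}%"
          notification_id: 'low_battery_alert'
      # plus a push notification
      - service: notify.pushover
        data_template:
          message: "{{ trigger.to_state.name }} battery is at {{ trigger.to_state.state }}%"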