No, it won’t fail
Edit: Bugger, Tom is quick
You both were pretty fast. Thanks!
still, be careful…
you’d best check if you have a few templates using `{{ states | ... }}`,
because these will now be updated constantly (creating listeners on all states), and if you have more than a few states, counting all of them will drain the life from your instance…
Remember the counter templates: until an updated PR is merged, you’d best take these out before updating.
Further to Marius, my CPU usage went from 3–4% (it used to vary during the day) down to a rock solid 1%
His went up by (if I recall correctly) about 10%
I have about 20 count sensors, most of the rest pretty vanilla
Trying to create this automation (and a script for manual updating) wasn’t the challenge. I made them as a solid way to do what we need to do here. They are ready to be filled with unavailable entities.
My issue now is how to use another integration that would support more than 255 characters…
As said, if the list of unavailable or unknown sensors is quite long due to circumstances yet to be solved, the 255-character limit makes this rather useless.
I’ve thought about sensor.file or even markdown Lovelace cards, but they all use the sensor domain we are trying to avoid.
Which made me wonder: before we had attribute_templates on the template sensor, this same 255-character limit plagued us. It was so cool that we could finally use the template’s state for the counter, and attribute_templates for the listings. Couldn’t some kind of attribute be added to another domain (I’m thinking of input_text here, though that might feel awkward) with which we could achieve the same end goal: have a text of more than 255 characters displayed in the frontend, without it being a template sensor…
Without that, the whole exercise of rebuilding the former template sensor into an automation with an input_text is futile.
Could this be done via a python_script a la:
```python
count_all = 0
domains = []
attributes = {}

for entity_id in hass.states.entity_ids():
    count_all = count_all + 1
    entity_domain = entity_id.split('.')[0]
    if entity_domain not in domains:
        domains.append(entity_domain)

attributes['Domains'] = len(domains)
attributes['--------------'] = '-------'
for domain in sorted(domains):
    attributes[domain] = len(hass.states.entity_ids(domain))

attributes['friendly_name'] = 'Entities'
attributes['icon'] = 'mdi:format-list-numbered'
hass.states.set('sensor.overview_entities', count_all, attributes)
```
which would allow a Lovelace representation like:
What about a simple python_script for those ‘count’ things?
```python
count = 0
attributes = {}

for entity_id in hass.states.entity_ids():
    state = hass.states.get(entity_id).state
    if state in ['none', 'unavailable', 'unknown']:
        attributes[entity_id] = state
        count = count + 1

attributes['friendly_name'] = 'Unavailable Entities'
attributes['icon'] = 'mdi:message-alert-outline'
hass.states.set('sensor.unavailable_entities', count, attributes)
```
Well, I’ll be da… we cross-posted that. Yes, that was what I was looking for. Would you help me rebuild my full template sensor like this?
maybe let me create a dedicated thread first?
You could split the output into 255-character chunks.
So text 1 = blah[0:255]
text 2 = blah[255:510]
Etc.
The problem with this is that you can only assign ONE VALUE per Template, so you’d have to run the same template multiple times under the action
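A minimal sketch of that chunking idea in Python (the helper function, entity names, and commented-out service call are assumptions for illustration, not from this thread):

```python
# Split a long report into 255-character chunks so each piece fits within
# Home Assistant's state/value length limit.
def chunk_text(text, size=255):
    """Return consecutive slices of `text`, each at most `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Example long string standing in for the list of unavailable entities.
report = ", ".join("sensor.example_{}: unavailable".format(n) for n in range(30))
chunks = chunk_text(report)

# Inside a python_script you could then push each chunk to its own input_text
# (assumed helpers input_text.report_1, report_2, ...):
# for n, chunk in enumerate(chunks, start=1):
#     hass.services.call('input_text', 'set_value', {
#         'entity_id': 'input_text.report_{}'.format(n),
#         'value': chunk,
#     })
```

As the next post notes, the awkward part is fanning the chunks out to multiple helpers, since each template or service call sets only one value.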
It strikes me that you are creating sensors for sensors’ sake
Or that you like collecting statistics for stuff you don’t need and are never going to look at.
I think your greatest optimisation would be to discard as many sensors as you can
Sorry, but this is what I’ve learned from your many posts on this issue
Thanks but it was too late for the warning. The ‘all clear’ from tom and mutt was all I needed to press the button.
I can see my CPU has gone from around 2-3% to around 7% so I can live with that while I wait for the PR you mentioned, and while I sort out all the issues with my config following the upgrade.
If I was one to complain about breaking changes (and I’m not) I would have plenty to complain about this time around! I’ve got all sorts of new errors and warnings in my logs…
I can see that I’m going to be busy today
You enjoy it really !
I just updated to 0.115.2.
I removed all my entity_id definitions from my binary sensors and sensors. Only a dozen or so required the date or time to be added to the template (I used {% set update = states('sensor.time') %} or {% set update = states('sensor.date') %}). I have reasonably simple templates with no use of all states or expanded groups.
I’ve seen up to a 50% increase in processor use.
My background processor use range has changed from 2-3% up to 3-4%.
Blatant sensationalism at its finest. Well done sir, nicely done
I’m wondering …
I have a few sensors that evaluate, then I use the sensor itself to update the icon.
Logic tells me that I’m forcing a double evaluation, yet if I split it out into two FULL templates in the same sensor, I’m doing that anyway.
Could yours be related to that???
Edit: but as no states have changed in the first template, does it skip that and jump to a simple evaluation in the second???
The pitchfork emporium hasn’t been doing very much business lately. Have to drum up sales somehow.
Yes a relatively quiet release.
I also have friendly_name_templates in some cases.
I’m waiting to see if they update correctly as there was one entity not included in the value template that I used to use for additional updates.
From this:
```yaml
bom_forecast_1:
  entity_id:
    - sensor.bom_hobart_max_temp_c_1
    - sensor.bom_hobart_min_temp_c_1
    - sensor.bom_hobart_chance_of_rain_1
    - sensor.bom_hobart_icon_1 ### <------ not in the value template, only the picture template
  friendly_name_template: >
    {%- set date = as_timestamp(now()) + (1 * 86400) -%}
    {{ date | timestamp_custom('Tomorrow (%-d/%-m)') }}
  value_template: >
    {{ states('sensor.bom_hobart_max_temp_c_1') | round(0) }}°/{{ states('sensor.bom_hobart_min_temp_c_1') | round(0) }}°/{{ states('sensor.bom_hobart_chance_of_rain_1') | round(0) }}%
  entity_picture_template: >-
    {{ '/local/icons/bom_icons/' ~ states('sensor.bom_hobart_icon_1') ~ '.png' }}
```
To this:
```yaml
bom_forecast_1:
  friendly_name_template: >
    {%- set date = as_timestamp(now()) + (1 * 86400) -%}
    {{ date | timestamp_custom('Tomorrow (%-d/%-m)') }}
  value_template: >
    {{ states('sensor.bom_hobart_max_temp_c_1') | round(0) }}°/{{ states('sensor.bom_hobart_min_temp_c_1') | round(0) }}°/{{ states('sensor.bom_hobart_chance_of_rain_1') | round(0) }}%
  entity_picture_template: >-
    {{ '/local/icons/bom_icons/' ~ states('sensor.bom_hobart_icon_1') ~ '.png' }}
```
Fact is that using a python_script to produce an ‘unavailability’ sensor has always been possible. However, it wasn’t needed because it was achievable with a Template Sensor.
Suggestions to now use a python_script, instead of a Template Sensor, imply the Template integration has lost functionality it used to have when entity_id was available.
Whereas an ‘unavailability’ sensor was previously achievable, in an efficient manner, simply by adding entity_id: sensor.time, now the equivalent efficiency is only available via a python_script.
It appears we have gained functionality on one front (superior automatic entity identification) but lost on another (inability to control entity identification).
I have a bunch of counter templates to track the number of different types of devices. They used to use entity_id to limit updates to once a minute.
Now, with the changes in 0.115, I believe they are evaluated on every state change. For what it’s worth, for simple counters, removing the `| list` that I see being used quite often gets you a 10x speed improvement in the time it takes to execute them.
```
{% set a = as_timestamp(now()) %}
{{ states.sensor | length }}
{{ states.zwave | length }}
Time: {{ as_timestamp(now()) - a }}
```
The below runs 10x slower due to the `list`, which doesn’t change the results:
```
{% set a = as_timestamp(now()) %}
{{ states.sensor | list | length }}
{{ states.zwave | list | length }}
Time: {{ as_timestamp(now()) - a }}
```
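A loose Python analogy for that `| list` overhead (an assumption about why it is slower, not a trace of Jinja internals): an object that already knows its own size answers `len()` immediately, while `list(...)` must first materialize every item before it can be counted.

```python
import timeit

# range() stands in for a collection that supports len() directly (O(1)).
data = range(100_000)

direct = timeit.timeit(lambda: len(data), number=100)          # len() only
via_list = timeit.timeit(lambda: len(list(data)), number=100)  # materialize, then len()

print("direct: {:.6f}s  via list: {:.6f}s".format(direct, via_list))
```

The result is the same number either way; only the work done to get it differs, which mirrors the two templates above.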
My point is that I think a lot of people have templated a lot of things and blindly copy/pasted samples from this forum which perhaps are not the most optimal or best-performing ways to accomplish certain tasks.
It pays to use the template editor to test results, to check what the template is actually listening to, and to use some simple way to benchmark the time it takes.
Perhaps we need some way to indicate to users that they have such ineffective templates… a template optimizer, so to speak.
And/or possibly the fact that these templates are so popular means that they should be exposed as ‘normal’ sensors by the core without the need for templates, python scripts or whatever.
Although I agree there are sub-optimal ways of composing templates, the inclusion of `list` in your example never mattered in previous releases.
Why? Because your template wasn’t evaluated as frequently as it is now.
Previously, your template was evaluated every minute, so whatever ‘extra time’ was involved in processing `list` was hardly noticeable.
Now your template is evaluated every time any sensor or zwave device changes state. That’s far more evaluations, so the aggregate of the ‘extra time’ becomes noticeable.
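Rough back-of-the-envelope arithmetic for that aggregation effect (every number below is an illustrative assumption, not a measurement from this thread):

```python
# Assumed extra cost of the `| list` step per template evaluation, in ms.
extra_ms_per_eval = 5

# Previously: triggered once a minute via sensor.time.
evals_per_min_before = 1
# Now: assumed combined rate of sensor/zwave state changes per minute.
evals_per_min_after = 300

overhead_before = extra_ms_per_eval * evals_per_min_before  # ms of overhead per minute
overhead_after = extra_ms_per_eval * evals_per_min_after

print("before: {} ms/min, after: {} ms/min".format(overhead_before, overhead_after))
```

The per-evaluation cost never changed; only the multiplier did, which is why a previously invisible inefficiency suddenly shows up in CPU graphs.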
I believe the deprecation of entity_id has had far greater implications than originally anticipated. My first post suggests it’s simply a matter of including sensor.time in the template. However, in practice, there’s more to consider if the template relies on wholesale inclusions of all domains (i.e. using states or its derivatives).
I still believe all of this new complexity would be eliminated by the re-instatement of entity_id.