You are getting 54 years because Unix timestamps start at Jan 1 1970, and that is 54 years ago. The reason you are getting that is that states('sensor.mt610_uptime') is returning a value as_timestamp can't parse, so the as_timestamp function falls back to the default you gave it, which is zero (aka Jan 1 1970).
A good practice is to avoid default values in cases where an error would serve you better, because the error tells you there is something to correct. In this case, an error would have been much more helpful than a confusing and incorrect result.
{% set uptime = (now().timestamp() - as_timestamp(states('sensor.mt610_uptime'))) // 100 %}
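For illustration, pasting something like the following into Developer Tools → Template shows the difference (a minimal sketch; the explicit default of 0 mirrors the situation described above and is not copied from the original config):

{# With a default, a state that can't be parsed as a datetime silently becomes 0, i.e. Jan 1 1970: #}
{{ as_timestamp(states('sensor.mt610_uptime'), 0) }}

{# Without a default, the same bad state raises a template error in current Home Assistant versions, which points you at the real problem instead of hiding it: #}
{{ as_timestamp(states('sensor.mt610_uptime')) }}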
@mekaneck I am not getting an error state. The count updates in Developer Tools, and when I check the MIB-2 sysUpTime I get the following… Name/OID: sysUpTime.0; Value (TimeTicks): 1235 hours 54 minutes 32.44 seconds (444927244), which also correlates with the system uptime as reported by the device… not 54 years.
It appears that the division is just not being applied, yet if I change the divisor in the Template tool the result does react. I have never worked with raw SNMP uptime and conversions before, since our enterprise SNMP monitoring package at work handles the conversion; normally when writing a probe for it I end up doing temperature conversions, pressure conversions and such. To me, the above indicates that there must be a standard conversion process, since MIB-2 is a fairly generic MIB.
If your sensor can go to unavailable or unknown and you want to handle that gracefully, you can use an availability template with {{ has_value('sensor.mt610_uptime') }}.
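In a template sensor that could look something like this (a minimal sketch using the modern template: format; the name, unit, and state template are placeholders, not from the thread):

template:
  - sensor:
      - name: "MT610 Uptime Seconds"
        unit_of_measurement: "s"
        # Marks the sensor unavailable when the source has no usable value
        availability: "{{ has_value('sensor.mt610_uptime') }}"
        # sysUpTime is reported in hundredths of a second
        state: "{{ states('sensor.mt610_uptime') | int // 100 }}"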
So, I just skimmed back in this thread to see what exactly you’re trying to do. Sounds like you have a sensor that reports uptime in 100ths of seconds, and you want to convert that to minutes, hours, days, years, right?
The best way to do uptime sensors is to have them report a fixed timestamp for when the device turned on. The front end will display past datetimes in a nice-to-read format (e.g. “last week”), or you can use any number of cards or templatable cards to format it exactly how you want. The reason to do it this way is so that the backend isn't calculating and recording a new sensor state every update interval (30 seconds for most core polling integrations), which just clogs up storage with useless data and keeps recalculating even when you're not looking at it.
For your SNMP sensor, since it supports templates as shown in the docs, you could calculate the start time by subtracting the reported uptime from now():
device_class: timestamp
value_template: >
  {{ ((now().timestamp() - value | int / 100) | round(0)) | as_datetime }}
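Put together with the rest of the SNMP sensor configuration, that might look roughly like this (a sketch only; the host, community, and name are assumptions, and 1.3.6.1.2.1.1.3.0 is the standard MIB-2 sysUpTime OID):

sensor:
  - platform: snmp
    name: "MT610 Last Boot"
    host: 192.168.1.10          # assumed device address
    community: public           # assumed read-only community
    baseoid: 1.3.6.1.2.1.1.3.0  # MIB-2 sysUpTime, in hundredths of a second
    device_class: timestamp
    value_template: >
      {{ ((now().timestamp() - value | int / 100) | round(0)) | as_datetime }}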
For the front end, if you don't like the default display format (or if you really want the template code for the backend), you could use the following in a template card, modified from what @dbrunt provided:
{% set uptime = value | int // 100 %}
{% set years = uptime // 31536000 %}
{% set days = uptime % 31536000 // 86400 %}
{% set hours = uptime % 86400 // 3600 %}
{% set minutes = uptime % 3600 // 60 %}
{{ '%dy ' % years if years else '' }}{{ '%dd ' % days if days else '' }}{{ '%dh ' % hours if hours else '' }}{{ '%dm' % minutes }}
Note that the word value may be replaced with something else depending on the card you use.
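For example, in a core Markdown card (which accepts Jinja templates in its content) it could look like this, with value swapped out for a states() call; the sensor name here is an assumption:

type: markdown
content: |
  {% set uptime = states('sensor.mt610_uptime') | int // 100 %}
  {% set years = uptime // 31536000 %}
  {% set days = uptime % 31536000 // 86400 %}
  {% set hours = uptime % 86400 // 3600 %}
  {% set minutes = uptime % 3600 // 60 %}
  Uptime: {{ '%dy ' % years if years else '' }}{{ '%dd ' % days if days else '' }}{{ '%dh ' % hours if hours else '' }}{{ '%dm' % minutes }}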
If you are certain you don't have any syntax errors in your template code, the issue is most likely that the sensor.eero_uptime state value is not something that can be converted to an integer (whole number). If that's the case, sensor.eero_router_uptime will end up with a value of unknown.
Make sure that the sensor.eero_uptime value does not contain text characters. It should just be a number like 871313, which would be converted to 1 week, 3 days, 2 hours, 1 minute, 53 seconds.
The only other way this can happen (that I know of) is if sensor.eero_uptime isn’t providing a value at all.
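A quick way to check is to paste something like this into Developer Tools → Template (the sensor name is taken from the posts above):

Raw state: {{ states('sensor.eero_uptime') }}
Is a number: {{ is_number(states('sensor.eero_uptime')) }}
As integer (with -1 fallback): {{ states('sensor.eero_uptime') | int(-1) }}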
In your yaml file you have a space between states and ('sensor.eero_uptime') and there shouldn’t be a space there.
But in the future, please provide some more detail beyond ‘it isn’t working’. There should be errors in your logs, which would be useful, along with a description of what you expected versus what actually happened.
Thank you for the suggestion to check the logs (and everything else)! I am still learning HA and haven’t spent much time exploring the logs. But it showed me that I had a duplicate entity (i.e. I had sensor.eero_router_uptime and sensor.eero_router_uptime_2). Not exactly sure how that happened, but no doubt due to my tinkering. I deleted the first one and when I reloaded my configuration yaml file it all started working.
I would suggest adding a unique_id: to each of your sensors’ configurations in the future. It is just what it sounds like: type in anything that is unique (i.e. not used by any other unique_id elsewhere in your config). Normally people use a UUID (Google it and you’ll find sites that generate them).
If you don’t have a unique_id, then any time you modify that sensor in yaml and then reload the yaml, another sensor (with _2, _3, etc.) will be created, because the system has no idea if you modified the original sensor or deleted it and made a new one. So it assumes the latter.
- sensor:
    - name: "Eero Router Uptime"
      unique_id: 0512b301-7547-46b0-aa1a-531ae3cd7f0a
      state: >
        {% set uptime = states('sensor.eero_uptime') | int %}