That’s assuming I’ve updated…which I haven’t…I’m not brave enough to update as soon as the update is available.
I always wait for several days to let the dust settle.
I did this as well in the past, but since around 0.110 or so I’ve even started using the beta versions, as they have become so stable lately in my experience. And if it fails, just downgrade.
Not really recommended hardware, but whatever floats your boat. The Python deprecation will probably land in the next release, so it won’t take long. You’ll also hit the same issue in a year when they move to 3.9, and again a year after that. If you continue using the Turris router, I suggest biting the bullet and updating Python now, then documenting the procedure or writing a script to prepare yourself for next year.
Same here. What I do is watch the GitHub issues and the beta chat channel during the beta, and after release I wait a few days to see if the integrations I use get any new issues. If I see a milestone get created, I normally wait for that to be released before updating. If there is a frontend milestone, I always wait on those.
I also have the HACS issue. Has this issue been solved?
This is the error I get with HACS:
Logger: homeassistant.setup
Source: core.py:174
First occurred: 8:09:19 PM (1 occurrences)
Last logged: 8:09:19 PM
Error during setup of component hacs
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/setup.py", line 213, in _async_setup_component
    result = await task
  File "/config/custom_components/hacs/__init__.py", line 24, in async_setup
    return await hacs_yaml_setup(hass, config)
  File "/config/custom_components/hacs/operational/setup.py", line 83, in async_setup
    await async_startup_wrapper_for_yaml()
  File "/config/custom_components/hacs/operational/setup.py", line 120, in async_startup_wrapper_for_yaml
    async_call_later(hacs.hass, 900, async_startup_wrapper_for_yaml())
  File "/usr/src/homeassistant/homeassistant/helpers/event.py", line 1179, in async_call_later
    return async_track_point_in_utc_time(
  File "/usr/src/homeassistant/homeassistant/helpers/event.py", line 1133, in async_track_point_in_utc_time
    job = action if isinstance(action, HassJob) else HassJob(action)
  File "/usr/src/homeassistant/homeassistant/core.py", line 174, in __init__
    raise ValueError("Coroutine not allowed to be passed to HassJob")
ValueError: Coroutine not allowed to be passed to HassJob
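For anyone curious, the root cause is visible in the traceback: async_call_later is handed async_startup_wrapper_for_yaml() with parentheses, so a coroutine object is passed where a callable is expected. A minimal sketch of the same failure mode (schedule here is a hypothetical stand-in for async_call_later/HassJob, not the real API):

```python
import asyncio

async def startup():
    """Stand-in for HACS's async_startup_wrapper_for_yaml."""
    return "started"

def schedule(action):
    """Simplified stand-in for HassJob: rejects coroutine objects."""
    if asyncio.iscoroutine(action):
        raise ValueError("Coroutine not allowed to be passed to HassJob")
    return action

coro = startup()      # calling the async function creates a coroutine object
try:
    schedule(coro)    # buggy pattern from the traceback
except ValueError as err:
    print(err)        # -> Coroutine not allowed to be passed to HassJob
finally:
    coro.close()      # avoid the "never awaited" warning

schedule(startup)     # fixed pattern: pass the callable itself
```

Dropping the parentheses so the callable itself is passed is the usual fix for this class of error.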
I didn’t upgrade yet. I am using HACS too, and I’ve just seen other people complaining about that same error.
Yes, I need to remember:
NEVER, BUT NEVER, upgrade to a .0 version
Ahhh ok I see. There’s really nothing that says you need to add blueprints: into configuration.yaml. I didn’t have to but I also have “default_config:” in mine and after updating blueprints appeared.
I don’t blame you for waiting. Usually I take a look at the changes, then at the breaking changes, to see if there’s a lot that might affect me. If it’s not much, I shut down my HA VM, take a snapshot, bring it back up again and upgrade. I also take a backup, just in case, because you can never be too careful.
Pi 4 8GB, booting from SSD. Updated yesterday to HassOS 5.8 using “ha os update --version=5.8” over SSH. No problems. Today updated to 2020.12.0. No problems.
Indeed, this is not a preferred way now, but it was back when I started with HA, and I like one universal system rather than several dedicated boxes.
I know that I need to solve the old Python issue, but I wanted to know whether now is the time, or whether it could wait until the Christmas break.
I had to look that up. Nice looking router.
If you are running stable debian, why not run HA in docker? (Having said that I am not 100% sure what architecture that machine has).
LXC is already a container and is directly supported by OpenWrt.
Updated to 2020.12.0 with no issues.
It’s just not supported by HA.
Trying out the area configuration for entities. I have 3 template entities (one sensor, one binary_sensor and one cover), and each has a unique_id. I edited each of these entities in the UI to set the area to an appropriate value. I then opened the Overview (default Lovelace page) and the UI didn’t consider the area when grouping elements on the screen.
Could it be that some things (UI) that use area were not upgraded to look at the entity’s area first, and only fall back to the device’s area if the entity area is not defined?
@KennethLavrsen you can exclude entities in the recorder settings. Personally though I’m going to try out template-entity-row. Either option should alleviate your concern regarding logging unwanted data.
I personally prefer to show cert expiry as a gauge card
Marius and @Holdestmade,
The reason it won’t update is that states.sensor.uptime.last_changed never changes.
It is set once when the system reloads and records the UTC time of that event.
The template with now() in it relies on the concession made to Taras about getting template sensors to update on a time basis.
What I’m using is:
{% set secs = as_timestamp(states.sensor.time.last_updated) - as_timestamp(states('sensor.uptime')) %}
{% set days = (secs / (24 * 60 * 60)) | int %}
{% set plural = ' days ' if days > 1 else ' day ' %}
{% set hrmn = secs | timestamp_custom('%H:%M', false) %}
{{ days ~ plural ~ hrmn if days > 0 else hrmn }}
Which also makes the result human readable (and doesn’t use a ‘workaround’).
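For readers who prefer plain Python, the template’s arithmetic can be mirrored like this (human_uptime is a hypothetical helper name; the "%H:%M" part reproduces what timestamp_custom('%H:%M', false) does to the remaining seconds):

```python
def human_uptime(secs: float) -> str:
    """Mirror of the Jinja template above: seconds -> 'N days HH:MM'."""
    days = int(secs / (24 * 60 * 60))
    plural = " days " if days > 1 else " day "
    # Hour-of-day and minute-of-hour from the elapsed seconds, as
    # timestamp_custom would render them with a UTC epoch.
    hrmn = "%02d:%02d" % (int(secs // 3600) % 24, int(secs // 60) % 60)
    return f"{days}{plural}{hrmn}" if days > 0 else hrmn

print(human_uptime(3 * 86400 + 5 * 3600 + 42 * 60))  # -> 3 days 05:42
```

Note it keeps the same behaviour as the template: exactly one day prints “day”, and anything under a day prints only HH:MM.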
Had to tweak my uptime sensor but so far so good, I’m liking this update.
I’m glad that absolute times are being used instead of relative ones. Just be sure the timezone and time are correct and all should be good.