I’ve been using HA 103 for over a year without upgrades, and my home wasn’t falling apart.
In fact, the reason was that I run HA on an Ubuntu LTS server, along with other services, and its Python was outdated.
If you want stability (and don’t want to spend weekends fixing your home automation), you shouldn’t rely on cloud-based devices. The worst can happen, like the service being shut down. Firmware or API breaking changes aren’t even the worst of it!
Devices that do not rely on cloud services are my first requirement.
I have one TP-Link HS110. It works locally. I haven’t upgraded its firmware since I got it, which must be two years ago. “Never touch a working system.”
Of course, security issues can happen. Most IoT devices are weak, so I don’t trust the firmware updates either. All my devices are on a dedicated VLAN/SSID that only HA can connect to, and only on the ports each device needs.
It’s complex to set up, but once it’s running, you can forget about it. Or you can choose to upgrade frequently and play with the newest add-ons!
Nice, that upgrade broke my homeassistant again :/ It feels like I spend a day after every upgrade. This time I had to give up after two days of trying to revert and narrowing the config down to a minimal state… and even after several hours I have no idea how to debug it anymore. The log shows only the following hint, with no clear indication of what’s actually wrong:
2020-12-20 22:21:00 ERROR (MainThread) [homeassistant.bootstrap] Error setting up integration frontend - received exception
Traceback (most recent call last):
  File "/srv/homeassistant/lib/python3.7/site-packages/homeassistant/setup.py", line 64, in async_setup_component
    return await task  # type: ignore
  File "/srv/homeassistant/lib/python3.7/site-packages/homeassistant/setup.py", line 158, in _async_setup_component
    await async_process_deps_reqs(hass, config, integration)
  File "/srv/homeassistant/lib/python3.7/site-packages/homeassistant/setup.py", line 344, in async_process_deps_reqs
    hass, integration.domain
  File "/srv/homeassistant/lib/python3.7/site-packages/homeassistant/requirements.py", line 75, in async_get_integration_with_requirements
    hass, integration.domain, integration.requirements
  File "/srv/homeassistant/lib/python3.7/site-packages/homeassistant/requirements.py", line 121, in async_process_requirements
    if pkg_util.is_installed(req):
  File "/srv/homeassistant/lib/python3.7/site-packages/homeassistant/util/package.py", line 54, in is_installed
    return version(req.project_name) in req
  File "/srv/homeassistant/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3078, in __contains__
    return self.specifier.contains(item, prereleases=True)
  File "/srv/homeassistant/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/specifiers.py", line 703, in contains
    item = parse(item)
  File "/srv/homeassistant/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 31, in parse
    return Version(version)
  File "/srv/homeassistant/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 200, in __init__
    match = self._regex.search(version)
TypeError: expected string or bytes-like object
Everything else seems to work; only the frontend is not coming up (or rather, it responds with a 404: Not Found).
Any ideas highly appreciated! Otherwise I’ll probably have to start with a clean install…
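For what it’s worth, the final TypeError can be reproduced outside of Home Assistant: it’s what pkg_resources raises when the installed-version lookup returns None (e.g. because a package’s metadata in the venv is missing or corrupt) and that None gets tested against a requirement specifier. A minimal sketch, where the `aiohttp>=3.7.1` requirement is just an illustrative example, not something from the log:

```python
from pkg_resources import Requirement

req = Requirement.parse("aiohttp>=3.7.1")

# homeassistant.util.package.is_installed effectively does
# `version(req.project_name) in req`. If the version lookup returns
# None (broken or missing package metadata in the venv), the
# membership test raises the same TypeError as in the traceback:
try:
    None in req
except TypeError as err:
    print(err)  # expected string or bytes-like object
```

So the frontend package (or one of its requirements) likely has damaged metadata in the virtualenv; reinstalling the affected package into the venv may be a less drastic fix than a clean install.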
Can someone please give a little more clarity on the new number integration? The docs only talk about using the service to set the value, but how do we create a number entity? I thought maybe this was supposed to be an integration where we can keep any number, like an input_number but without having to worry about setting a min and max.
I’m keen to know how to use this as I see it as a way to save data for future reference, but just need to know how to create the entity.
For example, currently I save the daily rainfall value to an input_number, so that when the rainfall value is reset for the new day, the input_number is a record of yesterday’s rain. I was thinking this could now be done simply with number:
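For reference, the input_number approach described above looks roughly like this. The entity names, the trigger time, and the min/max bounds are made-up examples, not from the post:

```yaml
input_number:
  rainfall_yesterday:
    name: "Yesterday's rainfall"
    min: 0
    max: 500
    unit_of_measurement: "mm"

automation:
  - alias: "Record yesterday's rainfall before the daily reset"
    trigger:
      - platform: time
        at: "23:59:50"
    action:
      # Copy the daily total into the helper just before it resets
      - service: input_number.set_value
        entity_id: input_number.rainfall_yesterday
        data:
          value: "{{ states('sensor.daily_rainfall') | float }}"
```

The min/max requirement on input_number is exactly the annoyance mentioned above; a number entity without bounds would remove it.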
No, but before I started editing storage files, on a whim I decided to just try re-downloading the latest version of HACS from GitHub, and that seems to have solved the “invalid config” issue.
I restarted again (…which I had done several times before) and once HA finally came back up, HACS still thought I was on 1.8.x (since that was the last version stored in the registry) and asked me to update. After I did that, everything seems to be back up and running OK, and it shows as up to date.
What doesn’t make sense is that literally the only thing I did was hit the “recreate” button. And when HA restarted on the latest version, HACS was dead. I didn’t do anything with HACS.
Luckily I have the experience to be able to fix this kind of thing. There’s no way the “target audience” would have any clue what to do.
Go to Supervisor, then System. Under Host System, next to IP Address, click Change, then expand IPv4 and change the IP from DHCP to static, and under DNS server add Google’s DNS: 8.8.8.8. Reboot HA, then update the Supervisor, then update Home Assistant to version 2020.12.1.
I keep getting this error message each time HA starts (HA Core 2020.12.1):
Logger: homeassistant
Source: runner.py:99
First occurred: 4:59:40 PM (1 occurrences)
Last logged: 4:59:40 PM
Error doing job: Task was destroyed but it is pending!
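“Task was destroyed but it is pending!” is a generic Python asyncio message: some task was still running when the event loop shut down, and the task object was garbage-collected while pending. It usually points at an integration not cleaning up on shutdown rather than at your config. A minimal standalone repro (nothing here is HA-specific):

```python
import asyncio
import gc

async def worker():
    await asyncio.sleep(10)  # long-running; still pending at shutdown

messages = []
loop = asyncio.new_event_loop()
# Capture what asyncio would normally write to the log.
loop.set_exception_handler(lambda lp, ctx: messages.append(ctx["message"]))

task = loop.create_task(worker())
loop.run_until_complete(asyncio.sleep(0))  # let the task start, then stop
loop.close()

del task       # the still-pending Task is garbage-collected here...
gc.collect()
print(messages)  # ['Task was destroyed but it is pending!']
```

In HA’s case the offender is whatever created the task; enabling debug logging around shutdown may show which integration it belongs to.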
This update broke my config (RPi3 with the database on a MariaDB server).
I updated to 2020.12 two days ago and thought there was no problem: no errors in the logs, and Lovelace was restarting fine…
But the next day, I was unable to connect to the web GUI. After a hard reset, Lovelace worked for about 5 minutes, then nothing: only error messages and panels refusing to open (so no access to the Supervisor or server config).
I was able to roll back to 118.5 (the SSH connection worked for 5 minutes after a restart), but it solved nothing.
So I had to do a fresh install and then use a copy of a backup I had on my desktop PC.
Perhaps it’s related to the database? I had a huge one, and I saw there was a schema update.
I checked the CPU usage and space on SD card, but they seemed to be OK.
Next time, please say it more clearly when you make a “breaking change” to core components. I appreciate your enthusiasm, but I’d prefer a concise summary of the major breaking changes to a speech about the new app name (yes, you’re right, we can’t call it 1.0).
The good part is that restoring settings works!
The lesson: only run updates when the sun is out, if your HASS manages your heaters.
After a long back-and-forth reinstalling with a new SD card and the 3B 64-bit 4.20 image, little by little everything was restored.
It was not possible to restore the snapshot directly, i.e. via Samba.
The database was broken, so I deleted it.
Now almost everything is back to where it was before.
Unfortunately, SWAP now fills to 100% very quickly, then the CPU load spikes, and that’s it: no more response.
I had a total of 12 add-ons, nine of which were running before the update.
Now I can only run four at a time to keep SWAP and CPU from spiking again.
Google Drive Backup has to stay off, because this add-on starts several other add-ons when it starts, which again increases the load and SWAP.
Everything runs on an RPi 3B+, so should I get an RPi 4?
I just found out that you need to stop the DB prior to creating the snapshot. I use the Google Drive snapshot add-on, and it is very easy to set this up. Once I did this, my restore worked with no issues.
I don’t really have any special insight into this process but personally I wouldn’t expect a release until this OOO message gets taken down from the repo.
It would be nice if you could add custom attributes to specific entities (perhaps with useful defaults) that the recorder (and related components, like influxdb) could consult when deciding whether to save an entity’s state when it changes. Right now, entity-specific configuration is spread all around. It’s more of an issue these days, when entities spring into existence without you editing a YAML file. You have to discover the new entities that integrations create and then remember to update the recorder settings, and in my case the influxdb settings too. Usually I miss them until it’s too late and a series has already been created in influxdb…
That way, you could decide that the uptime sensor doesn’t get stored in the recorder database by default, and you’d end up with a sensor that more directly measures what’s of interest.
It sounds like you want an include filter instead of an exclude one in recorder and influxdb.
If you exclude the entities you don’t want then all new entities are included by default unless you specifically exclude them as well. If you include the entities you do want then all new entities are excluded by default unless specifically included.
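A sketch of that include approach; the domains and entity IDs here are made-up examples, and the influxdb section assumes your usual connection settings alongside it:

```yaml
# Include filter: only what is listed gets recorded, so newly
# discovered entities are excluded by default.
recorder:
  include:
    domains:
      - light
      - climate
    entities:
      - sensor.daily_rainfall

influxdb:
  # ...plus your normal influxdb connection settings...
  include:
    entities:
      - sensor.daily_rainfall
```

The trade-off is the mirror image of the exclude approach: with include filters, you have to remember to add each new entity you *do* want, but nothing sneaks into the database unnoticed.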