I think this type of issue is the core of the problem. Running a second instance with alpha or beta will cover a lot of hardware and custom components, but it won’t catch items with hardware running locally on a Pi, for example (unless that hardware is duplicated as well, and in the case of zwave that won’t work).
That’s why I run the alpha on production. Too much work to get zwave working in 2 HAs.
Edit: now that I think about it, you should be able to connect to your production instance’s zwave. But you have to run zwavejs2mqtt.
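The piece that makes that work, as I understand it, is that zwavejs2mqtt exposes the Z-Wave JS websocket server (port 3000 by default), and the Z-Wave JS integration in each HA instance only needs that URL when you set it up. Something like this, with a made-up address:

ws://192.168.1.50:3000

Both instances should be able to connect at once, though I’d double-check that before relying on it.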
That’s what I’m struggling with. The family can’t handle outages but I can. So ideally the family would use the stable instance and I’d use the alpha.
Based on this, I’m not sure beta (or alpha) testing is for anyone truly relying on their HA for critical functions. The primary instance must be stable, and the secondary instance would be the test environment. If that were made possible somehow, I think there would be many more beta (or alpha) testers.
Regarding a single command to roll back: hindsight is 20/20. I have a backup/restore process in place, but at the time it required core to be functional enough to use the UI. I’m planning to remedy that now, of course, but I don’t think that rules out my process still not being “good enough”. So the secondary test environment really is what I (and probably others?) require to take the plunge. This is speculation of course, but if y’all are hurting for beta testers, it does stand to reason that staying stable is too hard, so people don’t take the risk.
I may have a unique setup, but in my amcrest config I have each camera listed twice: the first is the high resolution stream and the second is the low resolution stream. Since upgrading to 2021.9.0 (also tried 2021.9.1) the second camera stream will not load. I get an error in the logs that says “Platform amcrest does not generate unique IDs. ID AMC007UZ192PDH39WU already exists - ignoring camera.front_door”. I have had this config for a long time and this is the first time I have encountered this issue. I don’t know if this is a product of the Amcrest version moving to 1.8.0 or possibly from the “Custom integrations: Cameras” breaking change. Does anyone else have a similar config and/or a similar error since 2021.9.0?
Below is a sample of my amcrest config.
- host: 192.168.1.61
  port: 80
  name: Front Door
  username: !secret CamUser
  password: !secret CamPW
  stream_source: rtsp
  binary_sensors:
    - motion_detected
- host: 192.168.1.61
  port: 80
  name: Front Door LR
  username: !secret CamUser
  password: !secret CamPW
  stream_source: rtsp
  resolution: low
What gives?
entity: binary_sensor.updater
state: off
release_notes: https://www.home-assistant.io/blog/2021/09/01/release-20219/
newest_version: 2021.8.8
friendly_name: Updater
No. I always run the beta, and I did get stuck in the boot loop in b2 this time, but I just rolled back… total grief: about 5 mins. And then I was able to assist the devs in fixing the problem for everyone. This is my way of contributing to HA as a non-dev. It’s for sure more productive than bitching about testing being someone else’s job.

I also run alpha on a test dev instance, but it doesn’t have my main config on there. It’s not a burden running beta and dealing with any breakages. You also have the direct attention and help of the devs in identifying and ironing out any issues. It also seems that the people bitching are those who never lift a finger and seem to be proud they wait till a .x release or wait a week… just contribute yourself. That’s open source.
I still don’t understand the messaging here. So a bug made it to production and it was bad enough to pull the release. And the response is “well, users didn’t test well enough, it’s clearly their fault.”
That’s not how software development works, open source or otherwise. Developers have post-mortems for non-trivial incidents to identify processes and better testing strategies to avoid regression in the future.
The people “bitching” are just reporting the bug. I saw maybe one user who was heated in their report of the bug, but everyone else I saw was literally just expressing that they too hit the bug and providing what information they could to help narrow down the issue.
I’m glad your experience with some unrelated regression in the beta was OK, but as I mentioned, apparently my backup/restore strategy wasn’t “good enough” to deal with this particular regression of the frontend being totally inaccessible. But I suppose that’s entirely my fault, and I should just know all the unknown unknowns about what could possibly happen, and all the other users who hit the same issue and felt the same pain are just bitching for no good reason, since obviously they were all only out for 5 minutes. Clearly the issue was bad enough to pull the release, so I think your victim-blaming is unfounded here.
I have zwave working on 2 HAs…
Care to elaborate? Is it using HA OS? Are you using the Z-Wave JS addon or “manually” running it on the device with the hardware? If you’re using HA OS and the addon, I’m extremely interested, as that’s probably the only thing blocking me from running a secondary test instance.
I run my zwavejs on a Pi that’s separate from my HA instances. It runs zwavejs2mqtt in 2 docker containers, one for my prod zwave network, and the other is for my test zwave network.
I don’t like my zwave coupled to my HA instances; keeping it separate means I can put it in a small docker swarm and let it move around as I do OS upgrades.
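As a rough sketch of that layout (device paths and host ports here are made up, so adjust them to your own sticks), the compose file would look something like:

version: "3"
services:
  zwave-prod:
    image: zwavejs/zwavejs2mqtt:latest
    restart: unless-stopped
    devices:
      # made-up path to the production Z-Wave stick
      - /dev/serial/by-id/usb-prod-stick-if00:/dev/zwave
    ports:
      - "8091:8091"   # prod web UI
      - "3000:3000"   # prod Z-Wave JS websocket server
    volumes:
      - ./prod-store:/usr/src/app/store
  zwave-test:
    image: zwavejs/zwavejs2mqtt:latest
    restart: unless-stopped
    devices:
      # made-up path to the test Z-Wave stick
      - /dev/serial/by-id/usb-test-stick-if00:/dev/zwave
    ports:
      - "8092:8091"   # test web UI
      - "3001:3000"   # test Z-Wave JS websocket server
    volumes:
      - ./test-store:/usr/src/app/store

Each HA instance then points its Z-Wave JS integration at the matching ws:// port. (Worth noting that plain docker swarm doesn’t pass devices: through, so a real swarm setup presumably pins these services to the node with the sticks.)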
Is anyone able to get the Number or Select templates working?
I tried to copy them into my template YAML to test them out but configuration validation says they aren’t correct.
Does anyone have another working example I could look at? Or is config validation not aware of the new template types?
On mobile, but for examples, the config should start like this:
template:
  - number:
      - name: …
In configuration.yaml I have:

template: !include configs/templates.yaml

Then in templates.yaml:
# Working
- sensor:
    - name:
      state:

# Not working
- number:
    - name:
      state:
Try:

sensor:
  name:
  state:
number:
  name:
  state:
It might be a while coming to Home Assistant due to fragmentation, but it would be useful. I have a “smart” water meter (Melb, AU) and this is what I get from my supplier:
2021.9.1 is out, so you should be able to update now.
Remove the dash in front of number but keep it on sensor
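If I’m reading that right, sensor and number end up under the same list item in templates.yaml, something like this (the entity names and the set_value target are placeholders, not from the thread):

# configs/templates.yaml
- sensor:
    - name: "Example sensor"   # placeholder name
      state: '{{ states("sensor.source") }}'
  number:
    - name: "Example number"   # placeholder name
      state: '{{ states("input_number.source") | float }}'
      min: 0
      max: 100
      step: 1
      set_value:
        service: input_number.set_value
        target:
          entity_id: input_number.source
        data:
          value: "{{ value }}"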
Got it working. It turns out it didn’t like the double quotes nested inside the example template state lines.
state: "{{ (( state_attr("light.wled", "Speed") / 255) * 100) | round }}"
and
state: "{{ state_attr("light.wled", "effect") }}"
from the example code had to be changed to
state: '{{ (( state_attr("light.wled", "Speed") / 255) * 100) | round }}'
and
state: '{{ state_attr("light.wled", "effect") }}'
For it to be happy.
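For anyone else hitting this, the corrected entry ends up looking roughly like this; the set_value call is my reconstruction of the blog’s WLED example rather than a copy, so treat it as a sketch:

- number:
    - name: "WLED Speed"
      state: '{{ (( state_attr("light.wled", "Speed") / 255) * 100) | round }}'
      min: 0
      max: 100
      step: 1
      set_value:
        service: wled.effect
        target:
          entity_id: light.wled
        data:
          # scale the 0–100 slider value back to WLED's 0–255 range
          speed: '{{ (value / 100 * 255) | round }}'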
Was that in the blog? Probably should fix it if it was. Or was that in the docs?
It’s in the blog, you can see it at the top of this thread as well.