One huge request/idea, but wondering if there is any chance of the lead developers looking into adding some kind of optional visual programming system (VPS) / visual programming language (VPL), also known as diagrammatic programming, graphical programming, or block coding, as an advanced-flows option and alternative to simple flows in the automation editor, similar to Google Blockly (for example "Snap!", which is used in educational development platforms like Scratch), or Node-RED?
That is, a block-based, flow-style visual editor UI for automations and scripts using graphical blocks via a drag-and-drop frontend, included by default in Home Assistant's frontend interface.
As proof of concept, Blockly-style visual programming is already available in both openHAB and Domoticz (competing home automation platforms):
Please also consider adding an ESP32-C6 or ESP32-H2 to the same PCB as a co-processor, one that can be more or less dedicated as a Thread Border Router (for Thread-based Matter devices) or as a Serial-over-IP remote Zigbee Coordinator. This would follow the same concept as Espressif's ESP Thread Border Router / Zigbee Gateway reference hardware (which combines an ESP32-S3 with an ESP32-H2 for this purpose), and which is sold as both a Thread Border Router and a Zigbee Gateway development board.
Having an extra module with a Thread/Zigbee radio would allow users who own many of these devices to use a few as Thread Border Routers, one as a Zigbee Coordinator, and the rest as standby fail-over nodes to which the config can be restored, for redundancy and higher robustness. Depending on how many a user has, some could also be used as Bluetooth Proxies (though, as most already know, it is not a good idea to run multi-protocol on the same microcontroller/radio, so that is another excuse to buy many of them).
That second ESP32 would also add more available GPIO pins, so it could act as a multi-sensor (if given the ability to add temperature/humidity, mmWave presence, lumens/light sensors, etc.) without directly taking resources from the main ESP32 that runs the voice assistant.
I suggest using a similar concept with two ESP32s, as in the ESP Thread Border Router / Zigbee Gateway reference hardware, which combines an ESP32-S3 with an ESP32-H2 radio module:
I also have many variants of Google Home smart speakers and various Google Nest smart display models, so it would be very nice to get voice satellite models with similarly sized screens (one 7-inch and one 10-inch). It would be awesome if we could just have several drop-in replacement PCBs for the most common models.
A cool option would therefore be being able to buy a ready-made, updated custom replacement PCB variant of the Onju-Voice with an XMOS accelerator chip, for drop-in circuit-board swapping in older and newer hardware generations of the Google Nest Mini (second generation, with barrel-plug power supply) / Google Home Mini (first generation, with USB Micro power supply). Check out:
And the very much related request/discussion here:
Question about Assist. Having it all working locally is great, but tbh the hardware for it is not the best yet. Seeing that the Sonos integration is 33rd by usage (pretty high, considering there are preinstalled integrations such as Sun, Input boolean, Template…) means there are a lot of users with Sonos. I guess Sonos will not be keen on allowing more assistants to run on their speakers, but the question is: did the HA team at least reach out to them to check whether it's possible to have HA Assist on Sonos speakers?
Home Assistant is developing great! I'm really happy that it focuses more on aesthetics and such. I love the possibilities to set things up in the UI. However, I hope the path for manual editing doesn't get closed; especially for bulk tasks it is a real time saver!
My wish, besides the improvements: fewer breaking changes. I still sometimes get the impression that breaking things is not the last, but among the first options.
What I am wondering:
Who decides about such roadmaps? (Like I said, I like what's on display.)
Who decides (perhaps with final say) which features get implemented, and which don't?
I remember, e.g., a call for sharing our dashboards and wishes; though I couldn't participate, I really liked that.
But for other things, I missed that, or it didn't happen.
Wouldn't it be a good move to let the community decide more, e.g. whether they prefer focusing on voice input or on dashboard improvements?
And who can be contacted for general, really general, questions?
I have twice asked about the statistical data we can choose to share and never got a reply. Who can be contacted directly regarding such questions?
I don't get it (or maybe it's that I do get it?). I'll take "Okay Nabu" over "Okay Google" any day. I have my own wake word in mind, but I get that the back end has to be built first with consistent results… then you can integrate differing wake words (read: triggers). With the direction HA is going, it's only a matter of time before I can say "Hey C-3PO, what's up" and hear a breakdown of what the day looks like.
Hi, first off: thank you for writing this and putting effort into communication and clarity. Really appreciate it!
Question: can someone please explain why HA devs put so much effort into the development of ZHA when Zigbee2MQTT seems more capable? Most reviews favor Z2M over ZHA, at least those that I have read. Wouldn't it make sense to pool resources into only Z2M? I mean, the old Z-Wave implementation was dropped in favor of Z-Wave JS. You can always play the choice card, but since both are open source it feels unnecessary; what am I missing?
I just want to chime in that I'm also very much looking forward to the voice hardware. I can relate to much of what I'm reading about the choice of wake word; I really hope it will be easy to change or to choose another one.
Yes, same with Z-Wave JS. That started externally and grew past their own implementation, so in time their own implementation was dropped and Z-Wave JS became the official one. Right?
Whether the maintainer of the integration WANTS to be in core is step 1. HA cannot go and suck up people's projects… Besides, they don't have enough paid core devs to do everything already.
I'm fine with Z2M and Domo and all the other Zigbee integrations (I think there are at LEAST three now) being options.
I personally use Z2M myself and am fine with ZHA being the default choice.
OpenZWave made way for Z-Wave JS, as far as I understand, because Z-Wave JS was actively maintained and brought up to modern needs while OZW wasn't. There was a need…
Here you have two or more perfectly good options: one is well maintained in core and one is well maintained out of core.
I'll just come out and say it: the voice hardware will sell no matter what. This very much feels like a vocal minority. Nobody liked Google or Alexa or Siri at launch; this isn't any different. Sure, you can ask people and they will say they hate it. Hell, you still get people complaining about "hey" vs "ok" after all these years. After a few voice commands, that all goes away. Also, correct me if I'm wrong, but the team never said it's not going to happen, right? So what's the problem lol?
I think you're making incorrect assumptions. I both have some concerns about how the wake word decision was made, and will also buy several of the voice hardware units.
Having just committed my first blueprint (I'd made some in the past for myself only), I can agree with the point about making blueprints easier to use.
I think a major part of that is having some built-in way to package multiple elements into a blueprint. Some automations need to read and write data to helpers in order to circumvent the limited scope of variables within an automation, or you may want to split long code you use often into separate scripts that can be called from the main automation with certain parameters/fields.
A framework that allowed such things within the blueprint YAML, or a way to package the helpers and scripts together with the blueprint, would make blueprints easier to write and use, IMO.
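To illustrate the gap, here is a minimal sketch (all entity, helper, and script names are invented) of how a blueprint has to reference a helper and a script today; both must be created manually by the user before importing the blueprint, since there is no way to ship them inside it:

```yaml
blueprint:
  name: Motion counter with notify script (sketch)
  domain: automation
  input:
    motion_sensor:
      name: Motion sensor
      selector:
        entity:
          domain: binary_sensor
    counter_helper:
      # Hypothetical counter helper; the user must create it first
      name: Counter helper
      selector:
        entity:
          domain: counter
    notify_script:
      # Hypothetical script; must also already exist
      name: Notification script
      selector:
        entity:
          domain: script

trigger:
  - platform: state
    entity_id: !input motion_sensor
    to: "on"

action:
  # Write to the helper to share state across runs
  - service: counter.increment
    target:
      entity_id: !input counter_helper
  # Reuse logic that lives in a separate script
  - service: script.turn_on
    target:
      entity_id: !input notify_script
```

A packaging mechanism could bundle the counter and script definitions inside the blueprint itself instead of asking every user to recreate them by hand.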
I am incredibly happy that privacy and user segmentation will soon be in focus. I find this to be the only thing that's really missing in HA, which is such a great system. I love the work on sections. I think the integrations (not HA's fault) deserve more attention from developers, but it will come: HA is the bitcoin of IoT platforms (gold is about to be overtaken; the gold standard will be over).
Hi, great work this year!
In the automation builder UI, I'd love to be able to add conditions to a (group of) triggers right in the trigger section. Another way to say the same thing: group triggers inside the triggers section and associate conditions with them, while other triggers can have other conditions.
This is because I feel the "and if" section can only be used when there is a single consistent group of triggers, and cannot be used when there are several (groups of) triggers requiring various conditions. In that case, the "then do" section becomes a huge choose block associating triggers with their conditions, and the actual actions get buried in building-block subsections.
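For reference, the workaround being described looks roughly like this in YAML (entity names are invented): each trigger gets an `id`, and the action section pairs trigger IDs with their per-trigger conditions inside a choose block, which is what buries the actual actions:

```yaml
trigger:
  - platform: state
    entity_id: binary_sensor.front_door
    to: "on"
    id: door_opened
  - platform: state
    entity_id: binary_sensor.hallway_motion
    to: "on"
    id: motion

action:
  - choose:
      # The door trigger should only fire the lights after sunset
      - conditions:
          - condition: trigger
            id: door_opened
          - condition: sun
            after: sunset
        sequence:
          - service: light.turn_on
            target:
              entity_id: light.hallway
      # The motion trigger has its own, different condition
      - conditions:
          - condition: trigger
            id: motion
          - condition: numeric_state
            entity_id: sensor.hallway_illuminance
            below: 10
        sequence:
          - service: light.turn_on
            target:
              entity_id: light.living_room
```

Attaching each condition directly to its trigger group in the UI would flatten this structure considerably.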
Just going to pipe up again, on the issue of reliability.
Matter currently stops a running automation if there is a CHIP timeout, even though continue_on_error is enabled for that specific action.
This urgently needs to be fixed, as it has been ongoing for at least six months at this point.
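For context, this is the kind of action configuration that should keep the automation running after a failure but reportedly does not when the Matter (CHIP) call times out (entity names are invented):

```yaml
action:
  - service: light.turn_on
    target:
      entity_id: light.matter_bulb  # Matter device that may hit a CHIP timeout
    continue_on_error: true         # should let the automation carry on anyway
  # With continue_on_error set above, this step should still run
  - service: notify.persistent_notification
    data:
      message: "This should run even if the Matter action failed"
```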
Great stuff. SSO would be very useful, and would make it easier to manage users across the home network. I would love to use Authentik (OIDC) in my home network to log in HA users.