Perhaps look at some of the entries from the recent Voice competition… there ARE nice examples available to print.
The way I see it, the true use cases of LLMs here are:
- To make Assist more lenient to minor mistakes: if one says a voice command slightly wrong, the command will still get executed.
- To reduce the mental load of the user: the worst part of voice assistants is the discoverability of voice commands, that is, "what can I say?" LLMs can make Assist more friendly to those who can't (or don't bother to) remember all the possible intents as well as the names of the hundreds of devices one might have in their house.
- To reduce the need to make frivolous decisions: you know how easy it is to get into choice paralysis when you're browsing Netflix? Or Spotify? Sometimes a user just wants to play something relaxing and not spend energy making yet another choice.
- To chain multiple commands: it is now possible to ask multiple questions and make multiple requests in one sentence, instead of having to wake Assist multiple times.
Only after those comes the last use case: to show off how cool HA is.
- "A new Home Assistant voice control hardware device running Home Assistant's local smart home voice assistant is planned for release at the end of the year."
Home Assistant's next era begins now - The Verge
Controlling devices now with ChatGPT is great and works wonderfully. However, I noticed I'm unable to receive sensor info from it. Is there some setting that needs to be enabled, or has this not been implemented yet? For clarity, what I'm looking to do is ask "How much time is left for the washing machine?" and it should look at the Washing Machine Remaining Time sensor and tell me what the number is.
Update: I just realized that some sensors are exposed and some are not. How do I force-expose the sensors that aren't available?
Update: Solved it, found the "expose" option in the entity settings. I just never looked at this option before.
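If you ever want a fixed answer to that exact question regardless of which conversation agent is active, a custom sentence plus an intent script is another way to wire it up. A minimal sketch, assuming a sensor named sensor.washing_machine_remaining_time (a placeholder for whatever your integration actually exposes) that reports minutes:

```yaml
# config/custom_sentences/en/washing_machine.yaml
# Maps the spoken question to a custom intent.
language: "en"
intents:
  WashingMachineTimeLeft:
    data:
      - sentences:
          - "how much time is left for the washing machine"
```

```yaml
# configuration.yaml
# Answers the intent above by reading the (placeholder) remaining-time sensor.
intent_script:
  WashingMachineTimeLeft:
    speech:
      text: >
        The washing machine has
        {{ states('sensor.washing_machine_remaining_time') }} minutes left.
```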
You can already add a picture to a room, and it's then automatically used in a room card.
After installing the 2024.6 update, I received a notification saying this:
"Add-on ADB - Android Debug Bridge has been removed from the repository it was installed from. This means it will not get updates, and backups may not be restored correctly as the supervisor may not be able to build/download the resources required.
Clicking submit will uninstall this deprecated add-on."
I don't want to click submit since this add-on seems to be still working according to the logs. I have to admit I am not sure if there's something better now, but I remember this being a requirement so I can control my Nvidia Shield media player device with Home Assistant. I don't want to lose that option…
Does having the ability to control dashboard visibility finally enable me to permanently hide the Overview dashboard? That is the most annoying thing ever in HA. Especially because even if I hide it in the mobile apps, it just keeps coming back. Especially annoying for my parents, who are a bit less technically savvy.
What's the overview dashboard? Are you talking about the autogenerated page?
Since we now have two ways to conditionally show cards: is there any difference between using a type: conditional card and the new visibility option?
- type: entities
  entities:
    - input_boolean.test
- type: conditional
  conditions:
    - condition: state
      entity: input_boolean.test
      state: 'on'
  card:
    type: entities
    title: Type Conditional
    entities:
      - light.zitkamer
- type: entities
  title: Visibility
  visibility:
    - condition: state
      entity: input_boolean.test
      state: 'off'
  entities:
    - light.zitkamer
They seem to be identical, and I cannot spot any margin/padding/gap differences. (There always was a small gap when using the conditional card and the condition was false, but that minor nuisance has been fixed.)
Yaaay! Great work everyone involved in the various updates and, as always, well written by the author! Since I found this blog I'm addicted to these posts. It's like Christmas several times a year. Absolutely love it!
IMO this is the case.
I use this page as an admin, but it's a bad idea to show it to all the members.
Create a new dashboard, make the new dashboard the default, move the premade one to the bottom, and normal users will just see it in the side menu. I delete it though; I don't see the point of using it at all.
Do you have any local LLM projects in mind?
I am in the same place as you regarding LLMs and privacy, but I also ask myself if a local LLM is even feasible given the computational power (or lack thereof) of most small server setups.
An RPi even struggles with timely responses to the normal Assist, let alone an LLM.
The new visibility option is great, but when casting the dashboard with Home Assistant, the space occupied by the card is still there (on the phone or computer it works as intended). By the way, I had to switch the language from French to English for the option to show in the card editor.
Is the new visibility option suitable only for top-level cards? I can't get it to work for an entity card inside a grid, which works perfectly with a conditional card in that place.
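Roughly what I'm trying, as a sketch (the entity names are just placeholders):

```yaml
# Sketch only: a nested entities card with visibility inside a grid card.
- type: grid
  columns: 2
  cards:
    - type: entities
      title: Nested with visibility
      visibility:
        - condition: state
          entity: input_boolean.test
          state: 'on'
      entities:
        - light.zitkamer
    - type: entity
      entity: light.zitkamer
```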
Hey, very excited to see where this is going, will do some upgrading and looking around this weekend!
I tried to research this, but I wanted to see if the community has any examples or best practices: is there a project or any guidance for using Amazon Echos to hook into Home Assistant's voice intents? We all know Alexa has been struggling recently, and I have 3 at home (connected with the HA integration), but I'm looking for some way to change the interaction to talk to HA directly… I am envisioning a path where you
- trigger Alexa (or an integration) to HA directly
- issue a voice command
- send that voice command to HA
- have the new LLM / Voice Assistant etc. module make sense of my request
- act on the request (e.g. lights off in rooms with no occupancy)
- and respond through the Echo device.
Is this something anyone has managed to wire together?
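On the HA side, I imagine the "send the command to HA" and "make sense of my request" steps could boil down to a script around the conversation.process service, something like the sketch below (the script name and field are placeholders I made up); the part I'm missing is how to get the Echo to call it and speak the response back:

```yaml
# Sketch only: forward raw text to the Assist/LLM pipeline and capture the reply.
script:
  handle_echo_command:
    fields:
      command:
        description: "Raw text captured from the Echo"
        example: "turn off the lights in rooms with no occupancy"
    sequence:
      - service: conversation.process
        data:
          text: "{{ command }}"
          # agent_id is optional; omit it to use the default conversation agent
        response_variable: assist_reply
      # Return the full conversation response so the caller can extract
      # assist_reply.response.speech.plain.speech for the Echo to say.
      - stop: "Done"
        response_variable: assist_reply
```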
Thanks,
Gyula
Yes, but the problem is that even if I set something else as the default dashboard, the Overview will after a while come back as the default option. Not to mention that this needs to be done for each user separately.
I have never in my 8+ years with HA had a setting change by itself. I have had up to 8 different dashboards and switched between them as the default during reorganizations, and that has never changed once from what I set.
There is a custom integration called Fallback Conversation. With it, voice commands are tried locally first. If they can't be handled locally, they are passed on to the service of your choice: local AI, Google, etc.
m50/ha-fallback-conversation: HomeAssistant Assist Fallback Conversation Agent (github.com)
I have to admit I'm struggling to find a use case for LLMs in HA, too. HA will never be our primary computer platform. We already have cell phones, tablets and laptops. HA should be focused on, well, home monitoring and automation.
If/when the whole process goes local, I can see HA replacing cloud-based voice assistants. Personally, I don't use one now, but if I did I'd want a totally local solution. But of course this isn't really what HA was built for. Adding things just to show off how cool HA is isn't really a good long-term strategy.
Finally, the problem with demanding more robust hardware is that it detracts from what drew many of us to HA in the first place: the ability to run on small, inexpensive and easy-to-maintain devices like the RPi. If HA paints itself into a corner where only geeks running VMs on beefy servers can use it, I think it will die off pretty quickly.
HA is best when it's an appliance with a focus on home automation, not a do-everything portal to the world.