2024.6: Dipping our toes in the world of AI using LLMs 🤖

The most useful application of LLMs in HA would be building automations and scripts from prompts.

"Create an automation that turns on the living room light when I'm in the room and turns it off automatically when the upstairs is vacant for at least 30 minutes. If it's after 8pm, turn on the lamp instead"

It's a relatively fixed problem space in terms of variables. It would be a lot easier than using Copilot's hacked-together YAML, since you could have deep knowledge of the triggers, conditions, and actions available.
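To make the idea concrete, here is a hedged sketch of the YAML that the quoted prompt might compile to. All entity IDs (`binary_sensor.living_room_occupancy`, `binary_sensor.upstairs_occupancy`, `light.living_room`, `light.lamp`) are placeholders I made up, not real configuration, and the structure follows the standard automation schema current as of 2024.6:

```yaml
# Sketch of two automations an LLM might generate for the quoted prompt.
# Entity IDs below are placeholders — substitute your own.
- alias: Living room presence lighting
  trigger:
    - platform: state
      entity_id: binary_sensor.living_room_occupancy
      to: "on"
  action:
    - choose:
        # After 8pm, use the lamp instead of the main light
        - conditions:
            - condition: time
              after: "20:00:00"
          sequence:
            - service: light.turn_on
              target:
                entity_id: light.lamp
      default:
        - service: light.turn_on
          target:
            entity_id: light.living_room
  mode: single

- alias: Lights off when upstairs vacant 30 minutes
  trigger:
    - platform: state
      entity_id: binary_sensor.upstairs_occupancy
      to: "off"
      for: "00:30:00"
  action:
    - service: light.turn_off
      target:
        entity_id:
          - light.living_room
          - light.lamp
```

Because the trigger/condition/action vocabulary is this constrained, validating an LLM's output against the schema is far more tractable than free-form code generation.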

Iā€™d prioritize that higher than Assist integration IMO.

7 Likes

Edit dashboard > edit specific view > background tab.

Hmmmmm. So which add-on…? The URL does nothing…

Looks like a minor bug caused by the TasmoBackup add-on will come up in this one.

Ticket (kinda) open… you'll get a warning that an add-on (not identified correctly on-screen) has a "missing repository".

1 Like

Great release again!
For the sections dashboard view: I would really like to make a section two columns wide. Should I submit a feature request on GitHub?

Fairly uneventful update. :tada:

Schedule helper UI is broken in 2024.6.0 though.

2 Likes

Itā€™s coming. :smiley_cat:

1 Like

Yeah, that's totally fair. I'm not used to HA development prioritizing an external service first, but I can understand that it might be easier to start with that for this particular functionality.

Mostly, I just wish this had been addressed a little more explicitly. I don't disagree that "dipping our toes" implies a beginning, but I would have loved an acknowledgment in the blog post, like: "We're launching this feature with support for external LLM services to begin with. In the future, Home Assistant will also support local-only LLMs like Ollama."

Ollama is not an LLM but a service that can run most local LLMs. It was added to Home Assistant in 2024.4: Ollama - Home Assistant

As we mentioned in the live stream, and will expand on further in our deep-dive blog post tomorrow, we're collaborating with NVIDIA to bring control of Home Assistant to Ollama. Ollama doesn't support this out of the box, nor are any of the main open-source models tuned for it. It's something that needs to be built from scratch, and something NVIDIA showed a prototype of last week: https://youtu.be/aq7QS9AtwE8?si=yZilHo4uDUCAQiqN

9 Likes

Fair! I'm excited to learn more. As for the Ollama bit, you got me; I'm not entirely sure how all the pieces intersect, and mostly I just meant "some sort of local LLM".

That is amazing… congrats! I am watching key NVIDIA developers doing stuff with Home Assistant, the same folks at the company that just passed Apple's valuation at over USD 3 trillion and that works with Musk's companies Tesla and X… keep it up and feel good that you are doing some important and fun work!

1 Like

How about adding Azure OpenAI Conversation to Home Assistant Cloud? That way I could afford a bigger bill than $65, since I could discontinue my OpenAI subscription. Of course, it should be an option for those who want to use it.
We already have Google Assistant and Alexa on Home Assistant Cloud. Why not OpenAI?

1 Like

Big thank you to all of the developers for another great release. Excited to install it tonight after the toddler falls asleep.

Matt

While I like some of the intention behind integrating AI into Home Assistant, I think it's kind of misleading to say that the LLM will be "understanding the intention behind the spoken command". LLMs run statistical calculations to determine the most likely response to a query. They have no intelligence as we know it. No true knowledge. They just predict the most likely sequence of tokens.
The demonstration of the AI voice assistant, while interesting, left me wondering. Is it better to say "I'm doing a meeting. Make sure people see my face", or to simply say "Turn on my webcam light"? How subtle do you expect home commands to be that they require the energy and processing overhead of an LLM? I fail to see the use cases. I hope that the Home Assistant team concentrates on continuing to make home automation that fits the habits of regular people.

16 Likes

Strongly seconded.

3 Likes

Yep, I would much rather HA Voice be able to handle minor mistakes in my speech locally and turn things on/off etc, than rely on an LLM to figure out what I want from some obscure request.

10 Likes

One idea for background images: allow a picture of the room to be set on a room card or card section. This would give a visual cue for room controls, rather than just text or an icon.

One useful control on images would be a transparency level, so the background images aren't too distracting.

1 Like

The new 2024.6 release is fantastic and makes me really eager to get my hands dirty and replace all those Amazon Echos with something that can actually understand what I mean to say.

And here comes the problem: replace them with what? Are there still no aesthetically pleasing devices out there? Or at least some nice 3D-printed cases to put the ESP32 wire hell in? Any ideas are welcome, thank you.

Perhaps look at some of the entries from the recent Voice competition… there ARE nice examples available to print.

The way I see it, the true use cases of LLM here are:

  1. To make Assist more lenient toward minor mistakes: If one says the voice command slightly wrong, the command will still be executed.

  2. To reduce the mental load of the user: The worst part of voice assistants is the discoverability of voice commands, that is, "what can I say?" LLMs can make Assist more friendly to those who can't (or don't bother to) remember all the possible intents, as well as the names of the hundreds of devices one might have in their house.

  3. To reduce the need to make frivolous decisions: You know how easy it is to get into choice paralysis when you're browsing Netflix? Or Spotify? Sometimes a user may just want to play "something relaxing", without spending energy on yet another choice.

  4. To chain multiple commands: It is now possible to ask multiple questions and requests in one sentence, instead of having to wait to wake Assist multiple times.

Only then comes the last use case: showing off how cool HA is.
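For anyone curious what "chaining" looks like below the voice layer: Home Assistant exposes Assist through the REST endpoint `POST /api/conversation/process`, which accepts free-form text, including compound sentences. Below is a minimal Python sketch that only builds such a request; the URL, token, and helper function name are placeholders of mine, not part of any HA library:

```python
import json

# Placeholders — point these at your own instance and long-lived access token.
HA_URL = "http://homeassistant.local:8123"
TOKEN = "LONG_LIVED_ACCESS_TOKEN"


def build_conversation_request(text, language="en"):
    """Build (url, headers, body) for HA's POST /api/conversation/process.

    Hypothetical helper for illustration: the returned pieces could be
    passed to any HTTP client (requests, aiohttp, ...).
    """
    url = f"{HA_URL}/api/conversation/process"
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text, "language": language})
    return url, headers, body


# One chained request instead of two separate wake-word interactions:
url, headers, body = build_conversation_request(
    "Turn off the hallway light and tell me if any windows are open"
)
```

The point is that the endpoint takes a single text field, so whether one command or several arrive in a sentence is entirely up to the agent interpreting it.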

9 Likes