2024.6: Dipping our toes in the world of AI using LLMs 🤖

Thanks for mentioning it! I'll wait to install the new release. I'm using an RPi 8 GB…

Looks to be fixed and working normally again!

I agree. I think the technology needs to mature a little longer before I implement it in my production HA setup. But I do have a "test" HA setup that I will use to play with this.
In the meantime, I have been using Willow as the voice assistant front end for about a year, and it works great with HA. They recently released Willow-Autocorrect, which solves the issue of natural variance in speech. For example, I will usually say "Turn on the living room light", but my wife will say "Turn on the light in the living room"; Willow-Autocorrect will interpret both and send the correct sentence to HA.
So far it has worked flawlessly. :slightly_smiling_face:
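
For anyone curious how that kind of correction can work in principle, here is a toy sketch (not Willow-Autocorrect's actual implementation, which I believe does smarter fuzzy matching): varying utterances get snapped to the closest sentence the intent matcher already understands.

```python
# Toy illustration only -- NOT Willow-Autocorrect's actual implementation.
# Idea: map naturally varying utterances onto the canonical sentences the
# HA intent matcher expects, using simple fuzzy matching.
from difflib import get_close_matches

# Canonical sentences the intent matcher handles (example data).
KNOWN_SENTENCES = [
    "turn on the living room light",
    "turn off the living room light",
    "turn on the kitchen light",
]

def autocorrect(utterance: str) -> str:
    """Return the closest known sentence, or the utterance unchanged."""
    matches = get_close_matches(utterance.lower(), KNOWN_SENTENCES, n=1, cutoff=0.6)
    return matches[0] if matches else utterance

print(autocorrect("Turn on the light in the living room"))
# -> "turn on the living room light"
```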

1 Like

Is anyone facing problems with the OpenAI conversation integration? I have a paid OpenAI subscription and I always get this error:

Sorry, I had a problem talking to OpenAI: Error code: 404 - {'error': {'message': 'The model gpt-4o does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

No matter what model I use (I tried everything from gpt-3 to gpt-4o).
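
That 404 usually means the API key's project simply has no access to that model. One way to check which models a key can actually see: a minimal sketch using the official `openai` Python client (v1.x assumed), independent of HA.

```python
# Minimal sketch: list the models this API key can access (openai>=1.0).
# If "gpt-4o" is missing from the output, the key/project has no access
# to it, which matches the 404 "model_not_found" error above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for model in client.models.list():
    print(model.id)
```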

Done:

Will the AI API for Assist contain some kind of built-in "AI prompt generator" or filter to convert the intent and generate text prompts that are both effective and short, in order to not only give a good experience but also help keep costs down when using cloud chatbots that are paid for per token?

That is, try to make the prompts that are sent to online AIs as short as possible to save users some money.

I know it will be a balancing act: longer prompts help provide more context for the AI, but they also cost more in token fees.
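
To give a rough feel for that trade-off, here is a hedged sketch that counts prompt tokens with the tiktoken package and estimates a per-request cost; the price constant is purely illustrative, so check the provider's current pricing for real numbers.

```python
# Rough sketch of the cost trade-off: count prompt tokens with tiktoken
# and estimate the per-request price. The $ figure is ILLUSTRATIVE ONLY.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models

short_prompt = "Turn on the living room light."
long_prompt = short_prompt + " Devices: " + ", ".join(f"light.room_{i}" for i in range(50))

PRICE_PER_1K_INPUT_TOKENS = 0.005  # hypothetical example rate, USD

for name, prompt in [("short", short_prompt), ("long", long_prompt)]:
    n = len(enc.encode(prompt))
    print(f"{name}: {n} tokens, ~${n / 1000 * PRICE_PER_1K_INPUT_TOKENS:.5f} per request")
```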

While you are mostly correct, you are disregarding a couple of things. LLMs are language models; the knowledge they possess IS language.

They cannot make decisions on their own, but they can understand what you're saying, how you're saying it, and what you most likely mean based on the patterns used. Saying an LLM has no knowledge is a false premise; that's like saying the neural networks that do facial recognition don't work because they don't have any knowledge.

HA can feed an LLM the context of every device in your home in the instructional prompt that is sent before the user prompt, which gives it the knowledge to act upon those devices, and even make a guess at which device you were talking about if what you were asking isn't clear.
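
As a rough illustration of that pattern (not HA's actual prompt format; the entity names and model choice below are made up for the example):

```python
# Sketch of the pattern described above: device states are serialized into
# the system ("instructional") prompt, which is sent before the user prompt.
from openai import OpenAI

devices = {  # example state snapshot, as HA might expose it
    "light.living_room": "off",
    "light.kitchen": "on",
    "cover.bedroom": "open",
}

system_prompt = (
    "You control a smart home. Devices and current states:\n"
    + "\n".join(f"- {entity}: {state}" for entity, state in devices.items())
    + "\nIf a request is ambiguous, pick the most likely device."
)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "turn the light on in the lounge"},
    ],
)
print(reply.choices[0].message.content)
```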

I understand a lot of people think these things are gods for some reason, and they are wrong. But a lot of people also think they are just dumb angry pixies, and those people are just as wrong.

Every single thing you have interacted with in a meaningful way for the past 15 years, and probably longer, has been done with neural networks and machine learning (what AI was called before the normies heard about it). Every algorithm, whether it's Google search, all facial recognition, licence plate readers, behavioral analysis, pick-and-place machines, etc., has been functioning via machine learning. They are all weighted toward what they are tasked with. It's getting hot now because the industry's secret cat is out of the bag, which is why Google themselves said they have "no moat".

They are gaining traction now because, thanks to the exponential growth in computational power, you can run a small, focused neural network on a potato. See the people, pet, package, and car recognition on the UniFi Cloud Key, of all things, all done on-device, on an appliance no less.

1 Like

Hello @mib1185 ,

I have an Aqara alarm integrated via HomeKit, and I do not need to specify any settings in my `configuration.yaml` file to use it. With the new HA version, I am unsure how to disable the required arm code.

How can I change this default value to false to disable the code requirement from the UI or in my automations?

Additionally, I find it odd from a security standpoint that the code is required for arming but not for disarming.

Thank you for your help.

Created Hunter Hydrawise issue with 2024.6 · Issue #119051 · home-assistant/core · GitHub

Again, another amazing release from the HA team.

I have to say A BIG THANK YOU!! to @karwosts for developing collapsible sections in blueprints. I get asked about this all the time, and seeing it go from a feature request to being developed and now implemented… nice. As always, I have tested it, and it works flawlessly. I will be rolling this out in many of my blueprints. It makes them so clean and very easy to use now. On behalf of everyone using blueprints and all the blueprint developers, THANK YOU!!

Watching the YouTube release… JLo, you are a weapon.

Blacky :smiley:

2 Likes

Wow, what an amazing release! Thanks so much to everyone who's been working on this - you are all epic. :smiley:

How complicated is it to add the autocorrect to Home Assistant?

Could something like a Coral TPU enable an RPi to run a local model?

Since upgrading to 2024.6 I am experiencing many issues/outages.
I am new to Home Assistant but not networking or home automation.

So far, Govee, Shelly, Sonos, and Nanoleaf are all failing. Also, not sure if it's related, but the Symfonisk Remote blueprint configures, yet none of the button presses are visible on the Home Assistant device page, and the automation is not showing either.

Not sure if these are related yet.

Need to figure out how to roll back to 2024.5.

Looks like I just run `ha core update --version 2024.5.6`?

The `ha core update --version 2024.5.6` command will revert you, but it won't solve the issues…

Have a look here: How to help us help you - or How to ask a good question. Post your logs, share what config and setup you have, and who knows, maybe the issue(s) you have can be solved… you'll never know if you don't ask :grin:

1 Like

Did you miss this bit?

Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We're collaborating with NVIDIA to get this working – [they showed a prototype last week](https://youtu.be/aq7QS9AtwE8?si=yZilHo4uDUCAQiqN).
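
For readers wondering what "tool calling" means in that passage: instead of replying in free text, the model emits a structured call to a function the integration defines. A minimal sketch in the OpenAI-style schema, purely to illustrate the concept (not HA's implementation; as the quote says, open models served via Ollama typically need extra tuning to produce this reliably):

```python
# Illustration of tool calling: the model is given a function schema and
# asked to emit a structured call instead of free text.
from openai import OpenAI

tools = [{
    "type": "function",
    "function": {
        "name": "turn_on",
        "description": "Turn on a device in the home",
        "parameters": {
            "type": "object",
            "properties": {"entity_id": {"type": "string"}},
            "required": ["entity_id"],
        },
    },
}]

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # example model; the point is the tools parameter
    messages=[{"role": "user", "content": "turn on the kitchen light"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # structured call, e.g. turn_on(entity_id=...)
```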

2 Likes

That appears to have been added after publication and after my comment, as this snapshot from the Wayback Machine shows. You can compare the timestamp of my comment with that of the Wayback snapshot: the passage wasn't there when I commented.

3 Likes

Well, I guess they sort-of responded to your comment, then.

2 Likes

If you want to use Gemini in Europe, there is no free plan available. But if you first activate the Gemini free credit in a project in the Google Cloud console, you will get three months of free Gemini in the Gemini studio. Then activate the Gemini API through the studio console, and there you are: three months of Gemini credit.
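
Once the key is active, you can sanity-check it outside HA. A minimal sketch using the google-generativeai Python package; the model name is just an example, and the HA integration handles all of this for you.

```python
# Minimal sketch: verify a Gemini API key works, using google-generativeai.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
print(model.generate_content("Say hello in French.").text)
```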

My first tries were quite awesome, despite some issues with special characters in room names, like "Cheminée" in French.

But it's so cool to ask Assist: "What is the warmest room?" or "Open the covers in the kitchen halfway"…

So, so cool. Sure, I would prefer to have it running on a local LLM, but it's a first approach to replacing Siri in day-to-day use.

1 Like

Good morning all. It's a Saturday here and my weekday routines have all fired! Seems something is up with workday_sensor.
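
For context, the sensor's expected behavior boils down to "workday = configured weekday and not a public holiday". A rough sketch of that logic using the holidays package (which the Workday integration builds on); the country code is just an example.

```python
# Rough sketch of what the workday sensor should report: a day counts as a
# workday only if it's a configured weekday and not a public holiday.
from datetime import date
import holidays

WORKDAYS = {0, 1, 2, 3, 4}  # Mon-Fri
country_holidays = holidays.country_holidays("US")  # example country

def is_workday(day: date) -> bool:
    return day.weekday() in WORKDAYS and day not in country_holidays

print(is_workday(date(2024, 6, 8)))  # a Saturday -> False, so routines should not fire
```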

1 Like