Thanks for mentioning it! I'll wait to install the new release. I'm using an RPi 8 GB…
Looks to be fixed and working normally again!
I agree, I think the technology needs to mature a little longer before I implement it in my production HA setup. But I do have a "Test" HA setup that I will use to play with this.
In the meantime, I have been using Willow as the voice assistant front end for about a year, and it works great with HA. They recently released Willow-Autocorrect, which solves the issue of natural variance in speech. For example, I will usually say "Turn on the living room light", but my wife will say "Turn on the light in the living room"; Willow-Autocorrect interprets both and sends the correct sentence to HA.
So far it has worked flawlessly.
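For anyone curious about the general idea behind that kind of autocorrection (this is only a rough sketch of the concept, not Willow-Autocorrect's actual code, and the sentence list is made up), the front end can fuzzy-match whatever was transcribed against sentences HA already understands and forward the closest one:

```python
# Illustrative sketch only - not Willow-Autocorrect's real implementation.
# Fuzzy-match a transcribed utterance against sentences HA already understands
# and forward the best match instead of the raw transcription.
from difflib import get_close_matches

KNOWN_SENTENCES = [
    "turn on the living room light",
    "turn off the living room light",
    "turn on the kitchen light",
]

def autocorrect(transcription: str) -> str:
    """Return the closest known sentence, or the original text if nothing is close."""
    matches = get_close_matches(transcription.lower(), KNOWN_SENTENCES, n=1, cutoff=0.6)
    return matches[0] if matches else transcription

# "Turn on the light in the living room" -> "turn on the living room light"
print(autocorrect("Turn on the light in the living room"))
```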
Is anyone else facing problems with the OpenAI conversation integration? I have a paid OpenAI subscription and I always get this error:
Sorry, I had a problem talking to OpenAI: Error code: 404 - {'error': {'message': 'The model gpt-4o does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
No matter what model I use (I tried everything from gpt-3 to gpt-4o).
Done:
Will the AI API for Assist contain some kind of built-in "AI prompt generator" or filter that converts the intent into prompts that are both effective and short, not only to give a good experience but also to help keep costs down when using cloud chatbots that are paid for per token?
That is, try to make the prompts sent to online AIs as short as possible to save users some money.
I know it will be a balancing act: longer prompts provide more context for the AI, but they also cost more in token fees.
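Nothing official exists as far as I know, but as a rough sketch of what such a filter could do (all entity names here are made up for illustration), you could include only the entities relevant to the request instead of every exposed device, which keeps the prompt, and the token bill, small:

```python
# Rough illustration only - not part of Home Assistant's actual Assist pipeline.
# Keep the instructional prompt short by only including entities whose area
# (or name) is mentioned in the user's request, instead of every exposed device.

# Hypothetical exposed-entity snapshot: (entity_id, area, state)
EXPOSED = [
    ("light.living_room_lamp", "living room", "off"),
    ("light.kitchen_ceiling", "kitchen", "on"),
    ("cover.kitchen_blind", "kitchen", "closed"),
    ("climate.bedroom", "bedroom", "heat"),
]

def compact_context(user_request: str) -> str:
    """Build a short device-context block: only areas/names the request mentions."""
    text = user_request.lower()
    relevant = [
        e for e in EXPOSED
        if e[1] in text or e[0].split(".", 1)[1].replace("_", " ") in text
    ]
    lines = [f"{entity_id} ({area}): {state}" for entity_id, area, state in (relevant or EXPOSED)]
    return "\n".join(lines)

prompt = "You control a smart home. Devices:\n" + compact_context("Open the kitchen blind halfway")
print(prompt)  # only the two kitchen entities are included, keeping the token count down
```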
While you are mostly correct, you are disregarding a couple of things. LLMs are language models; the knowledge they possess IS language.
They cannot make decisions on their own, but they can understand what you're saying, how you're saying it, and what you most likely mean based on the patterns used. Saying an LLM has no knowledge is a false premise; that's like saying the neural networks that do facial recognition don't work because they don't have any knowledge.
HA can feed an LLM the context of every device in your home in the instructional prompt that is sent before the user prompt, which gives it the knowledge to act upon those devices, and even make a guess at which device you were talking about if what you were asking isn't clear.
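As a minimal sketch of that structure (purely illustrative, not the actual integration code, with made-up entity names), the device context goes into the instructional/system message and the user's words go into a separate user message, which is what lets the model work out which device is meant:

```python
# Illustrative sketch, not the actual OpenAI/Ollama integration code.
# The instructional (system) message carries the device context; the user's
# utterance is sent as a separate message, so the model can ground its answer
# in real entities instead of guessing blindly.

device_context = """You control this home. Devices you may act on:
- light.living_room_lamp (living room): off
- light.hallway (hallway): on
- cover.kitchen_blind (kitchen): closed
Answer with the entity you would act on."""

messages = [
    {"role": "system", "content": device_context},
    {"role": "user", "content": "it's a bit dark in here, I'm on the couch"},
]

# `messages` would then be posted to whichever chat-completion endpoint is
# configured (cloud or local); with the context above the model can infer that
# "on the couch" most likely refers to light.living_room_lamp.
for m in messages:
    print(m["role"], ":", m["content"][:60])
```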
I understand a lot of people think these things are gods for some reason, and they are wrong. But a lot of people also think they are just dumb angry pixies, and those people are just as wrong.
Every single thing you have interacted with in a meaningful way for the past 15 years, and probably longer, has been driven by neural networks and machine learning (what AI was called before the normies heard about it). Every algorithm, whether it's Google search, all facial recognition, licence plate readers, behavioral analysis, pick and place machines, etc., has been functioning via machine learning. They are all weighted toward what they are tasked with. It's getting hot now because the industry's secret cat is out of the bag, which is why Google themselves said they have "no moat".
They are gaining traction now because you can run a small, pointed neural network on a potato, thanks to the exponential growth in computational power. Look at the person, pet, package, and car recognition on the UniFi Cloud Key, of all things, which is all done on device, on an appliance, actually.
Hello @mib1185,
I have an Aqara alarm integrated via HomeKit, and I do not need to specify any settings in my "configuration.yaml" file to use it. With the new HA version, I am unsure how to disable the required arm code.
How can I change this default value to false to disable the code requirement from the UI or in my automations?
Additionally, for security reasons, I find it odd that the code is required for arming but not for disarming.
Thank you for your help.
Again, another amazing release from the HA team.
I have to say A BIG THANK YOU!! to @karwosts for developing collapsible sections in blueprints. I get asked about this all the time, and seeing it go from a feature request to being developed and now implemented… nice. As always, I have tested it, and it works flawlessly. I will be rolling this out in many of my blueprints. It makes them so clean and very easy to use now. On behalf of everyone using blueprints and all the blueprint developers, THANK YOU!!
Watching the YouTube release… JLo, you are a weapon.
Blacky
Wow, what an amazing release! Thanks so much to everyone who's been working on this - you are all epic.
How complicated is it to add the autocorrect to Home Assistant?
Could something like a Coral TPU enable an RPi to run a local model?
Since upgrading to 2024.6 I am experiencing many issues/outages.
I am new to Home Assistant but not networking or home automation.
So far, Govee, Shelly, Sonos, and Nanoleaf are all failing. Also, not sure if it is related, but the Symfonisk Remote blueprint configures, yet none of the button presses are visible on the Home Assistant device page, and the automation is not showing either.
Not sure if these are related yet.
Need to figure out how to roll back to 2024.5.
Looks like I just run `ha core update --version 2024.5.6`?
The `ha core update --version 2024.5.6` command will revert you, but it will not solve the issues…
Have a look here: How to help us help you - or How to ask a good question. Post your logs, share what config and setup you have, and who knows, maybe the issue(s) you have can be solved… you'll never know if you don't ask.
Did you miss this bit?
Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We're collaborating with NVIDIA to get this working; [they showed a prototype last week](https://youtu.be/aq7QS9AtwE8?si=yZilHo4uDUCAQiqN).
That appears to have been added after publication and after my comment, as this snapshot from the Wayback Machine shows. You can compare the timestamp of my comment with that of the Wayback snapshot: the passage wasn't there when I commented.
Well, I guess they sort-of responded to your comment, then.
If you want to use Gemini in Europe, there is no free plan available. But if you first activate the Gemini free credit in a project in the Google Cloud console, you will get three months of free Gemini in the Gemini studio. Then activate the Gemini API through the studio console, and there you are: three months of Gemini credit.
First tries are quite awesome, despite some issues with special characters in room names, like Cheminée in French.
But it is so cool to ask Assist: "What is the warmest room?", "Open the covers in the kitchen halfway"…
So, so cool. Sure, I would prefer to have it running a local LLM, but it is a first approach to replacing Siri in day-to-day use.
Good morning, all. It's a Saturday here and my weekday routines have all fired! Seems something is up with workday_sensor.