2024.6: Dipping our toes in the world of AI using LLMs 🤖

Roborock Integration: The card/map/image is not available anymore

Is there a timestamp for the Nvidia discussion?

That being said, there was already a stream from some of the team members on how to set up a local LLM …

BUT: it requires hardware resources that a Pi doesn’t have, and the same goes if you run Home Assistant on your Synology NAS.

In my opinion, it was requested a lot that Voice Assist / Assist should be “Alexa-like”, so the immediate solution is: use what already exists, and then build on that …

Before Nabu Casa puts effort into such a technology and creates their own LLM or whatever, let’s see how this works with already available systems that have had a lot of effort put into them.

So I see this as a “proof of concept” for now.


Yes, it is … no problem on my side.

Thanks for confirming this.
I noticed this behavior during one of the last betas, when the visibility option was added to the UI …

At some point I think it was working with nested cards … but back then you could only configure this option in YAML.

So I wasn’t able to figure out whether I had made a mistake or it was a new issue.

Same thing happens to me. I am currently using Homebridge as a workaround, but it would be nice if the Aladdin integration actually worked again. Hopefully the next update in June will fix this?

And what happens if the water turns out to be too cold? :rofl:

When you use it in Homebridge, can you then pull it into Home Assistant? Or are you just using it in HomeKit?

Strange. It said that the integration doesn’t provide this anymore.

After reading your post, I restarted HA and it was there again. :thinking:

You don’t need a tremendous amount of power to run some quite good local models now. True, a standard Raspberry Pi won’t be enough, but the 3B and 7B parameter models run acceptably fast on Intel i5s with iGPUs, or even CPU-only. I’ve gotten decent performance out of a $50 Quadro P620 card in my $400 HPE server as well.

There is a lot of work being done to improve compute efficiency for small models: instead of just “what is the biggest model I can run”, the question is more “what is the smallest model I can fine-tune for this task?” We don’t need the local model to answer questions about outer space, roleplay a job interview, or write fiction, so the model can be a lot smaller. I suspect a well-tuned model for Home Assistant local control could be 1.5-4B parameters and do everything you need it to do.
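To put rough numbers on that: the weights dominate memory, so a quick back-of-envelope estimate (my own ballpark; it ignores KV cache and runtime overhead, which add more on top) looks like this:

```python
# Ballpark RAM needed for quantized model weights only.
# KV cache and runtime overhead add more on top of this.
def weights_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for size in (1.5, 4.0, 8.0):
    print(f"{size}B params @ 4-bit: ~{weights_gb(size):.1f} GB")
# 1.5B: ~0.8 GB, 4B: ~2.0 GB, 8B: ~4.0 GB; small enough for an iGPU or CPU
```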

You might try installing Ollama on your machine or laptop now, if it supports it, and try out a few small models like llama3, phi3, qwen, or gemma, all of which have variants in the 2B to 8B parameter range that take very little power to run and often get surprisingly good results.
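If you want to poke at one of those models from Python, here is a minimal sketch against Ollama’s local REST API (it assumes Ollama is running on its default port 11434 and that you have already pulled the model, e.g. `ollama pull llama3`):

```python
import requests

# Minimal sketch: ask a small local model a question via Ollama's REST API.
# Assumes Ollama is running locally on its default port (11434).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any pulled model works: phi3, qwen, gemma, ...
        "prompt": "Turn on the kitchen lights. Reply with a short confirmation.",
        "stream": False,    # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```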


Awesome release! I have been testing Assist with OpenAI and it works very well. Control seems to work only with intents; is there a plan to also include trigger sentences?

I am trying to play around a bit with the new LLM control stuff, but it seems I need to expose each entity to Assist one by one from a seemingly(?) unordered list.

For example, I wanted to expose all lights in the living room to start with, but that meant a lot of searching through entities, with another tab open on the entities view, grouped by area and filtered by domain, to figure out which entity ids were in the living room.

Is there an easier way to expose entities to Assist that I am missing?
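In the meantime, one thing I have been experimenting with: the exposure settings are driven over the websocket API, so they can be scripted. Below is a rough sketch assuming the `homeassistant/expose_entity` websocket command, which is my reading of what the exposure UI sends; verify the command name and fields against your HA version before relying on it.

```python
import asyncio
import json

import websockets  # pip install websockets

HA_WS = "ws://homeassistant.local:8123/api/websocket"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

# The entity ids you want to expose; gather them however you like,
# e.g. from Developer Tools > States filtered by area.
LIVING_ROOM_LIGHTS = [
    "light.living_room_ceiling",
    "light.living_room_lamp",
]

async def main() -> None:
    async with websockets.connect(HA_WS) as ws:
        await ws.recv()  # server sends auth_required first
        await ws.send(json.dumps({"type": "auth", "access_token": TOKEN}))
        await ws.recv()  # expect auth_ok

        # Bulk-expose to Assist ("conversation"). Command name and fields
        # are an assumption based on what the exposure UI sends.
        await ws.send(json.dumps({
            "id": 1,
            "type": "homeassistant/expose_entity",
            "assistants": ["conversation"],
            "entity_ids": LIVING_ROOM_LIGHTS,
            "should_expose": True,
        }))
        print(json.loads(await ws.recv()))  # result message

asyncio.run(main())
```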

This release crashes continuously on an RPi4 8 GB.
Is there no check on this RPi4 board before release?

Of course an LLM doesn’t run on a Pi, but it can be set up on a (powerful) local machine and used by HA. It would be nice if we had this option.

Please open a GitHub issue with the full log. Without being able to see the full log, it’s not possible to give you a good recommendation on how to proceed.


Please open a GitHub issue with debug logs for MQTT turned on at startup.

Is there a way to remove Assist?
I just use Google Assistant together with HA.
It’s enough for me; I don’t need all the bulky stuff, it just slows HA down. For me, HA is about automating, not talking to HA all the time.


If you are using container, supervisor, or HAOS, you don’t need to enable isal, as all of the Docker images are prebuilt with it installed. If it’s installed, it will automatically be used.

It’s only core/venv install types that would benefit from having it manually added to configuration.yaml.
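If you are on a core/venv install and want to check whether the package is present, a quick sketch:

```python
# Quick check: is the isal package installed in this environment?
# If it is, it will automatically be used (per the above).
from importlib import metadata

try:
    print("isal", metadata.version("isal"), "is installed")
except metadata.PackageNotFoundError:
    print("isal not installed; the stdlib zlib will be used instead")
```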


I don’t use Assist on many of my installs. If it’s not configured, it’s already well optimized to avoid the expensive dependencies.

It’s probably this issue: Current Docker RC tag won't boot on RaspberryPi 4 8Gb, Fatal Python error: Bus error · Issue #118507 · home-assistant/core · GitHub

Unfortunately, we didn’t get enough information until after the release to confirm the source of the problem, so it didn’t get fixed until 2024.6.1 (not yet released).