Roborock Integration: The card/map/image is not available anymore
Is there a time code/stamp for the Nvidia discussion?
that being said, there was already a stream from some of the team members on how to set up a local LLM …
BUT: it requires hardware resources that a Pi doesn't have, and that you also won't get if you run Home Assistant on your Synology NAS.
In my opinion - it was requested a lot that VoiceAssist / Assist should be "Alexa like" - so the immediate solution for that is: "use what's already existing" - and then build on that …
Before Nabu Casa puts effort into such a technology and creates their own LLM or whatever, let's see how this works with already available systems where a lot of effort has already been put in.
So - I see this as a "proof of concept" as of now
yes it is… no problem on my side
thanks for confirming this.
I've noticed this behavior during one of the last betas, when the visibility option was put into the UI…
At some point, I think it was working with nested cards… but at that point you could only configure this option in YAML.
So I was not able to figure out if I had made a mistake or if it was a new issue.
Same thing happens to me. I am currently using Homebridge as a workaround but it would be nice if the Aladdin integration actually worked again. Hopefully the next update in June will fix this???
And what happens if the water turns out too cold?
When you use it in Homebridge, can you then pull it into Home Assistant? Or are you just using it in HomeKit?
Strange. It said that the integration doesn't provide this anymore.
After reading your post, I restarted HA and it was there again.
You don't need a tremendous amount of power to run some quite good local models now. True, a standard rPi won't be enough, but the 3B and 7B parameter models run acceptably fast on Intel i5s with iGPUs, or even CPU only. I've gotten decent performance out of a $50 Quadro P620 card in my $400 HPE server as well.
There is a lot of work being done to improve the compute efficiency for small models - instead of just "what is the biggest model I can run", the question is more "what is the smallest model that I can fine-tune for this task"? We don't need the local model to answer questions about outer space, roleplay a job interview, or write fiction, so the model can be a lot smaller. I suspect a well-tuned model for Home Assistant local control could be 1.5-4B parameters and do everything you need it to do.
You might try installing Ollama on your machine or laptop now, if it supports it, and try out a few small models like llama3, phi3, qwen or gemma, all of which have variants in the 2B to 8B parameter ranges that take very little power to run, and often get surprisingly good results.
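If you want to poke at one of those models from code rather than the Ollama CLI, here is a minimal sketch against Ollama's local REST API - assuming Ollama is serving on its default port 11434 and you have already pulled a model (e.g. `ollama pull llama3`); the prompt and model tag below are just examples:

```python
# Minimal sketch: ask a locally running Ollama instance a question.
# Assumptions: Ollama is serving on its default port 11434 and the
# model has been pulled beforehand (`ollama pull llama3`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("In one sentence: why run a small local model for home automation?"))
```

Even CPU only, the small variants are usually responsive enough to get a feel for what they can and can't do.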
Awesome release! I have been testing Assist with OpenAI and it works very well. Control seems to work only with intents; is there a plan to also include trigger sentences?
I am trying to play around a bit with the new LLM control stuff, but it seems I need to expose each entity to Assist one by one from a seemingly(?) unordered list.
For example, I wanted to expose all lights in the living room to start with, but that meant a lot of searching for entities, with another tab open on the entities view, grouped by area and filtered by domain, to figure out which entity IDs were in the living room.
Is there an easier way to expose entities to Assist that I am missing?
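Not a built-in shortcut as far as I know, but one way to at least gather the entity IDs faster is to render a template against HA's REST API instead of clicking through the list. A minimal sketch, assuming a long-lived access token and the default local URL - `area_entities()` and `to_json` are built-in template helpers, while the URL, token, and area name below are placeholders:

```python
# Minimal sketch: list the entity IDs assigned to one area via the
# Home Assistant REST API, to see at a glance what to expose to Assist.
# Assumptions: a long-lived access token (Profile > Security) and a
# locally reachable instance; URL, token and area name are placeholders.
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

# area_entities() is a built-in template function that returns the
# entity IDs assigned to an area (directly or via their device).
template = "{{ area_entities('living room') | to_json }}"

req = urllib.request.Request(
    f"{HA_URL}/api/template",
    data=json.dumps({"template": template}).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    entity_ids = json.loads(resp.read())

# Narrow to one domain, e.g. only the lights in that area.
lights = [e for e in entity_ids if e.startswith("light.")]
print("\n".join(lights))
```

This doesn't expose anything by itself - you still tick the entities in Settings > Voice assistants - but at least you get the living room list in one go.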
This release crashes continuously on an RPi4 8 GB.
Is there no check before release on this RPi4 board?
Of course an LLM doesn't run on a Pi, but it can be set up on a (powerful) local machine and be used by HA. It would be nice if we had this option.
Please open a GitHub issue with the full log. Without being able to see the full log, it's not possible to give you a good recommendation on how to proceed.
Please open a GitHub issue with debug logs for MQTT turned on at startup.
Is there a way to remove Assist?
I just use Google Assistant together with HA.
It's enough for me. I don't need all the bulky stuff. It just slows down HA. For me, HA is about automating, not talking to HA all the time.
If you are using Container, Supervised, or HAOS, you don't need to enable isal, as all of the Docker images are prebuilt with it installed. If it's installed, it will automatically be used.
It's only core/venv install types that would benefit from having it manually added to configuration.yaml - see the sketch below for a quick way to check that the package is actually present in your venv.
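For core/venv installs, a minimal sketch to verify that the isal package is importable in your environment - assuming you installed it yourself (e.g. `pip install isal` inside the venv); the round-trip test is just illustrative, not an official check:

```python
# Quick check that the isal package is available in this venv.
# Assumption: you installed it yourself, e.g. `pip install isal`.
try:
    from isal import isal_zlib  # zlib-compatible, C-accelerated module
except ImportError:
    print("isal is not installed in this environment")
else:
    # Round-trip a small payload through the accelerated implementation.
    data = b"home assistant " * 100
    assert isal_zlib.decompress(isal_zlib.compress(data)) == data
    print("isal is installed and working")
```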
I don't use Assist for many of my installs. If it's not configured, it's already well optimized to avoid the expensive dependencies.
It's probably this issue: Current Docker RC tag won't boot on RaspberryPi 4 8Gb, Fatal Python error: Bus error · Issue #118507 · home-assistant/core · GitHub
Unfortunately we didn't get enough information until after the release to confirm the source of the problem, so it didn't get fixed until 2024.6.1 (not yet released).