Don’t give up that fast.
There are a lot of reasons why the user experience might not be what you hoped for.
Could be a local LLM that’s too dumb, or you trying to do things that aren’t available in HA as tools for the LLM so far.
See the LLM simply as a helper to get away from fixed sentences.
It’s not smart enough to do everything you want on the smart home side, but it’s good at interpreting context, matching data, and calling tools (that you have to provide).
See my thread here, click through the posts linked in the start post, and look at the example sentences / conversations to see what an LLM can do if you provide it some tools to find the correct entities, history values, music control, …
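To make the “tools you have to provide” idea concrete, here’s a minimal Python sketch of how a tool call might resolve a fuzzy spoken name to an entity and read its state. All names here (the entities, `find_entity`, `get_state`, the hard-coded call) are invented for illustration; this is not HA’s actual LLM API, and in a real setup the LLM itself decides which tools to chain.

```python
# Pretend smart-home state, keyed by entity id (invented for this sketch).
ENTITIES = {
    "light.living_room": "on",
    "light.kitchen": "off",
}

def find_entity(query: str) -> str:
    """Tool the LLM can call: match a spoken name to an entity id."""
    words = query.lower().split()
    for entity_id in ENTITIES:
        # Accept the entity if every spoken word appears in its id.
        if all(w in entity_id for w in words):
            return entity_id
    return "unknown"

def get_state(entity_id: str) -> str:
    """Tool the LLM can call: read the current state of an entity."""
    return ENTITIES.get(entity_id, "unavailable")

TOOLS = {"find_entity": find_entity, "get_state": get_state}

def handle_tool_call(name: str, argument: str) -> str:
    """Dispatch a tool call the model asked for and return the result."""
    return TOOLS[name](argument)

# Instead of a fixed sentence like "turn on light.kitchen", the model
# interprets "is the kitchen light on?" and chains the tools itself:
entity = handle_tool_call("find_entity", "kitchen light")
state = handle_tool_call("get_state", entity)
print(f"{entity} is {state}")  # → light.kitchen is off
```

The point isn’t the matching logic (which is deliberately naive here), but that the LLM only needs to know the tools exist; the exact entity ids and states stay on your side.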
While you can do the same with fixed phrases, you won’t be able to remember all the correct sentences once you’ve got that far.
Or get your family to remember all these commands.
On the LLM side, we’ve already replaced Alexa here, and everyone in the house loves the new possibilities it provides (after a lot of work from my side).
While I use a cloud-based LLM (GPT-4o mini or GPT-5 mini), my tests with the larger gpt-oss model, which you can host yourself at home, have also been very good (as many other models might be, but I’m not that into local models, as I haven’t invested in the needed hardware so far).
I guess both “problems” will fade away over time: more commodity hardware will be able to run such models as hardware gets better, and most of the needed tools will be implemented right into HA.
Also, LLMs will get better at understanding the needed context and might need fewer specialized tools.
But for the (non-techy) end user, the LLM is a much nicer experience.