This is not a post about integrating LLMs into Home Assistant. It’s about using AI to design and build a HA instance.
Before everyone jumps on me, let me say that this is NOT A GOOD IDEA. If you’re not an AI professional with years of experience with HA, you’ll end up with something that doesn’t work and that you don’t understand.
Nevertheless, it seems clear from posts in the Forum and from the direction the world is drifting in that people are going to use AI regardless. It may be time to stop sniping and start encouraging good practice and greater understanding. I’m making this post a wiki so that it can be edited, and I invite anyone with an interest in the subject to contribute (constructively?) to the thread.
Do not use AI to create or respond to posts. This is a breach of forum rules and your post will be deleted.
Just thought I’d mention that.
Anyway. To start things off.
TL;DR If you’re looking for an AI to help with building HA, what “style” should you be looking for? Does it make any difference?
I use an LLM to analyze and document my system, which after five or six years of tinkering is quite chaotic. This is a custom GPT - a local add-on which can read (but not write to) my config and take into account areas, labels, entity notes, descriptions in automations/scripts and comments in templates. It can also access a system journal where I make notes about changes, and it has an extensive prompt describing preferred practices. The point is that it is quite difficult to document your own system: the notes you can add are scattered all over the place and difficult to track. A custom AI can bring them together and summarise them.
I’ve been using the gpt-5 API, but this morning I got an email from OpenAI inviting me to try 5.1. This apparently has the new “reasoning_effort = ‘none’ mode” (whatever that means). As usual there are several flavours:
- gpt-5.1: for everyday coding tasks
- gpt-5.1-codex: for complex, long-running agentic coding
- gpt-5.1-codex-mini: for cost-efficient edits and changes
The difference between 5.1 and 5.1-codex-mini seems to be that 5.1 is much more “talky”. It will attempt to explain concepts and offer pros and cons, while codex-mini just gets on with it - it will jump straight to yaml with fewer explanations. They both produce the same results. With the same errors.
My question (if you’re still reading) is: which is better? Most people who are tempted to use AI to build Home Assistant will probably not be using the API, but different styles will still be apparent in ChatGPT, Grok and the rest, particularly if they’re using the prompt effectively. So - given that people are going to use AI anyway whether we like it or not - should we be nudging them towards services that offer explanations or ones that just produce the code?
On the face of it, you’d think that the “talky” style would be better for someone who was learning, but it can be misleading. Ask for an automation to turn the light off when no motion has been detected for 10 minutes and an AI is likely to refer to a “timer” in the trigger - which feeds into all sorts of confusion about triggers and conditions. So (setting aside for the moment the question of whether it’s correct or not) maybe just a chunk of yaml would be better?
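For comparison, here’s what that automation actually needs - no timer entity at all, just a `for:` duration on a state trigger. This is a minimal sketch; the entity names (`binary_sensor.hallway_motion`, `light.hallway`) are hypothetical placeholders for your own entities:

```yaml
# Turn the light off after 10 minutes with no motion.
# The `for:` keyword makes the state trigger fire only once
# the sensor has been "off" continuously for the stated
# duration - no timer helper is involved.
alias: "Hallway light off after 10 min of no motion"
triggers:
  - trigger: state
    entity_id: binary_sensor.hallway_motion  # hypothetical entity
    to: "off"
    for: "00:10:00"
actions:
  - action: light.turn_off
    target:
      entity_id: light.hallway  # hypothetical entity
```

An AI that narrates this as “starting a timer” isn’t wrong about the behaviour, but the word points the learner at the `timer` integration, which is a different thing entirely.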
Again, people are going to use AI whether “we” on the Forum like it or not. There must be a better way of dealing with it than shouting at them. No doubt AI will improve, but for the time being it’s not nearly as good as people think it is. It’s time to start thinking about making the users better.