HA OpenCode - An addon to plug opencode AI into your Home Assistant

OpenCode for Home Assistant

Hey everyone!

If anyone is interested… I am playing with creating an addon to integrate OpenCode as a tool in Home Assistant.

What is OpenCode?

OpenCode is an open source AI coding agent that helps with software engineering tasks. Think of it as having a capable developer assistant that can understand and explain code, add new features, fix bugs, and interact with your development environment. It supports 75+ LLM providers including Anthropic (Claude), OpenAI, Google, xAI, DeepSeek, local models via Ollama/LM Studio, and aggregators like OpenRouter and Together AI.

The Addon

I have gotten as far as getting the OpenCode application itself to run and it stores sessions and tokens persistently across restarts and reboots. I have also created purpose-built LSP and MCP integrations to make OpenCode more tightly integrated and well-versed in the language of Home Assistant. This gives you possibilities like:

MCP Integration

  • Natural language questions about your automations and entity landscape
  • Context-aware troubleshooting with live entity states, history, and logs
  • OpenCode can take actions in Home Assistant directly, like turning on/off lights

LSP Integration

  • Real-time YAML syntax validation and assistance
  • Entity-aware code completion based on your Home Assistant setup

EDIT: As feedback from an authority on the subject has stated (see further down), this is likely a very bad idea!!!
I have removed the link to the addon. I will use this myself, but recommend everyone else to stay away! :smiling_face:

EDIT 2:
While I will leave the above warning in place, underlining that you as the user are the one responsible for creating a valid and working config, based on the input from @balloob, I am adding back the link to the project. I would still tag this as experimental.
Any contributions to making this tool better, more solid and safer to use inside Home Assistant are very welcome.
Use the tool, don’t use the tool, test the tool… I’ll leave it up to you guys…

Enjoy: GitHub - magnusoverli/ha_opencode

5 Likes

Are you using AI to create AI?
Asking for a friend…

To a large extent, yes! :slight_smile:

How do you prevent the AIs from using deprecated functions or incompatible code?

In HA, deprecations can happen from one day to the next, because HA has no control over third-party APIs, and AIs often mix code from other sources into HA that is not compatible, like Ansible.

1 Like

This is a good question. I have not experienced issues like that, but that is not to say that it could not be a problem. Home Assistant, as you say, develops rapidly and LLMs have a knowledge cutoff.

Luckily, Home Assistant has become very good at warning about deprecations well ahead of time. That helps.

Triggered by your question, I made some changes to the AGENT instructions and created some new MCP tools in an attempt to handle this better.

Changes made:

  1. New MCP tools for documentation awareness:

    • get_integration_docs - fetches live documentation from the Home Assistant website before writing configuration
    • get_breaking_changes - checks for breaking changes that may affect your HA version
    • check_config_syntax - validates YAML for deprecated patterns (like the old platform: template syntax)
  2. Updated LLM instructions to follow a “check docs first” workflow - the LLM is now guided to fetch current documentation and verify syntax before suggesting any configuration changes.

The goal is to have the LLM verify its suggestions against current documentation rather than relying solely on training data that may be outdated.
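To make the deprecated-pattern idea concrete, here is the kind of transformation the syntax check is aimed at (a sketch only; the entity and sensor names are made up, and the exact rules the tool applies may differ). The old-style template sensor versus the modern template: integration syntax:

```yaml
# Deprecated old-style template sensor - the pattern a syntax check should flag
sensor:
  - platform: template
    sensors:
      garage_door_state:
        value_template: "{{ states('binary_sensor.garage_door') }}"

# Modern template: integration syntax - what the LLM should produce instead
template:
  - sensor:
      - name: "Garage door state"
        state: "{{ states('binary_sensor.garage_door') }}"
```

Both render the same value; only the configuration shape changed, which is exactly the kind of drift an LLM trained on older docs keeps reproducing.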

I have pushed version 1.0.12 now, which includes these changes and some other MCP bugfixes.

Only for the deprecations that are decided in-house.
Third-party deprecations occur from day to day at times.

The problem with AI is that it needs to pull knowledge from many sources to work, but it cannot be allowed to do that with HA, because the other sources are not compatible with HA.
Forcing the AI to use ONLY HA documentation as a source makes it extremely dumb, and it then cannot work. Allowing it to pull from other sources means errors in syntax, context and results.

1 Like

I think you have to distinguish between the knowledge pooled into the model during training and the documentation and other sources you tweak it to use when working on tasks.
While I see the point, I don’t think the problem you’re referring to is a (big) problem in reality. Not for “simple” things like HA config and automations. That is structurally (syntax/language) quite simple and is part of the training data. This is also quite generic and not specific to HA.

But please take the addon for a spin and test and see what you feel after trying it out. Opencode comes (for now at least) with a free tier model you can use to test. It is not as good as the Anthropic Sonnet/Opus models I am using, but it gives you a glimpse into what can be done. :slight_smile:

My day job is AI. I sell it, I research it, I use seven different models on a daily basis, and I write code for HA all day, every day.

Let me stop you and everyone else who reads this right here. Yes, full stop. This is dead wrong, and everyone needs to understand this point. It’s exactly why HA coding with an LLM is so bad and will continue to get worse.

In fact, it is the single largest problem with writing code for HA with an LLM.

This is the reason your moderators are so damn hard on everyone for posting AI content.

So while I applaud your enthusiasm: won’t use. Because right now no code bot, including Claude Code, does HA Jinja correctly without about three thousand lines of corrections to its misunderstanding of Jinja2 v. Python v. HA’s version of Jinja.
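To illustrate one of those misunderstandings (a sketch; the entity names are made up, but the scoping behavior is standard Jinja2): a Python-minded LLM will mutate a variable inside a for loop and the change silently vanishes, because Jinja2 for-loops have their own scope. HA templates need namespace() for that:

```yaml
template:
  - sensor:
      - name: "Open windows"
        # Broken, Python-style - 'total' resets each iteration, so the
        # template always yields 0:
        #   {% set total = 0 %}
        #   {% for s in states.binary_sensor %}
        #     {% set total = total + 1 %}
        #   {% endfor %}
        # Working - namespace() carries mutable state across iterations:
        state: >-
          {% set ns = namespace(total=0) %}
          {% for s in states.binary_sensor %}
            {% if s.state == 'on' %}{% set ns.total = ns.total + 1 %}{% endif %}
          {% endfor %}
          {{ ns.total }}
```

The broken form is syntactically valid and passes config check, which is exactly why it keeps slipping through.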

I continue to stand on the fact that HA code from an LLM will not improve until someone makes a specific HA-type benchmark score the vendors can publish and tune for… DO THAT and they solve the problem for you by chasing the number. :slight_smile:

That is the problem! It is part of the training data, and not the ONLY training data.
Ansible and HA both use Jinja2, but they are not compatible, and the AI cannot learn this difference, because the sites often do not state which dialect their code examples relate to.
Even if you tell the AI not to use Ansible, it will still do it, because it does not know that it is doing it.

You can make a perfect addon, but it will still fail, because the training of the AI is the flaw.

1 Like

Yes. If you build it as a black box. The black box needs to correct the misunderstanding.

If the black box has no corrections for the issues stated (and Wally is 100% correct here), it’s doomed to fail.

In fact, any AI trained before 1 Jan 2026 will, I bet, blow up in six months when we drop the classic-style template format at the end of its deprecation period. It’s gonna get wild when everyone’s AI generates bad templates by default…

Hi guys!
Thanks for the feedback!

As I have stated, I am not an expert, far from it actually!
I trust every word you say, and in light of that I will keep using this myself, but remove links in this thread and edit the previous posts.

In my testing, I have seen no evidence of the issues you point to; still, I’d like not to be the reason someone ends up with heaps of issues!

I will leave the thread up, for future reference, but that’s it.

No one should use this! :smiling_face:

I love your enthusiasm and the effort, so I hope you find something else to contribute with.
The idea here was also noble, but the building blocks you have available are just not up to it, and none of the providers of the AI services are ready to spend the human resources needed to sort this out.

I disagree with the sentiment in this topic. AI and Home Assistant are great. Config validation will catch invalid templates, and things like ha-mcp can validate them before committing. I would recommend adding back the link. Let the community decide if they want to use it. I definitely would.

4 Likes

Both AI and Home Assistant are rapidly evolving. People confusing AI and LLMs is what is going to cause the most grief.
If you wield AI as a tool like @balloob is suggesting, correctly focused with relevant contributing knowledge, then the latest tools are going to be of help. If you lazily or naively repost AI slop from LLMs that just web-scrape user forums and social media posts, documenting the problems but not the solutions, the LLMs will faithfully process the data and regurgitate it, carefully formatted with correct language constructs, as this is what they are designed to do; but the results will be outright wrong, as we see daily.

People confusing the two blithely trust the slop and the forums here end up going down the (now LLM assisted) gurgler, faster and faster.

Yes, people are waking up, realising the difference, rejecting the LLM slop and embracing the functionality of AI to provide enhanced capabilities, but not complete replacement. I think that time may come, but it is not here yet. The final signoff still remains something you need to do, to take responsibility for the code you contribute.

Yes, we are getting the quantity, where every search produces volumes of reading material, but what we desperately need is quality, where the results are useful. The correction in the AI industry will be vast once that is realised - it is not far off, and will be brutal. The pushback is already strong.

‘Showing your working’ where you document your AI interaction, the ‘chat’, your sources, the training model, as a normal and added part of your code release so that another person can replicate your results may become the norm, before people will start to blindly trust openly acknowledged AI sourced code.

Should we be demanding that? Getting people to think a little more deeply as they now have to disclose their sources publicly?

1 Like

That part is the key point!
HA is not just expanding with new features and functions; it also deprecates, renames and changes many features and functions, and so do third parties.

The information in the documentation on HA’s own website is simply not enough to make it useful, and the information from other sources can and will go stale, so keeping such an AI’s knowledge current will be constant work: finding new and updated information while removing obsolete and wrong entries.
The more knowledge you feed it, the better such an AI will be, but the bigger the task of keeping its contributing knowledge up to date. In my opinion, the amount needed to make it useful in the first place is big enough that a single person or small group will not be able to do it.

OpenCode can just call config validation, read the errors and iterate until it passes. Your HA will never fail to start. Yeah, automations can trigger incorrectly, but that’s the same if you write them yourself. Testing and validation in the real world remain necessary.

2 Likes

It can, B… And I’ll be surprised as hell when people actually do…

Sorry. Not letting an LLM have unfettered access to my config. (Not running openclaw in prod, and not running with scissors either.) It’s more rigid than the Windows registry, and one wrong move and you don’t boot. I won’t let an LLM into a registry either…

Veronica, my ‘dev assistant’ says this is why she gets stuff wrong…

Too fragile. Combined with too much incorrect data trained into the models… Deploy parallel and move the config over, MAYBE. But because of the fragility here, and how fast I’ve seen LLMs creatively trash that file: nope. We’re not anywhere close.

B, what would be cool though, because I do agree eventually we will… What it will take is a CONCERTED training effort to get the big four to train on a specific set of rules (the “big, but HA does it this way” sandbox stuff that LLMs consistently get wrong) and a benchmark that says how well an agent drives HA. You convince them to adopt that benchmark and leaderboard it in all the arenas, and they will start competing on better HA automation.

I’ve mentioned it to some folks at OAI and Anthropic. It’ll probably have to start in the community, because we know what does and doesn’t work… Get a verifiable benchmark with a verifiable score that tests actual real-world perf and known HA coding errors (such as old-style templates, mutating a var inside a loop with Java constructs, etc.), and then you’ll start to see real traction. Pair it with official markdown skills for HA Jinja and the “do and do not” of HA. Until then, guys like me who do this for a living will keep warning people it’s a bad idea. Give it good official instructions, however… With guardrails…

Edit:
Because I don’t want to shoot off my mouth without ALSO offering a solution: Friday's Party: Creating a Private, Agentic AI using Voice Assistant tools - #315 by NathanCu

True, and it will then be as stupid as a brick and of no use.

Based on the feedback, I have added back the link. The tool has evolved since the first post. I am more than happy for any contributions that make the tool even better and safer to use in Home Assistant… :slight_smile:

3 Likes

FYI, Home Assistant founder @balloob did a live demo of this OpenCode AI app/addon for Home Assistant during yesterday’s Home Assistant 2026.3 release party video. Check it out starting at around the 1:43:04 mark here → https://www.youtube.com/live/AQOzKdvO97s?t=6183&is=3At0YULBOtS9j_Th

There he also showed/demoed this OpenCode app using a new experimental CLI utility called Home Assistant Builder (hab) that he and @magnus.overli are working on, which could be used to complement MCP and make AI more reliable at some tasks, as it is designed for LLMs to help them build and manage Home Assistant and ESPHome configurations → GitHub - balloob/home-assistant-build-cli