HA OpenCode - An addon to plug OpenCode AI into your Home Assistant

OpenCode for Home Assistant

Hey everyone!

If anyone is interested… I am playing with creating an addon to integrate OpenCode as a tool in Home Assistant.

What is OpenCode?

OpenCode is an open source AI coding agent that helps with software engineering tasks. Think of it as having a capable developer assistant that can understand and explain code, add new features, fix bugs, and interact with your development environment. It supports 75+ LLM providers including Anthropic (Claude), OpenAI, Google, xAI, DeepSeek, local models via Ollama/LM Studio, and aggregators like OpenRouter and Together AI.

The Addon

I have gotten as far as getting the OpenCode application itself to run and it stores sessions and tokens persistently across restarts and reboots. I have also created purpose-built LSP and MCP integrations to make OpenCode more tightly integrated and well-versed in the language of Home Assistant. This gives you possibilities like:

MCP Integration

  • Natural language questions about your automations and entity landscape
  • Context-aware troubleshooting with live entity states, history, and logs
  • OpenCode can take actions in Home Assistant directly, like turning on/off lights
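As a rough sketch of what "taking actions" and reading live entity states can look like under the hood: Home Assistant exposes a REST API (`/api/states/<entity_id>`, authenticated with a long-lived access token as a Bearer header) that an MCP tool can call. The helper names and the `homeassistant.local` address below are just illustrative, not the addon's actual code:

```python
import json
from urllib.request import Request, urlopen

# Assumption: default supervised-install address; adjust for your setup.
HA_URL = "http://homeassistant.local:8123"


def build_state_request(base_url: str, entity_id: str, token: str) -> Request:
    """Build a GET request for Home Assistant's /api/states/<entity_id> endpoint."""
    return Request(
        f"{base_url}/api/states/{entity_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


def get_entity_state(base_url: str, entity_id: str, token: str) -> str:
    """Fetch the current state string of an entity (e.g. 'on' / 'off')."""
    with urlopen(build_state_request(base_url, entity_id, token)) as resp:
        return json.load(resp)["state"]
```

An MCP tool wrapping `get_entity_state` (or the matching POST to `/api/services/...` for turning lights on/off) is all the plumbing an agent needs to answer "is the kitchen light on?" with live data instead of guesses.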

LSP Integration

  • Real-time YAML syntax validation and assistance
  • Entity-aware code completion based on your Home Assistant setup

EDIT: As the feedback provided by people with real authority on the subject states (see further down), this is likely a very bad idea!!!
I have removed the link to the addon. I will keep using this myself, but I recommend everyone else to stay away! :smiling_face:


Are you using AI to create AI?
Asking for a friend…

To a large extent, yes! :slight_smile:

How do you prevent the AIs from using deprecated functions or incompatible code?

In HA, deprecations can happen from one day to the next, because HA has no control over third-party APIs, and AIs often mix code from other sources into HA that is not compatible, like Ansible.

This is a good question. I have not experienced issues like that, but that is not to say that it could not be a problem. Home Assistant, as you say, develops rapidly and LLMs have a knowledge cutoff.

Luckily, Home Assistant has become very good at warning about breaking changes well ahead of time. That helps.

Triggered by your question, I made some changes to the AGENT instructions and created some new MCP tools in an attempt to handle this better.

Changes made:

  1. New MCP tools for documentation awareness:

    • get_integration_docs - Fetches live documentation from the Home Assistant website before writing configuration
    • get_breaking_changes - Checks for breaking changes that may affect your HA version
    • check_config_syntax - Validates YAML for deprecated patterns (like the old platform: template syntax)
  2. Updated LLM instructions to follow a “check docs first” workflow - the LLM is now guided to fetch current documentation and verify syntax before suggesting any configuration changes

The goal is to have the LLM verify its suggestions against current documentation rather than relying solely on training data that may be outdated.
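To make the `check_config_syntax` idea concrete, here is a minimal sketch of how such a scanner could work. The function name mirrors the tool above, but the implementation is purely illustrative and only knows one real deprecation: the legacy `platform: template` sensor syntax, which was replaced by the modern top-level `template:` integration:

```python
import re

# Illustrative deprecation table; the real tool would carry many more entries.
DEPRECATED = [
    (re.compile(r"platform:\s*template"),
     "legacy 'platform: template' sensor; use the modern top-level 'template:' block"),
]


def check_config_syntax(yaml_text: str) -> list[str]:
    """Return one warning per line that matches a known-deprecated pattern."""
    warnings = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        for pattern, message in DEPRECATED:
            if pattern.search(line):
                warnings.append(f"line {lineno}: {message}")
    return warnings


# Legacy template sensor syntax - should be flagged.
legacy = """
sensor:
  - platform: template
    sensors:
      kitchen_temp:
        value_template: "{{ states('sensor.raw_temp') | float(0) }}"
"""

# Modern template integration syntax - should pass clean.
modern = """
template:
  - sensor:
      - name: "Kitchen temp"
        state: "{{ states('sensor.raw_temp') | float(0) }}"
"""
```

Feeding the LLM's proposed YAML through a check like this before it reaches the user is cheap, and it catches exactly the class of "trained on old examples" mistakes discussed above.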

I have now pushed version 1.0.12, which includes these changes and some other MCP bugfixes.

That only covers the deprecations that are decided in-house.
Third-party deprecations can occur from one day to the next at times.

The problem with AI is that it needs to pull knowledge from many sources to be able to work, but it cannot be allowed to do that with HA, because the other sources are not compatible with HA.
Forcing the AI to use ONLY HA documentation as a source makes it extremely dumb, and then it cannot work. Allowing it to pull from other sources means errors in syntax, context and results.


I think you have to distinguish between the knowledge pooled into the model's training and the documentation and other sources you steer it to use when working on tasks.
While I see the point, I don’t think the problem you’re referring to is a (big) problem in reality. Not for “simple” things like HA config and automations. That is structurally (syntax/language) quite simple and is part of the training data. It is also quite generic and not specific to HA.

But please take the addon for a spin and see what you think after trying it out. OpenCode comes (for now at least) with a free-tier model you can use to test. It is not as good as the Anthropic Sonnet/Opus models I am using, but it gives you a glimpse into what can be done. :slight_smile:

My day job is AI. I sell it, I research it, I use seven different models on a daily basis, and I write code for HA all day, every day.

Let me stop you and everyone else who reads this right here. Yes, full stop. This is dead wrong, and everyone needs to understand this point. It’s exactly why HA coding with an LLM is so bad and will continue to get worse.

In fact, it is the single largest problem with writing code for HA with an LLM.

This is the reason your moderators are so damn hard on everyone for posting AI content.

So while I applaud your enthusiasm, I won’t use it. Because right now no code bot, including Claude Code, does HA Jinja correctly without about three thousand lines of corrections to its misunderstanding of Jinja2 vs. Python vs. HA’s version of Jinja.

I continue to stand on the fact that HA code with an LLM will not improve until someone makes a specific HA-type benchmark score the vendors can publish and tune for… DO THAT, and they solve the problem for you while chasing the number. :slight_smile:

That is the problem! It is part of the training data, but not the ONLY training data.
Ansible and HA both use Jinja2, but they are not compatible, and the AI cannot learn this difference, because the sites often do not state which dialect their code examples relate to.
Even if you tell the AI not to use Ansible, it will still do it, because it does not know it is doing it.
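To illustrate the dialect clash: both systems embed Jinja2, but each injects its own helpers, so a template that is valid in one fails in the other. The helper names below are real (`states`/`is_state` exist only in HA templates, `lookup`/`hostvars` only in Ansible); the toy detector around them is just a sketch, not how any real tool works:

```python
import re

# Globals each system adds on top of plain Jinja2.
HA_ONLY = {"states", "is_state", "state_attr"}        # HA template helpers
ANSIBLE_ONLY = {"lookup", "hostvars", "ansible_facts"}  # Ansible-side names


def foreign_names(template: str, foreign: set[str]) -> set[str]:
    """Return identifiers from the 'wrong' dialect used in a template string."""
    used = set(re.findall(r"\b([a-zA-Z_]\w*)\s*\(", template))
    used |= set(re.findall(r"\b(hostvars|ansible_facts)\b", template))
    return used & foreign


ha_template = "{{ states('sensor.kitchen_temp') | float(0) }}"
mixed_template = "{{ lookup('env', 'HOME') }}"  # Ansible-ism an LLM might emit

print(foreign_names(ha_template, ANSIBLE_ONLY))     # set() - fine for HA
print(foreign_names(mixed_template, ANSIBLE_ONLY))  # {'lookup'} - invalid in HA
```

The second template renders fine in an Ansible playbook and throws an undefined-function error in HA, yet both look like perfectly ordinary Jinja2 to a model trained on mixed sources.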

You can make a perfect addon, but it will still fail, because the training of the AI is the flaw.


Yes. If you build it as a black box, the black box needs to correct the misunderstandings.

If the black box has no corrections for the issues stated (and Wally is 100% correct here), it’s doomed to fail.

In fact, I bet any AI trained before 1 Jan 2026 blows up in six months, when we drop the classic-style template format at the end of its deprecation. It’s gonna get wild when everyone’s AI generates bad templates by default…

Hi guys!
Thanks for the feedback!

As I have stated, I am not an expert, far from it actually!
I trust every word you say, and in light of that I will keep using this myself, but remove links in this thread and edit the previous posts.

In my testing, I have seen no evidence of the issues you point to. Still, I’d rather not be the reason someone ends up with heaps of issues!

I will leave the thread up for future reference, but that’s it.

No one should use this! :smiling_face:

I love your enthusiasm and the effort, so I hope you find something else to contribute with.
The idea here was noble too, but the building blocks you have available are just not up to it, and none of the providers of the AI services are ready to spend the human resources needed to sort this out.