TLDR: I’ve put together a pretty serviceable Anthropic skill, hosted on my GitHub, that I use for vibe coding. It standardizes my design principles, codifies the things that work, and helps get better code out of an LLM. It’s tailored to my way of structuring things, and it has cut the time it takes me to get good-quality code.
I’ve been using HA for about 3 years but until about a year ago was limited to what I could build in the GUI.
My limited abilities frustrated me and my desire for more complex functionality. I turned to ChatGPT and later Claude. I started by tinkering with the logic of how to structure more complex automations in the GUI, and then increasingly turned to creating snippets of YAML/Jinja.
I quickly realized that LLMs got a lot right in concept and wrong in implementation, sometimes horribly wrong. I started documenting the errors and the prompts that I used to work around them. I also collected snippets of yaml and jinja from the LLM output and online forums that worked for what I was trying to do. I learned so much about HA under the hood along the way.
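As one illustration of the right-in-concept, wrong-in-implementation pattern: a common LLM mistake in HA templates is omitting the default argument on filters like `float`, which modern Home Assistant requires and which causes errors whenever the source entity is unavailable. A minimal sketch (the entity and sensor names here are hypothetical):

```yaml
# LLM output often looks like this, which errors if the entity
# is 'unknown' or 'unavailable':
#   state: "{{ states('sensor.outdoor_temp') | float }}"
#
# Supplying a default makes the template resilient:
template:
  - sensor:
      - name: "Outdoor Temp Rounded"
        unit_of_measurement: "°C"
        state: "{{ states('sensor.outdoor_temp') | float(0) | round(1) }}"
```

Documenting small fixes like this is exactly the kind of thing the skill codifies.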
The more I used AI to vibe code, the more I documented. When skill functionality was released a few months ago, I converted a bunch of documents into the earliest version of this skill and have kept developing it the more I’ve used it.
I know using AI can be a very contentious topic in these forums, but for a non-coder who wanted a whole lot more out of HA this skill has helped me immensely. I now have much of the complex functionality that I wanted and am more confident in the quality of the code in my setup.
It is a great thing that you have made progress, but have you also learned from it? Do you understand the YAML now, what it does and where it went wrong? That should be the main goal of this.
Keeping your LLM up to date will be your biggest challenge. Teach it from actual vendor documentation, not forums such as these, which are full of examples riddled with errors and often omit the correct solution. Posters here are offered plainly wrong advice embarrassingly often, frustratingly so.
Point your LLM at release notes, and monitor for updates to documentation as primary resources.
Given the fast-paced advances and changes in Home Assistant, even daily update refreshes may still leave you struggling.
Never fall for the general confusion the AI industry has sown that LLM = AI. Yes, it has raised billions in startup money, but an LLM that slurps up rubbish and presents it back to you carefully reformatted and spell-checked is a far cry from an AI engine that analyses many sources, notes their age and relevance, makes value judgements, and offers advice based on an ever-expanding knowledge base.
Use it as a tool to assist, not as a shortcut taken out of laziness and a simplistic trust that it is always right. We learned with Google search, often the hard way, never to click on the first link offered. The same thing is happening with the AI industry: the bubble of breathless euphoria will burst and be replaced by a healthy dose of skepticism as you realise there is no Holy Grail, no pot of gold at the end of the rainbow, just the never-ending hint that it might exist, as long as you keep handing over lots of money.