Warning - Caveat when using AI to generate HASS code

I recently started using AI to help me generate scripts and automation code (Gemini and Claude in particular). However, I came across an odd behaviour in HASS automation code generated by AI (my guess is that it may have something to do with changes made in core 2025.10.x).

The AIs that I've tried thus far are sometimes hell-bent on using older code syntax and, in particular, on using "service:" instead of "action:". To my knowledge, this was not causing many issues under 2025.9.x, but under 2025.10.x it wreaked havoc and was quite hard to debug.

Here is what happens when you save code using service: instead of action: in HASS. Take this snippet, for example (for testing):

alias: Task Manager - Daily Check All Tasks
id: task_manager_daily_check_all_tasks
description: Check all tasks daily at 9:00 AM and send notifications for due tasks
trigger:
  - platform: time
    at: "09:00:00"
condition: []
action:
  - repeat:
      count: 10
      sequence:
        - service: notify.mobile_app_iphone
          data:
            title: "Task Check"
            message: "Checking task {{ repeat.index }}"
mode: single

You get an error message indicating the automation was saved but could not be set up (without any clue as to why). Then, whether you choose to delete it from the three-dots menu or not, the automation remains in the automations.yaml file and does not show up in the automations UI.
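For reference, here is the same snippet rewritten with the current syntax: the plural `triggers`/`conditions`/`actions` keys, `trigger:` instead of `platform:` inside the trigger item, and `action:` in place of `service:` (these renames landed around core 2024.8-2024.10; the notify target is still the placeholder from the example above, so adjust it to your own device):

```yaml
alias: Task Manager - Daily Check All Tasks
id: task_manager_daily_check_all_tasks
description: Check all tasks daily at 9:00 AM and send notifications for due tasks
triggers:
  - trigger: time
    at: "09:00:00"
conditions: []
actions:
  - repeat:
      count: 10
      sequence:
        - action: notify.mobile_app_iphone  # "action:" replaces the legacy "service:"
          data:
            title: "Task Check"
            message: "Checking task {{ repeat.index }}"
mode: single
```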

Initially, I didn't know that, and I kept trying to fix the same automation over and over again using the same ID… and the failed attempts kept piling up in the automations file, unbeknownst to me (still with nothing showing up in the UI). Then I started receiving duplicate ID errors…
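To illustrate (this is a reconstruction, not my actual file), after a couple of failed save attempts automations.yaml ends up containing something like this, which is what produces the duplicate ID errors:

```yaml
- id: task_manager_daily_check_all_tasks  # first failed attempt, invisible in the UI
  alias: Task Manager - Daily Check All Tasks
  # (rest of the broken automation)
- id: task_manager_daily_check_all_tasks  # second attempt, same id, hence the duplicate ID error
  alias: Task Manager - Daily Check All Tasks
  # (rest of the broken automation)
```

Removing the stale entries by hand and reloading automations is one way to clean this up.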

After that, upon restarting, the automations did appear in the UI but were listed as “unavailable”.

It may seem benign, but it took me a long time to figure all that out because my original automation code was quite lengthy and I had no clue what was causing HASS to fail to set it up.

The takeaways from that are:

  1. You should be very careful when using AI-generated code and make sure you review it thoroughly before trying to save it.
  2. HASS should update its code-validation algorithms to filter out errors like that and/or produce more meaningful error messages. And, by the way, that's not the only code syntax error that will still allow you to save your automation but cause HASS to fail to set it up.
2 Likes

Try Always there instead.

They are trained on old code. The best you can hope for is 6 months out of date…
If you want to have them reference new code, you have to specifically give the LLM sources to exclusively pull from, and your request will be several pages long.
Bottom line, don't use LLMs to write your YAML on something like Home Assistant that changes the rules every month. It will always give you old, broken code…

@nanomonster
I suggest documenting that in an issue so someone can duplicate it and fix it…

1 Like

@Sir_Goodenough Thanks for the links! Maybe I will post the last part as an issue as you suggest…

As for using AI to help with writing code, I find it's a tremendous help (in spite of the quirks, caveats and dated syntax…). I did write a lot of code with the help of AI in a short amount of time, even though I'm basically new to YAML and Python, and the amount of time it saved me, as opposed to writing the whole thing from scratch, is simply staggering.

So my advice is: do yourself a favor and USE IT, it's a tremendous help! But do keep your eye on the prize and be aware of the tool's limitations (which is why I posted here in the first place…)

My advice is train yourself in whatever manner you like, but actually understand what you are doing.

No one helping here likes debugging AI-generated code. It's usually easy to spot and terrible to debug, because the defects look good language-wise: it 'looks' like it should work, but often doesn't. Your example, for example…

It is also against the rules here to help others with AI generated code, for that exact reason.
So bottom line: if the AI helps you, great. Just be sure you know what the heck it's doing so you can fix it later. It's a MUCH better idea to write something yourself, then ask the AI about your code, giving it strict instructions on what to use as a source, as mentioned in my references above.

In the end, best practice (what I use) is: write something you are not sure of in the HA UI editor. Then go into automations.yaml, pull it out to another location, and finish the edit with the code server add-on… (or leave it in the UI editor space, your call)

1 Like

If I use AI for any code, I always always always tell it to “review Home Assistant’s Official Documentation” first and again at the end to verify.

Well it’s easy. My goal is not to be a programmer, it is to create automations and scripts that work for me. As for training, I train myself using AI: simple as that. Here’s my method in a nutshell.

  1. I start a conversation with the AI by asking for help with an automation concept. It can be very general, or it can be defined within a framework with a context and constraints. At this point, the AI usually makes several conceptual suggestions and I keep the conversation going, either by picking one option or by asking the AI to tweak the options in one way or another.
  2. Once my mind is set on an implementation method, the AI will generate some code or pseudo-code actually doing something that I can review.
  3. Upon reviewing and understanding the method, I suggest changes, modifications, tweaks, whatever, to better accomplish what I set out to do, and the AI implements these changes and often makes suggestions on how to improve it. Note that at this point I haven't entered any code in HASS yet.
  4. Once I'm satisfied that the code is viable, I review it to catch "errors" that I now know from experience are introduced by AI into its code, such as using service instead of action, ill-defined triggers, and so on.
  5. At this point, I start implementing the concept in HASS such as adding configuration file entries, defining helpers, creating automations and scripts, etc.
  6. I start testing and debugging it, going back and forth with the AI over a number of code revisions until I get something that works as intended and does not cause significant issues.
  7. After that, I keep it sandboxed for a while and use it on a regular basis to find out if any bugs are popping up. If I do find bugs, I go back to my conversation with the AI (which is saved) and I keep refining until I’m happy with the result and that’s the end of the process…
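To make step 4 concrete, here are the legacy patterns I see AI introduce most often, next to their current equivalents (per the syntax changes in core 2024.8/2024.10; the entity ids are placeholders):

```yaml
# Legacy syntax that AI models tend to emit:
trigger:                     # singular key
  - platform: state          # "platform:" is the legacy trigger selector
    entity_id: sensor.example
action:
  - service: light.turn_on   # "service:" is the legacy call key
    target:
      entity_id: light.example

# Current equivalents:
triggers:
  - trigger: state
    entity_id: sensor.example
actions:
  - action: light.turn_on
    target:
      entity_id: light.example
```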

Each time, I learn more about YAML, Python, Jinja, and what works in HASS and what doesn't.

Mind you, I said I'm new to YAML and Python, but I'm not new to programming. I have used a lot of programming languages in my work and hobbies, including some microprocessor assembly languages, PASCAL, FORTRAN, BASIC, C++, Excel macros, Arduino, etc. (which might give you an idea of my age… :stuck_out_tongue: )

2 Likes

Ask your AI if it will follow up links in the documentation you point it to. Mine said:

If you give me a URL, I can retrieve and analyze the content found at that specific web page. However, I cannot automatically "follow" or extract information from links that appear on that page or explore beyond the original URL unless you explicitly provide additional URLs for me to examine. This means I do not crawl linked pages or recursively fetch information from every link present on the website you provide; I can only analyze the directly supplied page's content. If you want information from multiple linked pages, you need to provide each specific URL you wish to be analyzed.

As @Sir_Goodenough says, prompts need to be pages long. And you have to know the documentation in order to tell it where to look. But I love that it can use a semicolon properly. :grin:

There’s an interesting article here about the effect use of AI has on one’s brain:

Apparently students who use AI to write essays find it much harder to remember what they’ve written afterwards… :grin:

3 Likes

Unless you are a developer, there is no programming in HA and no programmers.
There are people (with whatever help they like to use) using HA-Jinja and HA-YAML scripting to tell the backend programs what to do.

I don’t see any problem with your approach.

The problem is the people blindly believing the AI is right, or the people complaining that HA has a bug because their AI-generated code no workie; those are the ones we need to convince to use their own brains a bit.

3 Likes

I laughed when I read this earlier this week…

Anthropic's own research shows that as few as 250 samples can irrevocably poison even relatively large models, and there are literally tens of thousands of "poison" HA YAML and Jinja samples on this forum alone. Add to that what can be found on Reddit, Facebook, and all the old blog posts and tutorials.

4 Likes

This is THE SINGLE BIGGEST ISSUE with HA yaml and LLMs right now.

There are simply more bad examples than good. Yeah, what's the saying: even a blind squirrel finds a nut now and then? That's where the LLMs are. If it got it right, good for you! Don't expect it to keep going.

There's someone's thread somewhere where they gave an LLM access to their config directory and plan to let it write code for them.

Me: hope you've got a good unencrypted backup there… Watch what happens the first time it blows up a template sensor in your config…

Will it get there? Eventually, yes. There will be a tipping point where models start validating and checking themselves. Right now isn't it. We're more in smart-middle-school-student territory: they kinda follow instructions if you're clear and concise, have guardrails, and redirect often.

"I'll let my LLM debug and unit test for me."

Author: HA! No, all that is my own crappy code, thanks.

Also as a reminder, you are free to use LLMs to assist your own configuration but do not use it to help others or you will be banned.

I understand where this rule comes from, but I can't say I approve of it. The rationale of this rule is that you can ban a user for using AI to give advice (regardless of whether it's good or bad, and without considering whether the user may have reviewed and tested the AI input himself)?

Then, you mention that the justification for that is that AI sometimes gives bad advice…

… but that's exactly what regular non-AI users do: sometimes they give bad advice, but that's no grounds for them to be banned, is it?

So it looks to me like there's a fear of, or aversion to, AI, and if that's the case, I find that a bit deplorable.

As far as I’m concerned, banning a member for trying to help another member with a problem is nonsense regardless of how you look at it…

What you are neglecting to take into account is the frequency of incorrect answers. Regular users who donate their time to help others got absolutely overwhelmed by LLM bullshit. So the rule was introduced.

As the explanation says, if LLMs ever improve to the point where they are not constantly and confidently bullshitting users, then the rule may change.

However currently LLMs are producing complete garbage (due to simplistic prompts) that is only going to poison future training. So a self reinforcing loop of bullshit.

Fortunately your opinion carries little weight in this matter.

4 Likes

Actually, if the LLMs ever get to that point, this forum will become unnecessary.

For the record, I’m not holding my breath.

2 Likes

A recent study (take it as you wish) apparently showed that experienced developers / coders took 19% longer on tasks when using AI.

This, apparently, is caused by the (experienced) developers having to spend extra time correcting the code suggested by the AI tool.

Of interest is that the coders thought they were 20% more productive, yet when presented with the findings, still insisted that they were more productive when using AI, even though they were not.

I am inclined to assume that the same goes for HA users, who think that correcting AI generated code / configuration is “better” than learning how to do it right in the first place.

3 Likes

I think most HA users are not coding much and therefore don't bother to learn it. And in my case, even if I learn it, I forget how to do it after a few months. So if I want to code something difficult, I ask AI, and most of the time something comes out that I can use after some adjustments. But I don't post AI answers here :sunglasses:

2 Likes

The first part of your answer was respectful and I respect your opinion even though mine differs.

This part was not, especially for a moderator.

2 Likes

I’m 100% confident that this is true as it relates to experienced (professional probably) coders who are probably long set in their way of coding and “hopefully” uphold higher standards than hobbyists like me.

I’m pretty sure though that you will not find any study demonstrating that AI slows down hobby-type coders like me…

Good point, and one that gets overlooked in most threads like this. Most people just want the lights to go on and off.

Add to that the fact that a great many (most?) people use the UI and only need yaml when they post here, and you’ve got a very odd training set for AI. I almost feel sorry for it. :grin:

Nope, it's totally because it takes longer to fix BS. And when your boss is demanding you use the system they paid for, you use it.

No, I will just demonstrate that it leads laypeople down a bad path instead.

Look, I'm not anti-AI at all. In fact, it's my day job. I'm very VERY against people thinking it's the end-all, be-all. It is a guide. It is to be treated as an unauthoritative resource. That's all. If you find success, I'm happy for you.

Fact remains, most people I speak to (usually about 40-60 on any given day) have zero clue how to use an LLM effectively and start out doing very bad things… including trusting with no verification. That's flat-out #1: fire and forget, without checking. Then when problems arise they have ZERO clue how to fix it.

If you use it, don't believe anything it says about HA YAML; instead, use the generated code to go through line by line and determine how it works… or doesn't. The second part, learning how it works, is what most people skip out of sheer laziness… It's also the most important.

1 Like