Want to help others? Leave your AI at the door

Today we’re introducing a new rule for the Home Assistant community: it’s no longer allowed to use ChatGPT or other AI systems to help others.

Although these systems generate elaborate and well-structured answers, they are wrong. Often they are wrong in subtle ways that only someone with the right expertise can detect. And those people wouldn’t need AI to write that answer.

We appreciate that people want to help others, but if you don’t have the knowledge, leave it to someone else. Giving an incorrect answer makes things worse: it wastes everybody’s time, including that of the person asking the question, and trying out an answer that doesn’t work is frustrating because it makes you think you’re doing something wrong.

Using an AI system to generate an answer without attributing it to AI will result in a ban. If you do provide attribution, we will delete your post and issue a warning. This also means that suggesting someone “ask ChatGPT” is not an acceptable response.

If AI systems get better, we will revisit this rule.

This is a companion discussion topic for the original entry at https://www.home-assistant.io/blog/2023/01/23/help-others-leave-ai-at-the-dor/

ChatGPT is great and amazing, etc., but I have definitely been led astray by some code it suggested. I’m actually really curious why it gets things wrong; I wonder what source docs it is referring to that contain incorrect information.


From a moderator’s standpoint, we’ve had to deal with multiple accounts in the past week using ChatGPT to answer questions. I’ve yet to see anyone provide a reasonably correct answer using ChatGPT.


The documentation it does have is a year and a half old, but it’s “intelligently extrapolating” from that to create random non-existent services, functions, and so on, resulting in gibberish.


Yes, earlier we had somebody using it who was given a sensor.update service call to change the state of a sensor entity to a given value.

They were confused why HA was throwing an error…
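For what it’s worth, sensor.update does not exist. Assuming current Home Assistant service names, the closest real service is homeassistant.update_entity, which asks an entity to refresh itself; it does not let you set a state to an arbitrary value. A minimal sketch (the entity name is made up):

```yaml
# Asks the entity's integration to poll for a fresh value.
# It does NOT write an arbitrary state to the sensor.
service: homeassistant.update_entity
target:
  entity_id: sensor.outdoor_temperature  # hypothetical entity
```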


I can instantly tell an automation was created by ChatGPT because it misuses the platform key just about everywhere. It assumes it is a universal key and tries to use it in services, conditions, and outside platform integrations, making literally every automation it generates wrong.

Not to mention the made-up services based on what it assumes HA can do. I’ve seen… ping, states.set, states.get, binary_sensor.turn_on, binary_sensor.turn_off
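For context, in Home Assistant’s classic automation schema the platform key is only valid inside a trigger; conditions take a condition key and actions take a service key. A minimal, hand-checked sketch of the correct shape (entity names are made up):

```yaml
automation:
  - alias: "Hallway light at sunset"
    trigger:
      - platform: sun          # 'platform' belongs only in triggers
        event: sunset
    condition:
      - condition: state       # conditions use 'condition', not 'platform'
        entity_id: binary_sensor.someone_home   # hypothetical entity
        state: "on"
    action:
      - service: light.turn_on # actions use 'service', not 'platform'
        target:
          entity_id: light.hallway             # hypothetical entity
```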


It’s especially funny when you ask a vague question and it starts coming up with things that don’t exist. The other day I saw a popular reddit post about a certain template filter. People were amazed ChatGPT could come up with something like this and it got many upvotes, but when I tested the code it didn’t even work. It’s literally hallucinating bullshit.
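As a concrete illustration of why generated templates need testing: Home Assistant templates are Jinja2, and only the documented filters exist; anything a chatbot invents beyond them will throw an error. The quickest check is to paste the template into Developer Tools → Template. A sketch using real, documented filters (the sensor name is made up):

```yaml
# A template sensor using documented Jinja2 filters:
# 'float' with a fallback default, then 'round'.
template:
  - sensor:
      - name: "Outdoor temperature (rounded)"
        unit_of_measurement: "°C"
        state: "{{ (states('sensor.outdoor_temperature') | float(0)) | round(1) }}"
```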


Why would it ever be remotely acceptable to have AI write answers for you to publish? If I wanted to have ChatGPT respond to my query, I’d ask it directly, not come to the forums.

Don’t get me wrong, ChatGPT has its own pros (I would love to have it integrated with a voice assistant as it would increase the fidelity of the responses, and allow for more dynamic queries, or even ongoing discourse), but that’s very much different from having the forum spammed with ChatGPT-generated answers, especially knowing that it “stopped learning” in mid-2021, meaning a lot of the information it would convey is out of date.


It wouldn’t ever. The risk is that people a) incorrectly assume it produces valid code; and b) want to appear knowledgeable and helpful — and therefore use it to “suggest” solutions.


Pardon me for raging; I just find it weird that some people find it acceptable, and therefore find it weird that it needs to be spelled out. It’s literally regurgitating unverified advice as a recommendation. And unlike pointing to another person, who could also be wrong, there’s no way to hold an AI responsible for the (bad) advice it gives.

Nonetheless, given the apparently widespread issue, I’m glad it’s specified. Some people are just plain dumb, or ignorant - and I’m unsure which is worse.


Welcome to the public internet.


It’s worse: it’s taking unverified advice, mangling it beyond recognition, then spitting it out as an answer.

Even if the original source was correct, it’s extremely unlikely the end result will be.


Thank you!


I feel like you’re assuming this is someone trying to be helpful and failing at it. Let’s look at another possibility: if I were a spammer, I’d be well aware that modern forum technology has all kinds of things in place to identify and remove spam posts from new accounts quickly and efficiently.

But if I hook up ChatGPT to a new account first, there’s the potential to quickly build some credibility without actually putting in any more effort. Sure, the posts won’t stand up to close inspection by an expert, but they get likes, replies, discussion, etc. And they probably slip past any existing automated filters because the content appears to be on topic. If, within 12-24 hours, no expert has spotted and flagged the posts, then this looks like a real account and can now be used more nefariously with a lower chance of getting caught.

And if it is caught along the way? Well, the spammer didn’t really lose any of their time, and it’s no different than if they had just gone with the original plan - trying to post their ads/whatever from a new account.

StackOverflow had to do the same thing. It makes even more sense there than on this forum, since credibility there is a real prize: users with high-reputation accounts can actually put that on their resume. So quickly building some reputation and then taking over manually makes a lot of sense as a strategy, even for a non-spammer. Well, in theory - they caught on to it real quick.

Anyway, the point is I expect this is going to be a common problem on forums. ChatGPT can quickly and easily generate posts that appear to fit right into a discussion, and only close inspection identifies them as nonsense. That sounds like something spammers and malicious actors would find immensely useful, unfortunately.


This topic is actually fascinating.
But what I want to know is: has a ChatGPT ever asked a technical question about HA and had another ChatGPT try to answer it with the wrong answer? And did the original ChatGPT then attempt to display emotion in response to the error?
Because that would be a worrying progression. :rofl:

Although I hadn’t spotted AI-generated posts here in the community, I find it really hard to +1 the new rule.

?? What on earth is happening here? What are we afraid of now?

Instead of welcoming, embracing and trying to improve upon an immensely promising new ‘tool’, we are forbidding it?
Where are we - in the Middle Ages, when the introduction of science caused heads to roll?

Sure, it currently has its faults, its idiosyncrasies and, yes, probably its errors. It’s disruptive, sure. Concepts like ‘great reset’ or ‘paradigm shift’ come to mind… not declaring things unacceptable (unless someone lies about the origin of some code, of course).

Those faults will soon be eradicated because of the sheer speed and investment being directed towards it. It will reshape our society in ways we can’t foresee just yet.

C’mon, HA system admins, don’t forbid anything. Open up and join the new world.


You mean other than people continuing to use ChatGPT to generate “answers” (that are utter garbage) they then pass off as their own?

It’s happened many times already, causing confusion when things haven’t worked.

But, you’d have worked that out if you’d read the original post :wink:


Was just about to post the same…lol.
Outrage by default strikes again.


As they say in the blog, I understand the mods’ frustration with faulty posts today, which could explode in numbers if no rules are introduced.