Want to help others? Leave your AI at the door

Hey Marius, is that really you or ChatGPT? :wink:


Nope, it's me. (Didn't even think of asking ChatGPT now…)

It's just me being very alert to generic rules about things being 'not allowed' in society.

My worry is that all the experts and admins will get frustrated and leave. It's hard enough trying to help all the users who need help; now they also have to identify when the first reply (usually quite long) isn't actually helping the user but is gibberish throwing them further off track.

And as the post says:

Fwiw, we don't actually have to exile ChatGPT. For example, here's how AI could possibly be made useful on here. When a user posts a new topic in one of the categories that results in an answer (Configuration, Installation, etc.), give them a button that's basically "what does the AI say?" Generate a response to their query from ChatGPT and show it to them. If the user says it solves their problem, it gets posted and marked as the answer; if not, it is ignored and they wait for a human to come along.

That could actually be handy. The user knows they are getting an answer from an AI; if it doesn't work, they know it's probably not their fault; and if the post does get published as the answer, it is clearly attributed to the AI and can easily be filtered out by other users if not useful.

But having users post responses that were actually generated by an AI rather than by them, unverified, to the forum as if they wrote them? How is that useful? The OP was looking for help and is now probably more frustrated, because they got something that sounds helpful and probably isn't, and they already thought they were doing something wrong. The mods/experts are frustrated because they have to work extra hard to identify the BS and tell the user "no, you're not wrong, the replier is, let me actually help you". It's just a giant timesuck for everyone except the person who generated the response and flung it up there.


So, how should we handle people who pass off ChatGPT responses as their own when answering questions? Not counting Facebook / Mastodon / Discord / Reddit, we've had a half dozen or so users in the last week passing off ChatGPT responses as answers in multiple topics/posts. I've personally deleted about 30 responses; I can't speak for other mods. Do you think we should let that continue?


How can you tell? I looked at ChatGPT a few weeks ago and was surprised that anyone could believe the response was anything more than a copy of Wikipedia.

When I answer a question I clearly indicate that I am not an expert. Sometimes I am wrong, but I am quickly corrected. (Sometimes by you).

Take a look here:

It’s not easy. It takes a veteran like me (or others) to find them. That’s how convincing it is. It’s also how dangerous it is.


I saw that after I posted the question.
When I post a solution or an idea for a question, it's almost always something I've already done myself. If it's a guess, I say so.

I am too illiterate in YAML to be mistaken for intelligence. Artificial or otherwise.

Whatever you do: do not forbid it.

Are we going to forbid (and ban…) all other stupid/wrong posts too?


Perhaps there is something to learn from these answers?
These answers are generated from reading the docs and then adding their own twist. Although some parts are not correct (such as the binary sensor update), perhaps the answers can be used to learn what the naming conventions of HA should be, so that service calls and such get named "correctly".
I'm thinking, for example, of states() and state_attr().
states() gets one state, so it should be singular, whereas state_attr() can get multiple states if they are in a list.

After all, the reason the AI arrived at these service calls, or uses the word platform so often, is probably that it's the most logical choice. (But I don't agree with the binary sensor update; perhaps there are more I don't agree with, I haven't read them all.)
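To make the states() vs. state_attr() distinction concrete, here is a sketch using Home Assistant's template functions (the entity names are hypothetical, chosen only for illustration):

```yaml
# states() returns the single state string of one entity:
is_hot: "{{ states('sensor.outdoor_temperature') | float(0) > 25 }}"
# state_attr() returns one named attribute, which may itself be a list:
latest_title: "{{ state_attr('sensor.pi4locator', 'entries')[0].title }}"
```

Note that states() always yields exactly one value per entity, while an attribute fetched with state_attr() can be any data type, including a list you then index into.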

While Home Assistant content is too niche for a general language model to do well at, we do have a large dataset of HA questions and responses to fine-tune a custom model! :thinking: Would be a fun project for a data scientist like me.


Yup, this morning I had somebody who had been given the following automation by ChatGPT:

- id: 0c336269f757478a87c641a1ba16fab3
  alias: Check for updated Raspberry Pi 4 stock
  - platform: time_pattern
    hours: "/2"
  - condition: template
    value_template: '{{ states.sensor.pi4locator.attributes.entries[0].title != states.sensor.pi4locator_latest.state
  - service: sensor.set
    entity_id: sensor.pi4locator_latest
      state: '{{ states.sensor.pi4locator.attributes.entries[0].title }}'
  - device_id: 72a3e4b521ce208c4787acd81925ab35
    domain: mobile_app
    type: notify
    message: Raspberry Pi 4 Stock Update

I've been fighting with this automation for hours now:
(Error: Message malformed: extra keys not allowed @ data['0'])

Now, at first glance that was clearly (to me) ChatGPT-generated gibberish. They didn't know that, though, and had spent hours trying to make it work.
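For context on why it's gibberish: a valid automation needs its steps nested under trigger:, condition:, and action: keys, and the sensor.set service the AI invented doesn't exist in Home Assistant at all, so the "remember the latest title" step can't be fixed by indentation alone. A rough sketch of the valid structure, keeping the entity names from the broken example (the notify service name below is a hypothetical placeholder):

```yaml
- id: check_pi4_stock
  alias: Check for updated Raspberry Pi 4 stock
  trigger:
    - platform: time_pattern
      hours: "/2"
  condition:
    - condition: template
      value_template: >
        {{ state_attr('sensor.pi4locator', 'entries')[0].title
           != states('sensor.pi4locator_latest') }}
  action:
    # There is no built-in sensor.set service; storing the latest title
    # would need a different mechanism, e.g. an input_text helper.
    - service: notify.mobile_app_my_phone
      data:
        message: Raspberry Pi 4 Stock Update
```

Confidently inventing a plausible-sounding service like sensor.set is exactly the kind of error that sends a newcomer down a rabbit hole for hours.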


No, people are allowed to be incorrect. People also usually signal this by saying "I think…", "I'm pretty sure…", or "did you try…".

With ChatGPT the responses are “Do this…”, “First start by…”, “Doing this will…”.

People are not allowed to copy/paste answers from ChatGPT and pass them off as their own knowledge. The responses from ChatGPT are typically confidently incorrect, and that's the problem.


It's the level of detail. ChatGPT posts aren't being banned for being stupid/wrong; it's because of the level of stupid/wrong detail they provide. Most wildly incorrect replies do not intentionally mislead users with entire blocks of incorrect YAML and detailed step-by-step instructions that simply do not work. The "totally wrong" posts by humans tend to be either something that definitely works but is completely off topic, or a vague couple of sentences of instruction that are incorrect.

That being said, I would personally flag any post consisting of step-by-step instructions where the steps were totally wrong and non-functional. Anyone who would do that is not a helpful or productive member of the community, as they appear to be deliberately frustrating already frustrated users. That is a pretty malicious thing to do, regardless of the source of the content.


Hmm, ok, I get that…
Personally, I had been experimenting and posted a few experimental releases of custom-ui (see my bio), explicitly mentioning GPT.

I would hope someone would point out the obviously wrong code it suggested and help me with it.


I think we’re going to need reputation systems very soon where we can identify people who reliably provide unique insight, and those who are “unknowns”.

ChatGPT is going to be far too attractive for new members to try and boost their reputation, and it’s only going to get harder for new/novice users to identify as time goes on. It’s unreasonable to expect @petro to review every answer provided for dodgy platform keys.


We kind of already have that in our "solution" system and our liked posts.

It's not displayed prominently anywhere, though, so you need to dig into a user's profile to find it.

Maybe that rating could be shown somewhere in the reply so other users can easily see it.


It’s always a shame to see cool new technology being used for crappy rather than good purposes.

ChatGPT is pretty cool. Shame bad actors have to spoil it for others.


It is perfectly fine if you try to help and turn out to be wrong. Even now I might be wrong.
When you try to help, you at least contribute your best knowledge at that point; if it turns out to be incorrect, you learn and improve. If the AI were giving the answer by itself and getting feedback, it would be somewhat acceptable, but that is not the case.
I have seen many answers where it takes only half a second to see they are auto-generated. They are not even slightly wrong, they are totally incorrect. I can hardly believe a human would give such an answer.


I suggested creating a channel / topic named "Random Answers" so people will be able to use whatever tool they want to answer :rofl:


I guess the opening post is way too heavy on the AI side of things and doesn't point to the real issue at hand.

Don't use AI as a panacea for everything, and certainly don't use it blindly.

Also, don't post auto-generated code and claim you wrote it.

Other than that, we should embrace new technology, especially one this promising and powerful.

This whole 'forbid/ban' stance is completely over the top in an ecosystem where we applaud every autogenerated code checker in VSCode and jump in awe when the dev tools autogenerate a mere mdi icon.
(There are even two ChatGPT extensions for VSCode, btw; there's no way of stopping that.)

Let’s just stay open minded and curious.
Bans are always bad. Period. No matter what.
