Want to help others? Leave your AI at the door

Probably best to update the Community Guidelines with new requirements
Guidelines - Home Assistant Community (home-assistant.io)

3 Likes

Sounds fair to me!

2 Likes

AS A NORMAL HUMAN, I ALSO CONCUR THIS SUGGESTION. beep

5 Likes

I too am a human - here I’ll prove it. aaaahh - I just pricked myself with a pin and it hurt.
A couple of bobs here - poor mods for having to continuously trawl for idiot answers. I agree with the proposed ban.
A couple of posts back someone asked: if the AI has read all the docs, why does it get the answers wrong? I’m not sure, but what if we could train the bugger to read all the forum posts? That’s where the real learning happens for me, as the responses are in my language, not that of a programmer.
I have just had a look at AI tonight and thought: interesting, but as mentioned previously, it’s Wikipedia from a blender.
Interesting times… ouch, that one hurt.

I agree with the general guideline that we shouldn’t use those tools as a sole alternative, and we certainly should not lie about the provenance of auto-generated code. It can write a line or two, but it currently is not perfect, and new users should not copy and paste it blindly…

Other than that, why not just look forward to it? Use it, experiment with it. Discover errors, but also discover very good ideas. Soon we will be offered completely new paradigms we cannot yet fathom.

I would expect much of the HA documentation could use some auto-generated improvements/corrections.

For the sake of the learning experience, why not open up a new section in the community dedicated to experimenting with it? Showcase some great ideas put forward by ChatGPT/other AI systems, and certainly also some common mistakes. (And I don’t mean the silly and endlessly repeated prime number / “will programmers soon be out of jobs” jokes.)

Don’t fight it; join forces, make it better.

1 Like

Did you just sign up here to say that?

4 Likes

I guess this is not the case/topic here
(And yes, I agree with you in regards to improving various HA documentation pages, maybe by some “experienced” user who is used to writing documentation. edit: for end-users)

I’m pretty sure even you would get pissed if you encountered an interesting (or, for you, relevant) topic and found a scroll-able answer (or several, a whole “conversation”), got more and more confused/hesitant after reading three to five replies, but kept reading until you couldn’t hold back and joined the conversation, maybe to correct some faults in the arguments/solutions delivered in the topic. And after a while, let’s say 15 minutes or half an hour, you realize that the “answers” you get back are copy-pasted bullock from a copy-paster. Sorry for the example, but in other words, you’ve been fighting bullock/false solutions copied and pasted by someone who really doesn’t know, or by someone who finds it funny to piss people off with bullock.

1 Like

This post has driven traffic to ChatGPT.

I fully agree that blindly posting untested code shouldn’t be done, AI-generated or not. The post is very confusing though. The line “Using an AI system to generate an answer while not providing attribution to AI will result in a ban” is very different from “It’s no longer allowed to use ChatGPT or other AI systems to help others.”

I would propose that the rule be something like this: “If you use an AI system to generate code in order to answer a question in the HA community, the following conditions must be met:
1. Before posting, you should verify the code works in a situation similar to that pertaining to the question.
2. You should mention that the code was AI-generated and include the AI tool and the phrasing of the problem given to the tool.
3. Non-functional or non-verified AI-generated code should be clearly marked as such and is only allowed as material to illustrate discussions about AI-generated code.”
I think that should cover the problems and also allow for exploration of which AI tools to use and how to use them to automate code generation.
edit: this isn’t particularly meant as a reply to boheme61’s message, but as a comment on the blog post, sorry. I don’t know how to edit that.

You still don’t understand the ban. You are not allowed to use AI-generated answers and pass them off as your own.

4 Likes

Although this part is taken out of its context, I’m sure other people will do that again at times.
In this case I deliberately took this part out of its context, which was “If AI generated…”, but having the above snippet as a rule will just create confusion, and people will be mad at each other for posting an answer without testing it (human-generated).

Many times you can’t test code and you just have to post a guess; this could be a scraper, or an integration where you need an account, and such.
It’s of course always better to test your code before posting, but having that part as a rule (even in a context with AI) is not something I agree with.

I would rather say no to AI, and I do say no to AI.

2 Likes

Sure, I do understand that, and agree, as repeatedly posted above.

I am only advocating an open, curious mind, not closing it down.

Stating that ChatGPT only produces nonsense, is useless, etc., is plainly harmful to progress and unworthy of HA. Shortsighted.

opening post:

“don’t post it as your own” would have been sufficient… maybe add: be careful.

1 Like

It’s not really shortsighted if it literally does not provide a single fully correct answer. Personally, I’ve yet to see one with Home Assistant. I’m sure someone can get it to give a correct answer, but those types of people aren’t going to need the AI to generate automations.

2 Likes

Yeah, have to agree there… hehe. It took me some time and effort to formulate the correct questions to get the right answers…

Yet I was glad I made the effort, because I had fun doing so, and along the way I learned a few things.

Btw, this

clearly states we are not even allowed to post some auto-generated code, attribute it, and then provide some human context…

This is all in regards to answering questions. Asking questions using ChatGPT is still allowed, but don’t be surprised when people question the ridiculousness of the automation.

5 Likes

But it’s posted in the “How to ask” guide.
I understand it needs to be somewhere but that is perhaps not the correct place.

How to help us help you - or How to ask a good question - Configuration - Home Assistant Community (home-assistant.io)

So, that’s just the title of that post. It’s forum guidelines. The bullet also reads:

Don’t use ChatGPT, or similar tools, to generate answers that you provide.

At this point it seems we are getting into semantics, when the rule is pretty clear: Don’t use ChatGPT to answer questions.

8 Likes

I do prefer to assume the best (and prepare for the worst). Always an optimist.

True, I had not considered this possibility at first.

Yeah, the “reputation” angle on SO was much more apparent than it would be here.

Also, I have to ask… Was ChatGPT used to generate this announcement? Because (hopefully it’s not against the new rules, since I clearly state it’s ChatGPT generated), this is eerie:

And this isn’t even a nuanced query - with a few back-and-forths, I could easily get it to generate the initial announcement more or less verbatim.


And to anyone asking why ChatGPT is banned: this is why. It generates a HIGHLY self-confident, nuanced, very well detailed and seemingly human-written reply that is indistinguishable from human advice for the average user. Imagine the forum flooded with incorrect but equally detailed answers - it would become uncontrollable incredibly quickly.

As to how to tell if it’s ChatGPT generated… Look here.

Unlike image generation, where repeating patterns are part of the flair, the text written by ChatGPT (unless the query is suuuuper fine-tuned) will end up being repetitive and self-iterative. It might not be easy to spot a SINGLE reply, but when it comes to a thread of messages, it will be quite obvious you’re talking with a machine.

3 Likes

I explained the situation to a family member using a metaphor:

Let’s say that you play a game where you answer questions using a very specific combination of three languages (for example, French, German, and English). The game has strict rules for which words and grammar are used from each language. Any deviation from these rules produces an invalid combination of the three languages.

It’s not easy, so to help you play the game you use AI. Unfortunately, it proves to be unreliable because it doesn’t consistently follow the strict rules. It combines the three languages into something that’s superficially correct but ultimately wrong. However, it presents its answer very authoritatively, which, for players unfamiliar with the rules, fools them into believing it’s playing the game correctly.


And if we set aside the metaphor: ChatGPT often makes a mess of the combination of YAML, Jinja2, and Python that is used in automations, scripts, Template Sensors, etc.
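
To make that concrete, here is a minimal Template Sensor sketch showing how the layers interleave (the entity IDs and the “feels like” formula are invented purely for illustration): the outer structure is YAML, everything inside {% %} and {{ }} is Jinja2, and the states() helper and filters are provided by Home Assistant’s Python template engine.

    # YAML: the template integration with one sensor definition
    template:
      - sensor:
          - name: "Example feels-like temperature"
            unit_of_measurement: "°C"
            # Jinja2 inside the multi-line state template
            # (hypothetical entity IDs, purely illustrative formula)
            state: >
              {% set t = states('sensor.outdoor_temperature') | float(0) %}
              {% set w = states('sensor.wind_speed') | float(0) %}
              {{ (t - 0.1 * w) | round(1) }}

Mix up the indentation, the quoting, or the {% %} / {{ }} delimiters between those layers and the result either fails to load or quietly does something different from what was asked, which is exactly the kind of mistake that shows up in the generated snippets.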

6 Likes

If someone would like a ChatGPT answer, they won’t be coming to the forum but will use GPT themselves.
Other members should not be filling the thread with a GPT answer, as that is then just beside the point.
If the person is reasonably witty, they would try to validate the GPT answer on the forum, as they would not have the knowledge to validate the answer themselves (which is a fact anyway, since the user didn’t figure it out from the start).

So this approach keeps the mess off the forum and still allows an open mind toward GPT helping to solve a question. (It would be nice if there could be feedback to the GPT model for the validated answer :slight_smile: )

1 Like