Want to help others? Leave your AI at the door

Although this part is taken out of context, I’m sure other people will do the same at times.
In this case I deliberately took it out of its context, which was “If AI generated…”, but having the snippet above as a rule will just create confusion, and people will be mad at each other for posting untested (human-generated) answers.

Many times you can’t test code and you just have to post a best guess; for example, a scraper, or an integration where you need an account, and such.
It’s of course always better to test your code before posting, but having that part as a rule (even in a context with AI) is not something I agree with.

I would rather say no to AI, and I do say no to AI.

2 Likes

Sure, I do understand that, and agree, as repeatedly posted above.

I am only advocating an open, curious mind, not closing it down.

Stating that ChatGPT only produces nonsense, is useless, etc., is plainly harmful to progress and unworthy of HA. Shortsighted.

opening post:

“Don’t post it as your own” would have been sufficient… maybe add: be careful.

1 Like

It’s not really shortsighted if it literally does not provide a single fully correct answer. Personally, I’ve yet to see one for Home Assistant. I’m sure someone can get it to give a correct answer, but those people aren’t going to need AI to generate automations.

2 Likes

Yeah, have to agree there… hehe. It took me some time and effort to formulate the right questions to get the right answers…

Yet I was glad I made the effort, because I had fun doing so, and along the way I learned a few things.

Btw, this

clearly states we are not even allowed to post some auto-generated code, attribute it, and then provide some human context…

This is all in regards to answering questions. Asking questions using ChatGPT is still allowed, but don’t be surprised when people question the ridiculousness of the automation.

5 Likes

But it’s posted in the “How to ask” guide.
I understand it needs to be somewhere, but that is perhaps not the correct place.

How to help us help you - or How to ask a good question - Configuration - Home Assistant Community (home-assistant.io)

So, that’s just the title of that post. It’s the forum guidelines. The bullet also reads:

Don’t use ChatGPT, or similar tools, to generate answers that you provide.

At this point it seems we are getting into semantics, when the rule is pretty clear: Don’t use ChatGPT to answer questions.

8 Likes

I do prefer to assume the best (and prepare for the worst). Always an optimist.

True, I had not considered this possibility at first.

Yeah, the “reputation” angle on SO was much more apparent than it would be here.

Also, I have to ask… Was ChatGPT used to generate this announcement? Because (hopefully it’s not against the new rules, since I clearly state it’s ChatGPT generated), this is eerie:

And this isn’t even a nuanced query - with a few back-and-forths, I could easily get it to generate the initial announcement more or less verbatim.


And to anyone asking why ChatGPT is banned: this is why. It generates a HIGHLY self-confident, nuanced, very well-detailed, and seemingly human-written reply that is indistinguishable from human advice for the average user. Imagine the forum being flooded with incorrect but similarly detailed answers - it would become uncontrollable incredibly quickly.

As to how to tell if it’s ChatGPT generated… Look here.

Unlike image generation, where repeating patterns are part of the flair, text written by ChatGPT (unless the query is suuuuper fine-tuned) will end up being repetitive and self-iterative. It might not be easy to spot a SINGLE reply, but across a thread of messages it becomes quite obvious you’re talking to a machine.

3 Likes

I explained the situation to a family member using a metaphor:

Let’s say that you play a game where you answer questions using a very specific combination of three languages (for example, French, German, and English). The game has strict rules about which words and grammar are used from each language. Any deviation from these rules produces an invalid combination of the three languages.

It’s not easy, so to help you play the game you use AI. Unfortunately, it proves to be unreliable because it doesn’t consistently follow the strict rules. It combines the three languages into something that’s superficially correct but ultimately wrong. However, it presents its answer very authoritatively, which fools players unfamiliar with the rules into believing it’s playing the game correctly.


And if we set aside the metaphor: ChatGPT often makes a mess of the combination of YAML, Jinja2, and Python that is used in automations, scripts, Template Sensors, etc.
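
For anyone unfamiliar with that mix, here is a minimal sketch of a modern Template Sensor (the entity ID is hypothetical), where YAML provides the structure and Jinja2 the logic. A typical generated answer scrambles exactly this boundary, e.g. putting template syntax into keys that expect plain YAML, or mixing the deprecated `platform: template` format with new-style keys:

```yaml
# configuration.yaml - minimal Template Sensor sketch.
# "sensor.outside_temperature" is a hypothetical entity ID.
template:
  - sensor:
      - name: "Outside Temperature (rounded)"
        unit_of_measurement: "°C"
        # Jinja2 lives inside the quoted "state" template;
        # everything around it is plain YAML.
        state: "{{ states('sensor.outside_temperature') | float(0) | round(1) }}"
```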

6 Likes

If someone wants a ChatGPT answer, they won’t come to the forum; they will use GPT themselves.
Other members should not be filling in answers with GPT output, as that is then just beside the point.
If the person is clever, they might try to validate the GPT answer on the forum, since they don’t have the knowledge to validate it themselves (which is a given anyway, as they couldn’t figure it out in the first place).

So this approach keeps the mess off the forum and still allows an open mind about using GPT to help solve a question. (It would be nice if the validated answer could be fed back to the GPT model :slight_smile: )

1 Like

The important part is that ChatGPT is not trying to answer correctly. That is not its purpose. The “chat” interface misleads us into thinking that way, but GPT-3 is at its core nothing more than an immensely powerful auto-completion system. It essentially does what you get when you repeatedly tap the first suggestion on your smartphone keyboard while writing a text message. It just does it way, way better.

The main consequence is that its optimization goal is not to produce truth, but the most realistic-looking text possible. In many contexts, and thanks to its humongous training corpus (all of Wikipedia is AFAIK less than 5% of it), the most probable continuation of the prompt does contain truths, but it will mix into these true facts some hallucinated ones that seem true because they are told with the same tone of certainty.

(A good example I saw: when asked about the life of Descartes, it gets most of it right, but in the midst of it claims that Descartes went to America, which is complete nonsense.)

This risk is more prominent in maths or code, where the most commonly found and most probable follow-up can be completely wrong just because a tiny thing changed in the prompt (and, due to the nature of probabilities, that change can be missed by the algorithm). An example is the classic “A phone with its cover costs $110, and the phone costs $100 more than the cover. How much is the cover?”, which ChatGPT will get wrong unless you specify in the prompt that you want an answer from a maths teacher.
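
For reference, the correct working (the intuitive but wrong answer is a $10 cover): with phone price $p$ and cover price $c$,

$$
p + c = 110, \qquad p = c + 100 \;\Rightarrow\; (c + 100) + c = 110 \;\Rightarrow\; 2c = 10 \;\Rightarrow\; c = 5,
$$

so the cover costs $5 and the phone $105.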

A second possible factor is that Home Assistant content is probably too sparse in the GPT corpus, which means that GPT will invent probable continuations borrowed from similar-looking languages and is thus “polluted” by their APIs.

6 Likes

That is fine, but wouldn’t you rather deal with them and educate users to ask where code comes from, rather than ban it outright? Like any tool, AI has a place and a time. I think there is a more nuanced answer.

Someone passing off AI-generated code as their own is committing fraud and needs to be addressed, the same as anyone using someone else’s code while pretending it’s theirs / not crediting the originator.

1 Like

I already have a full-time job.

11 Likes

As I read this, the morning news radio programme is discussing the banning of ChatGPT in schools/universities and concerns over cheating (which is rife anyway). Quite ironic, I thought.

Nickrout,
I once taught in a school library - admittedly at the beginning cusp of the web as we know it (remember AltaVista?) - and after a few years one of the issues became the prevalence of students using web scrapes and Microsoft Word essay tools to compile their essays. It was pretty good and allowed slabs of other people’s (unattributed) work to be included.
BUT - then the issue was recognised and a tool was developed so that essays had to be submitted to an anti-plagiarism checker (Turnitin). This caught up to 99.9% of those seeking to ‘game’ the system.
I guess what I’m trying to say is that it will be the same with this AI issue: some person way smarter than me will recognise the problem and come up with a clever solution. It may be that any post to the forum goes through a similar checking process (this would also stop wasting moderators’ time) to identify AI-derived answers.
Interesting times - damn, I just pricked myself again with a pin.

Fair enough, but if you can’t test it, I think that should be mentioned. To me, the bottom line is that you post an answer that to the best of your knowledge is helpful, that you make a reasonable (proportional) effort to make sure it is, and that you indicate where you are not sure. I don’t see why there should be a difference between AI-generated code and human-generated code in this respect. I don’t say no to AI; I’d love it to be at a point where I can ask a computer to write automations for me instead of dealing with the slings and arrows of YAML syntax. There are many things I do say no to that are made easier with AI, like trolling forums with elaborate bullshit answers. I do object to that, but not because of the AI.

… but are you sure you aren’t in a simulation?


1 Like

Yes I do, and it was awful. The internet (and particularly trying to make Linux work in the early days) got so much better with Google, before it became a tracking/advertising company.

I foresee a loop with AI being trained on AI answers, and pretty soon the bullshit answers will become canon (albeit wrong).

1 Like

Now this seems more like it.

So we are joining it :wink:
First steps, and that is very cool.