Why would it ever be remotely acceptable to have AI write answers for you to publish? If I wanted to have ChatGPT respond to my query, I’d ask it directly, not come to the forums.
Don’t get me wrong, ChatGPT has its pros (I would love to have it integrated with a voice assistant, as it would increase the fidelity of the responses and allow for more dynamic queries, or even ongoing discourse), but that’s very different from having the forum spammed with ChatGPT-generated answers, especially knowing that it “stopped learning” in mid-2021, meaning a lot of the information it conveys is out of date.
It wouldn’t ever. The risk is that people a) incorrectly assume it produces valid code; and b) want to appear knowledgeable and helpful — and therefore use it to “suggest” solutions.
Pardon me for raging, I just find it weird that some people find it acceptable, and therefore find it weird that it needs to be spelled out. It’s literally regurgitating unverified advice as a recommendation. And unlike pointing to another person who could be wrong, there’s no way to hold an AI responsible for the (bad) advice it gives.
Nonetheless, given the apparent widespread issue, I’m glad it’s specified. Some people are just plain dumb, or ignorant - and I’m unsure which is worse.
I feel like you’re assuming this is someone trying to be helpful and failing at it. Let’s look at another possibility: if I were a spammer, I’d be well aware that modern forum software has all kinds of mechanisms in place to identify and remove spam posts from new accounts quickly and efficiently.
But if I hook up ChatGPT to a new account first, there’s the potential to quickly build some credibility without actually putting in any more effort. Sure, the posts won’t stand up to close inspection by an expert, but they get likes, replies, discussion, etc. And they probably slip past any existing automated filters because the content appears to be on topic. In like 12-24 hours, if no expert has spotted and flagged the posts, this looks like a real account and can now likely be used more nefariously with a lower chance of getting caught.
And if it is caught along the way? Well, the spammer didn’t really lose any of their time, and it’s no different than if they had just gone with the original plan - try to post their ads/whatever from a new account.
StackOverflow had to do the same thing. It makes even more sense there than on this forum, since credibility there is a real prize. Users with high-reputation accounts can actually put that on their resume. So quickly building some reputation and then taking the account over manually from there makes a lot of sense as a strategy, even for a non-spammer. Well, in theory - they caught on to it real quick.
Anyway, the point is I would expect this to become a common problem on forums. ChatGPT can quickly and easily generate posts that appear to fit right into a discussion, and only close inspection reveals they are nonsense. Sounds like something that spammers and malicious actors would find immensely useful, unfortunately.
This topic is actually fascinating.
But what I want to know is whether ChatGPT has ever asked a technical question about HA and had another ChatGPT try to answer it with the wrong answer? And did the original ChatGPT then attempt to display emotion in response to the error?
Because that would be a worrying progression.
Although I haven’t spotted AI-generated posts here in the community, I find it really hard to +1 the new rule.
?? What on earth is happening here? What are we afraid of now?
Instead of welcoming, embracing and trying to improve upon an immensely promising new ‘tool’, we are forbidding it?
Where are we, in the Middle Ages, when the introduction of Science caused heads to roll?
Sure, it currently has its flaws, its idiosyncrasies and, yes, probably its faults. It’s disruptive, sure. Concepts like ‘Great reset’ or ‘paradigm shift’ come to mind… not declaring things unacceptable (unless someone lies about the origin of their code, of course).
Those faults will soon be eradicated because of the sheer speed and investment being pointed towards it. It will reshape our society in a way we can’t foresee just yet.
C’mon HA system admins, don’t forbid anything. Open up, and join the new world.
As they say in the blog, I understand the mods’ frustration with faulty posts today that could explode in number if some rules aren’t introduced.
The risk is that all the experts and admins will get frustrated and leave. It’s hard enough trying to help all the users who need help; now they also have to identify when the first (usually quite long) reply isn’t actually helping the user but is really gibberish throwing them further off track.
Fwiw, we don’t actually have to exile ChatGPT. For example, here’s how AI could possibly be made useful here. When a user posts a new topic in one of the categories that results in an answer (Configuration, Installation, etc.), give them a button that’s basically “what does the AI say?” Generate a response to their query from ChatGPT and show it to them. If the user says it solves their problem, it gets posted and marked as the answer; if not, it is ignored and they wait for a human to come along.
That could actually be handy. The user knows they are getting an answer from an AI; if it doesn’t work, they know it’s probably not their fault; and if the response is actually posted as an answer, it is clearly attributed to the AI. Unhelpful responses are automatically filtered out by the user and never posted.
But having other users generate responses that are actually from an AI rather than from them, and posting them unverified to the forum as if they wrote them? How is that useful? The OP was looking for help and is now probably more frustrated, because they got something that sounds helpful and probably isn’t, and they already thought they were doing something wrong. The mods/experts are frustrated because they have to work extra hard to identify the BS and tell the user “no, you’re not wrong, the replier is, let me actually help you”. It’s just a giant timesuck for everyone except the person who generated the response and flung it up there.
So, how should we handle people who pass off ChatGPT responses as their own when answering questions? Not including Facebook / Mastodon / Discord / Reddit, we’ve had a half dozen or so users in the last week passing off ChatGPT responses as answers in multiple topics/posts. I’ve personally deleted about 30 responses, and I can’t speak for other mods. Do you think we should let that continue?
I saw that after I posted the question.
When I post a solution or an idea in response to a question, it’s almost always something I’ve already done myself. If it’s a guess, I say so.
I am too illiterate in YAML to be mistaken for intelligence. Artificial or otherwise.