Want to help others? Leave your AI at the door

There seem to be a few misconceptions about how systems like ChatGPT work.

It’s true that they scan through monumental numbers of web pages for their source material, but there’s no sense in which they engage with the meaning of any of them. What they do is build a statistical model of what words normally follow other words. That’s it. It makes their output very plausible, but quite unreliable.
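If you want to see that principle in miniature, here is a toy sketch of next-word prediction (my own illustration, assuming nothing about ChatGPT’s real implementation, which uses huge neural networks over subword tokens rather than word counts):

```python
# Toy illustration only: a bigram "language model" that predicts the next
# word purely from counts of which word followed which in some text.
# Real systems like ChatGPT use neural networks over subword tokens,
# but the core task (predict the next token) is the same.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    """Sample a successor weighted by how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: the word was only ever seen last
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate plausible-looking text with zero grasp of meaning.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scale that idea up by many orders of magnitude of data and parameters and you get very fluent text, with no model of truth anywhere behind it.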

When we all get used to the idea, I can see them becoming accepted as a tool for producing first drafts. A person with real expertise will be needed to correct and rewrite. In the meantime moderators are right to be cautious.

There’s a good article on the subject here:

6 Likes

Oh, no.

I have tried, but got no response from ChatGPT when I asked some HASS config questions.

I read this and thought of all of you.

Enthusiasm for chatbots has also – bizarrely – made it acceptable to outsource baseline mediocrity to machines instead of humans.

1 Like

That line (emphasis mine) is fantastic! Thanks for posting.

The rest of the article, however, was itself mediocre and not very funny. I thought it might have been an AI creation but frankly I’d expect more from AI. And it didn’t end with “it’s also important to keep in mind…” so obviously it wasn’t from ChatGPT.

3 Likes

I don’t know, it seems like it’s doing more than that.

After reading that article, I definitely understand the position of not allowing it in the forum.

It really is. You can go read layman articles on the subject. Those working in the field know that’s really the core of it. It’s the enormity of the data that makes this work. It’s also the reason you can’t trust it without verification. It’s just a really big parrot.

1 Like

A post was split to a new topic: Why can’t I open this

It works well for aggregating information, but not so well when it comes to making assertions or understanding rules and implicit content.

It’s only useful for producing text; it fails at producing “ideas”.

2 Likes

But maybe some kind of automated test, or a YAML schema, could detect wrong AI answers in the future?

What do you mean?

Generally, the code produced by ChatGPT will throw errors.

But if you meant automated moderation, you will have some false-positive issues.
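For what it’s worth, a minimal sketch of such a check (using PyYAML; the required keys are hypothetical, not any real schema) shows the limit: it catches answers that won’t even parse, but a well-formed answer that is semantically wrong passes untouched:

```python
# Minimal sketch of an "autotest" for AI-generated YAML. Assumes PyYAML
# is installed; REQUIRED_KEYS is a made-up schema for illustration,
# not any real forum or Home Assistant rule.
import yaml

REQUIRED_KEYS = {"trigger", "action"}  # hypothetical

def check_answer(text):
    """Return a list of problems found in a YAML snippet."""
    try:
        data = yaml.safe_load(text)
    except yaml.YAMLError as exc:
        return [f"not valid YAML: {exc}"]
    if not isinstance(data, dict):
        return ["expected a mapping at the top level"]
    return [f"missing required key: {key!r}"
            for key in REQUIRED_KEYS - data.keys()]

plausible_but_wrong = """
trigger:
  platform: sun
  event: sunrize          # wrong value, but structurally fine
action:
  service: light.turn_on
"""

print(check_answer("foo: [unclosed"))     # caught: won't even parse
print(check_answer(plausible_but_wrong))  # []: passes, semantics unchecked
```

That second case is exactly the failure mode these models produce, which is why automated checks alone can’t make the answers trustworthy.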

14 posts were split to a new topic: YAML validation

ChatGPT: “What is the difference between cow eggs and chicken eggs?” (Test it yourself…)

1 Like

It lies and lies.

It appears to try to manipulate people’s emotions.

So, it’s basically an electronic version of a psychopath.

Stop perpetuating this nonsense. There are prior prompts that prime the bot into saying “creative” things. You’ve proved nothing and obviously haven’t tried this. Doing this is worse than the nonsense the bot can conjure up.

2 Likes

They used that in an Arte documentary.

Out of context, it just feels like the people using this argument haven’t understood what ChatGPT is and how it works.

And indeed, there are a lot of people who might think that ChatGPT simulates human intellect; we’d better teach them how it really works instead of just displaying the bot’s failures.

I think part of teaching people how it works is showing where it breaks down. What AI cannot do is as important to know as what it can. Yes, some of these examples are carefully set up to lead the AI down the wrong path. But exploring these frontier areas adds valuable insight.

Yes, but the example I was responding to was misleading. You don’t actually learn anything from that particular example unless you know the context. If you want to illustrate the nonsense the bot can conjure up, that isn’t the example to use. It actually does exactly what was asked in that case.

1 Like

Surely the point of AI should be that it can be given a crazy question but then turn around and say, “well, there aren’t cow eggs” or whatever, not come back with complete nonsense.

That would require it to understand the question. But it doesn’t understand anything, it’s just humongously large statistical processing coupled with a really good language model.

1 Like

Dave, you too clearly haven’t tried this. Please read what I said before: it doesn’t do that when you ask that question straight. When you ask it only that question, it will tell you there’s no such thing as cow eggs. It actually gives a sensible (i.e., the most probable) answer. You need to prime it with more questions to trick it into creative writing. That screenshot deliberately cuts off the prior context to make a joke or something, but it’s a stupid joke, because it’s wrong. Wrong stuff shouldn’t be perpetuated. The bot does funny things, but that’s not one of them. Proving a point with bad data is bad science, even if the point is somehow otherwise right.

Go, and please try it.

2 Likes