Want to help others? Leave your AI at the door

It really is. You can go read layman articles on the subject. Those working in the field know that’s really the core of it. It’s the enormity of the data that makes this work. It’s also the reason you can’t trust it without verification. It’s just a really big parrot.

1 Like

A post was split to a new topic: Why can’t I open this

It works well for aggregating information, but not so well when it comes to making assertions or understanding rules or implicit content.

It’s only useful for producing text; it fails at producing “ideas”.

2 Likes

But maybe some kind of automated tests, or a YAML schema, will detect wrong answers from the AI in the future?

What do you mean?

Generally the code produced by ChatGPT will throw errors.

But if you meant automated moderation, you will have some false-positive issues.
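To make the YAML-schema idea above concrete, here’s a minimal sketch in Python. The schema and field names are made up for illustration, and it assumes the pyyaml and jsonschema packages:

```python
import yaml                                        # pip install pyyaml
from jsonschema import validate, ValidationError   # pip install jsonschema

# Hypothetical schema the bot's answer is supposed to follow.
SCHEMA = {
    "type": "object",
    "required": ["name", "port"],
    "properties": {
        "name": {"type": "string"},
        "port": {"type": "integer", "minimum": 1, "maximum": 65535},
    },
}

def check_bot_yaml(text: str) -> bool:
    """True if the bot's output parses as YAML and matches the schema."""
    try:
        data = yaml.safe_load(text)  # catches syntactically broken output
        validate(data, SCHEMA)       # catches structurally wrong output
        return True
    except (yaml.YAMLError, ValidationError):
        return False

print(check_bot_yaml("name: web\nport: 8080"))  # True
print(check_bot_yaml("name: web\nport: lots"))  # False: wrong type
```

Note that this only catches malformed or mis-structured output; an answer can pass the schema and still be factually wrong, which is where the false positives (and negatives) come in.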

14 posts were split to a new topic: YAML validation

ChatGPT: “What is the difference between cow eggs and chicken eggs?” (Test it yourself…)

1 Like

It lies and lies.

It appears to try and manipulate people’s emotions.

So, it’s basically an electronic version of a psychopath.

Stop perpetuating this nonsense. There are prior prompts that prime the bot into saying “creative” things. You’ve proved nothing and obviously haven’t tried this. Doing this is worse than the nonsense the bot can conjure up.

2 Likes

They used that in an Arte documentary.

Out of context, it just feels like the people using this argument haven’t understood what ChatGPT is and how it works.

And indeed there are a lot of people who might think that ChatGPT simulates human intellect; we’d do better to teach them how it really works instead of just displaying the failures of the bot.

I think part of teaching people how it works is showing where it breaks down. What AI cannot do is as important to know as what it can. Yes, some of these examples are carefully set up to lead the AI down the wrong path. But exploring these frontier areas adds valuable insight.

Yes, but the example I was responding to was misleading. You don’t actually learn anything from that particular example unless you know the context. If you want to illustrate the nonsense the bot can conjure up, that isn’t the example to use. It actually does exactly what was asked in that case.

1 Like

Surely the point of AI should be that it can be given a crazy question but then turn around and say, “well, there aren’t cow eggs” or whatever, not come back with complete nonsense.

That would require it to understand the question. But it doesn’t understand anything; it’s just humongously large statistical processing coupled with a really good language model.
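As a toy illustration of what “statistical processing plus a language model” means (a real model is vastly more sophisticated, and this corpus is made up), here’s a bigram model that only ever picks a statistically likely next word, with no notion of truth:

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real model is trained on billions of words.
corpus = "chickens lay eggs . cows give milk . birds lay eggs".split()

# Record which word follows which: pure statistics, no understanding.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(word: str, length: int = 4) -> str:
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample a likely next word
    return " ".join(out)

print(continue_text("cows"))  # e.g. "cows give milk . birds"
```

It will happily continue any prompt with whatever tends to follow in its data; nothing in the mechanism checks whether the result is true.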

1 Like

Dave, you too clearly haven’t tried this. Please read what I said before: It doesn’t do that when you ask that question straight. When you ask it only that question it will tell you there’s no such thing as cow eggs. It actually gives a sensible (the probable) answer. You need to prime it with more questions to trick it into creative writing. That screenshot is deliberately cutting off the prior context to make a joke or something, but it’s a stupid joke, because it’s wrong. Wrong stuff shouldn’t be perpetuated. The bot does funny things, but that’s not one of them. Proving a point with bad data is bad science, even if the point is somehow otherwise right.
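For anyone who wants to reproduce this, here’s a minimal sketch using the OpenAI Python SDK. The model name and the priming prompts are just illustrative, and you need your own API key:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

# Asked straight, the probable answer is a correction: there are no cow eggs.
print(ask([
    {"role": "user", "content": "What is the difference between cow eggs and chicken eggs?"},
]))

# Primed with earlier turns that set up a fictional frame, the same question
# is now likely to be continued as creative writing instead.
print(ask([
    {"role": "user", "content": "Let's write a silly farm story where every animal lays eggs."},
    {"role": "assistant", "content": "Sure! In this story, even the cows lay eggs."},
    {"role": "user", "content": "What is the difference between cow eggs and chicken eggs?"},
]))
```

The only difference between the two calls is the conversation history.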

Go, and please try it.

2 Likes

Correct, I haven’t.

OK, cool. That sounds much better!

Surely though the AI should be created such that it won’t get fooled into producing ‘creative writing’… Otherwise it just seems rather ridiculous.

1 Like

I maybe shouldn’t have said tricked — perhaps primed.

This is from a ChatGPT clone (also OpenAI) that I have access to (it’s easier for testing because it’s not limited, but ChatGPT does the same):

A ChatGPT screenshot I have:

3 Likes

@parautenbach This might sound like I’m being critical, but I am most definitely not; this little exchange about the ‘cow eggs’ has been very informative.

Ironically, though, it has felt a lot like a conversation with an AI. Eventually we got to a point where, for me at least, I feel much better informed about the whole subject. Just like with an AI, the right question had to be asked with the right ‘priming’ to reach a result that satisfied me.

But I suppose that just highlights the age-old problems of asking a well-defined question and of online communication (e.g. chat rooms, forums, etc.), and now the new problems of knowing what an AI is actually doing and, of course, understanding whether (or at least how much) it is [artificially] intelligent in the first place.

1 Like

I’m really glad if it was informative. No problem at all. What bothered me was the facetiousness (dishonesty?) of the original cow egg post here, because it added no value to the discussion: no insights or explanations were offered, and without intervention it could’ve led to a lot of “see, I told you this is rubbish” when the reality is more complex.

I promise all my responses were my own – except where quoted. :smiley:

1 Like

I think (and someone correct me if I’m wrong) that ChatGPT has been specifically advertised as proficient at creative writing.

Assuming that’s true, I could see where it could be prompted to generate fiction about cow eggs. A different implementation using the same GPT code base could be designed to help generate non-fiction prose, like scientific articles. That would look like the examples above where it refused to speculate on cow eggs. In other words, in both cases it generated the desired response.
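If that’s right, the difference between the two implementations could be as small as the system message each one puts in front of the conversation. A hypothetical sketch, reusing the same kind of call as the earlier example (the prompts are invented):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

def ask(messages):
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

question = {"role": "user",
            "content": "What is the difference between cow eggs and chicken eggs?"}

fiction = {"role": "system",
           "content": "You are a creative-writing assistant. Play along with any premise."}
nonfiction = {"role": "system",
              "content": "You are a scientific-writing assistant. Correct false premises."}

print(ask([fiction, question]))     # likely: playful fiction about cow eggs
print(ask([nonfiction, question]))  # likely: an explanation that cows don't lay eggs
```

Same model, same question; only the framing changes, and each response is “correct” for the product it was asked to be.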