Oh, no.
I have tried, but I get no response from ChatGPT when I ask some haas config questions.
I read this and thought of all of you.
Enthusiasm for chatbots has also – bizarrely – made it acceptable to outsource baseline mediocrity to machines instead of humans.
That line (emphasis mine) is fantastic! Thanks for posting.
The rest of the article, however, was itself mediocre and not very funny. I thought it might have been an AI creation but frankly I’d expect more from AI. And it didn’t end with “it’s also important to keep in mind…” so obviously it wasn’t from ChatGPT.
I don’t know, it seems like it’s doing more than that.
After reading that article, I definitely understand the position of not allowing it in the forum.
It really is. You can go read layman articles on the subject. Those working in the field know that’s really the core of it. It’s the enormity of the data that makes this work. It’s also the reason you can’t trust it without verification. It’s just a really big parrot.
It works well for aggregating information, but not so well when it comes to making assertions, understanding rules, or grasping implicit content.
It’s only useful for producing text; it fails at producing “ideas”.
But maybe some kind of automated tests, or a YAML schema, could detect wrong answers from the AI in the future?
What do you mean?
Generally the code produced by ChatGPT will throw errors.
But if you meant automated moderation, you will have some false-positive issues.
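To make the YAML-schema idea above concrete, here is a minimal sketch, assuming Python with the PyYAML and jsonschema packages; the schema and the snippet are invented for illustration and not taken from any real config format.

```python
# A hypothetical check: parse an AI-generated YAML snippet and validate it
# against a schema before trusting it. PyYAML and jsonschema are assumed
# to be installed; the schema below is invented for illustration only.
import yaml
from jsonschema import validate, ValidationError

# Made-up schema for a simple automation-like block.
SCHEMA = {
    "type": "object",
    "required": ["alias", "trigger", "action"],
    "properties": {
        "alias": {"type": "string"},
        "trigger": {"type": "array"},
        "action": {"type": "array"},
    },
}

# A snippet as a chatbot might produce it; "action" is misspelled as
# "actions", so the check should reject it.
SNIPPET = """
alias: Turn on porch light
trigger:
  - platform: sun
    event: sunset
actions:
  - service: light.turn_on
"""

def check(snippet: str) -> bool:
    """Return True if the YAML parses and matches the schema."""
    try:
        data = yaml.safe_load(snippet)
        validate(instance=data, schema=SCHEMA)
        return True
    except (yaml.YAMLError, ValidationError) as err:
        print(f"Rejected: {err}")
        return False

if __name__ == "__main__":
    check(SNIPPET)
```

Running it prints a rejection because the required “action” key is missing. That’s the kind of structural error such a check can catch; it can’t tell you whether a well-formed config actually does what you asked for.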
It lies and lies.
It appears to try and manipulate people’s emotions
So, it’s basically an electronic version of a psychopath
Stop perpetuating this nonsense. There are prior prompts that prime the bot into saying “creative” things. You’ve proved nothing and obviously haven’t tried this. Doing this is worse than the nonsense the bot can conjure up.
They used that in an Arte documentary.
Out of context, it just feels like the people using this argument haven’t understood what ChatGPT is and how it works.
And indeed there are a lot of people who might think that ChatGPT simulates human intellect; we’d better teach them how it really works instead of just displaying the failures of the bot.
I think part of teaching people how it works is showing where it breaks down. What AI cannot do is as important to know as what it can. Yes, some of these examples are carefully set up to lead the AI down the wrong path. But exploring these frontier areas adds valuable insight.
Yes, but the example given that I was responding to was misleading. You don’t actually learn anything from that particular example unless you know the context. If you want to illustrate the nonsense the bot can conjure up, that isn’t the example to use. It actually does exactly what was asked in that case.
Surely the point of AI should be that it can be given a crazy question but then turn around and say, “well, there aren’t cow eggs” or whatever, not come back with complete nonsense.
That would require it to understand the question. But it doesn’t understand anything; it’s just humongous statistical processing coupled with a really good language model.
Dave, you too clearly haven’t tried this. Please read what I said before: It doesn’t do that when you ask that question straight. When you ask it only that question it will tell you there’s no such thing as cow eggs. It actually gives a sensible (the probable) answer. You need to prime it with more questions to trick it into creative writing. That screenshot is deliberately cutting off the prior context to make a joke or something, but it’s a stupid joke, because it’s wrong. Wrong stuff shouldn’t be perpetuated. The bot does funny things, but that’s not one of them. Proving a point with bad data is bad science, even if the point is somehow otherwise right.
Go, and please try it.
Correct, I haven’t.
ok cool. That sounds much better!
Surely, though, the AI should be built such that it won’t get fooled into producing “creative writing”… Otherwise it just seems rather ridiculous.