“If AI systems get better, we will revisit this rule.” It’s time.

AgentGPT, Perplexity.ai, and hundreds of other cutting-edge tools are beyond amazing now.

Was the “we will revisit this rule” a genuine statement? The “Leave your AI at the door” statement sounds more like an arrogant, emotional kneejerk insult than a rationally conceived rule. Telling everyone what they can and cannot do isn’t a very socially acceptable way to behave, especially when it’s a ban on the most influential technology the world has ever conceived.

Heaping ridicule on A.I. is the exact opposite of what a community should be doing in the A.I. space right now. We either embrace and evolve with it, or we go extinct at the hands of others who do.

1 Like

It’s been 3 months! And 3 months ago AI was often producing made-up YAML that didn’t work.

Maybe provide some points on why AI will now bring value rather than detract.

1 Like

I would give it 6 months to a year to work the issues and bugs out before plunging in. JMO.

Did you actually read the post…? The “ban” is on taking answers directly from AI sources, without testing them and without attribution. If you find an answer to a question via AI, test it to make sure it works, say in your post where you got it, and then you can post it.

There are still daily examples here, on Discord, and in the FB group of users getting hallucinated answers from AI sources and being frustrated that “Home Assistant is so hard that even AI can’t make it work”.

2 Likes

And they shouldn’t use an AI to write the answer. :wink:

2 Likes

This point has been made elsewhere, but some people don’t seem to get it. So-called AI systems do not “work” - they only look as if they work.

GPT systems are based on monumental amounts of data, but there is no sense at all in which they can be said to understand it. They perform a statistical analysis of which expressions normally follow which other expressions. This makes them very, very plausible and quite often right, but there is really no difference between AgentGPT and an infinite number of monkeys. Possibly game changing, but as @Didgeridrew says, you’ve got to check, test and give fair warning. No, it’s not time.
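
If you want a feel for what “which expressions normally follow which other expressions” means, here’s a deliberately crude toy sketch in Python. It’s just a bigram sampler over a made-up example corpus, nowhere near how a real GPT is trained, but it shows how purely statistical text can read fluently without any understanding behind it:

```python
import random
from collections import defaultdict

# Toy caricature of next-token statistics: count which word follows which
# in a tiny corpus, then generate text by sampling from the observed followers.
# This is NOT how GPT is built, just an illustration of the general idea.
corpus = (
    "the light turns on when motion is detected "
    "the light turns off when no motion is detected "
    "the sensor reports motion when someone walks by"
).split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(10):
    word = random.choice(followers.get(word, ["<end>"]))
    if word == "<end>":
        break
    output.append(word)

# Fluent-looking output, but nothing here "understands" lights or motion.
print(" ".join(output))
```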

There’s a good piece about the value of AI in coding here:

The irony with that is that these ‘AI’ systems basically reuse what other humans have written before, without actually understanding anything, just copying, combining and rephrasing things that have already been written by a human. As more and more people fall for the hype, reposting AI-generated stuff left and right, the generative AI data pool will be progressively contaminated and diluted, like genetic incest. As their own generated output ends up in their training data, they will start regurgitating the same things again and again in increasingly Frankensteinian ways, amplifying bias along the way. It’s going to be a huge mess.
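
To make the “genetic incest” point concrete, here’s a very rough, purely illustrative toy simulation of my own (not a real training pipeline): each “generation” is produced by sampling from the previous generation’s output, and distinct answers steadily disappear, just like rare variants in a small gene pool.

```python
import random

# Rough sketch of the "training on your own output" feedback loop.
# Each generation is "retrained" (here: simply re-sampled) from the previous
# generation's output, so rare answers die out and the pool narrows.
# Purely illustrative; real model collapse is more subtle than this.
random.seed(42)

pool = [f"answer_{i}" for i in range(100)]  # 100 distinct human-written answers

for generation in range(1, 6):
    # "generate" a new pool by sampling with replacement from the current one
    pool = [random.choice(pool) for _ in range(len(pool))]
    print(f"generation {generation}: {len(set(pool))} distinct answers left")
```

Run it and the number of distinct answers drops every generation, even though the pool stays the same size.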

2 Likes

A bit like “Because you watched…” on Netflix. :rofl:

1 Like

“It’s time”.

I’d actually argue that it’s never going to be ‘the time’. Generative AI systems should never be allowed in this context, simply because of the fundamental way they work: they can never be reliable, because they are nothing more than a statistical model. I am really shocked by the number of people, including family and friends (mostly non-technical, but also some techy colleagues), who actually think these generative AI models are ‘intelligent’ in some way or apply some kind of cognitive processing.

The real danger is not these models; in fact they’re pretty interesting technically. It’s the people who completely misunderstand what these models are, what they do and what their limits are. They cannot generate anything actually new. People just don’t seem to get this. They just regurgitate, recombine and recycle existing stuff and make it look new.

2 Likes

Well, that was a FIPO…

It’s not as shiny as it looks: if an important person in the field of AI steps away from a big company to be able to give his opinion on the consequences of AI, that’s a sign.

Netflix again! :rofl:

Most of us do this most of the time. An IQ test is just an exercise in next token prediction. The problem with “AI” is that it has no way of checking that its output is correct or plausible - in fact the question doesn’t even arise. Understanding is not part of the algorithm.

1 Like

Yeah well, thankfully there’s still a bit more to the human intellect than that, or we would still live in caves and hit each other with stones and sticks :stuck_out_tongue: But as you said, even if we’re reprocessing existing information, we have reasoning. We constantly ask ourselves (even subconsciously) “does this make sense?” when presented with information, because we (usually) understand the context. These generative AI models don’t do any of this. They’re essentially glorified autocomplete algorithms. And before the GPT hype people scream that the next GPT release is surely going to do this: no, it won’t. Because, as you mentioned, that’s not even remotely part of how these models work. Nor do we even remotely understand how that part of the human mind works, either.

These models have their uses, as a new and very nice form of search engine, for example. But their results should always be taken with a grain of salt and fact-checked, exactly like a result from a traditional search engine should be.

1 Like

What’s more interesting is the total absence of sources. It puzzles me that any thinking person or authority can even consider GPT/AI as an option, as it just complicates verification of the presented text.

Has anyone asked GPT where it found the information it presents, or requested a full list of the sources for its babble?

It appears to be making those up as well.

It’s tragic that Google and Microsoft are so keen to implement this in their “products”.

Wow. That’s really concerning. That’s what you’d call a fake news generator, in the literal sense of the term.

Ask it to play itself in a game of tic-tac-toe.

Not to worry. Apparently it’s already too late.

The 'Don't Look Up' Thinking That Could Doom Us With AI | Time

1 Like

It’s not time; they still pump out junk for HA. Source: our mod team spots and removes AI-generated “answers” daily from new users who think they can game the system. This is still happening across all platforms. So, no, it’s not time.