Want to help others? Leave your AI at the door

Thank you!

Using an online AI for an ecosystem whose purpose is offline, local control is pretty strange anyway. I guess some people just want to use HA as a bridge for all of their tech rather than keeping it offline.

btw, pot and kettle :slight_smile: :

source: https://chat.openai.com/chat#

which is what it answered when I asked it:

can you explain why card-mod, or more precisely, the mod-card option causes Swiper card to skip certain cards, as described in swiper-pagination-color not applied to 'fraction' · Issue #66 · bramkragten/swipe-card · GitHub

1 Like

… And when/if they click "Did not solve", they would be given URLs to the relevant documentation, based on the configurations/integrations/problem description mentioned in their request. That way they might learn faster how to describe their use case/problem with the relevant information before opening a topic. (In fact, they shouldn't be able to open a new topic if they haven't asked ChatGPT the same question first.) :grin:

The problem is that it costs mods a lot of time to police it. It's not that ChatGPT answers are occasionally wrong; they are always wrong. So what's the point of a partial ban? Btw, there is nothing wrong with asking ChatGPT for an answer, then checking for yourself whether it works and making adjustments as needed. Just don't copy-paste the whole text, as that does more harm than good.

13 Likes

Interesting discussion. I’m not a fan of blanket bans, either, but I see no alternative here. It’s like a blanket ban on spam or posts advertising unrelated commercial services or links to porn sites. It’s not a moral judgement about those things, just that this isn’t the place for them. The mods even said it wasn’t a permanent ban; it will be revisited if things change.

Thank you, Mods, for all the hard work and for taking this step to keep the forum useful!

Again, this isn’t a judgement against AI. If and when it becomes good at answering, you can certainly go to ChatGPT with your HA questions instead of this forum. Maybe this forum will no longer be necessary then. That’s OK. Times change, technology changes. I’ll change with it. We all will. That’s why we like this stuff.

13 Likes

Could not agree more.
ChatGPT generates a good-looking answer out of statistical mulling.
For those of us who are older, you may remember a previous attempt at this kind of program: expert systems.

Similar in objective, but very, very different in the making.
Where ChatGPT is a complex statistics-and-optimisation program that doesn't even attempt to model the underlying concepts, expert systems modelled the problem through semantics and ontologies. They basically needed to understand the problem to give an answer.

For that reason they were difficult and costly to maintain and limited in scope, but more exact.

The moral: no free lunch.

The danger: if people start relying on just that, the noise level in the data will progressively get higher until the results won't even come close to looking good.

Use your knowledge and your brain.

3 Likes

Garbage In - Garbage Out

1 Like

Any technology has potential for both good and bad, sometimes in ways we can't even imagine. Legal and ethical norms should direct and guide us to ensure more of that potential is used for good. Our HA Community Forum guidelines are meant to do the same. There is nothing wrong with trying to make this community a better place by evolving these guidelines.

4 Likes

Yes. It also wastes the time of experienced users, who have to correct the nonsense ChatGPT generates.

As I mentioned earlier in this topic, my first encounter with a ChatGPT-generated reply was in December. I was unaware it was produced by ChatGPT; I only knew that what the user was proposing was completely false (they suggested using a non-existent option).

Despite the fact that I demonstrated the proposed option wasn't documented, and that Home Assistant reported its presence in the configuration as an error, the user never replied to my evidence and continued to promote the use of an option that doesn't exist.
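
To give a sense of what that looks like (this is a made-up example, not the actual option from that exchange): when a key isn't part of an integration's schema, Home Assistant's configuration check typically rejects it as an invalid option.

```yaml
# Hypothetical illustration only; "made_up_option" does not exist.
# For strictly-validated integrations such as template sensors, the config
# check reports unknown keys like this one as invalid options.
template:
  - sensor:
      - name: "Outdoor temperature"
        state: "{{ states('sensor.outdoor_raw') | float(0) }}"
        unit_of_measurement: "°C"
        made_up_option: true  # flagged: not part of the template sensor schema
```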

Why? Probably because they assumed ChatGPT is infallible, and because they had no experience with what it was proposing, so they couldn't explain or defend it. In other words, they took no responsibility for it and were simply acting as a courier.

9 Likes

Fully agree with this. As noted, Stack Overflow did the same. Good thing we did it here.

GPT is certainly impressive from a technical point of view. But for real-world scenarios, especially around code, it's virtually useless. Every time I saw someone claiming that ChatGPT somehow made their code better, faster, less buggy, more functional, etc., and I looked at the code, I found that it had actually made it worse. Sometimes hilariously worse, sometimes dangerously worse by introducing subtle bugs. You'd be lucky if your code still runs after being mutilated by ChatGPT.

By the very way it works, GPT has no clue about context. It doesn't actually understand anything. But context and an understanding of the global, high-level view are exactly what you need to improve, correct or refactor code. There are tools specifically designed for this: they analyze your code, they understand the semantics, and they act accordingly. They are light years ahead of the nonsense ChatGPT produces. Maybe someday we will see a specifically trained GPT model added to Visual Studio (since MS invested in OpenAI). But until that happens, I'll stay far away from GPT for this usage (and most other usages too).

Thinking about it, ChatGPT is a bit like one of those (human) impostors, real or fictional, who managed to pass as lawyers, surgeons or airline pilots just by being persuasive, bullshitting their way through with the right plausible and assertive rhetoric, while having no clue about what they were actually doing. That can be pretty dangerous if you think about it.

6 Likes

If we are going to ban ChatGPT (not "if", it seems), then I would suggest also banning the phrase "just Google it", which too often leads to incorrect/old/outdated information that sends people down a rabbit hole. Takes a sip of tea… but that's none of my business.

3 Likes

Can someone explain to me WHY we are using ChatGPT? I honestly never heard of it till now.

Are real people, who also post real responses, using ChatGPT?

So for example: I'm a real person, I post real stuff. But then someone asks a question and I don't know the answer. Rather than just not answer, I type the question into ChatGPT and then Ctrl-V the ChatGPT response?

Is that what's happening?

Yes, this is what's happening. This is the problem. Use ChatGPT if you want to; just don't post its output here as a response to someone's question.

9 Likes

Sounds like the ban is a good idea. ChatGPT responses are all over the shop. Next to useless for HA, although it did generate a complete, working ESPHome YAML config for a device with a temp sensor and connected display. Possibly useful as a tool, but not the thing to use to answer forum questions.
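
For context, something along these lines is what I mean by a temp sensor and connected display in ESPHome (not the exact YAML it generated; the board, pins and display model here are just placeholders):

```yaml
# Minimal sketch with assumed hardware: D1 Mini, DHT22 on D4, SSD1306 OLED over I2C.
esphome:
  name: temp-display

esp8266:
  board: d1_mini

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

logger:
api:

sensor:
  - platform: dht
    pin: D4
    model: DHT22
    update_interval: 60s
    temperature:
      name: "Room Temperature"
      id: room_temp

i2c:
  sda: D2
  scl: D1

font:
  - file: "fonts/roboto.ttf"  # placeholder: any TTF copied into the config dir
    id: roboto
    size: 14

display:
  - platform: ssd1306_i2c
    model: "SSD1306 128x64"
    lambda: |-
      // Print the latest temperature reading on the OLED.
      it.printf(0, 0, id(roboto), "Temp: %.1f C", id(room_temp).state);
```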

My biggest bugbear is that it will provide perfectly correct and sensible code snippets and algorithms for C++, but then throw in a call to some completely non-existent library or function, rendering the whole thing useless.

Reading all of this, ChatGPT can be banned. I have not read one argument in its favour. Close the thread! :laughing:

1 Like

I agree answers from ChatGPT should be banned for the reasons outlined.

However, the question needs to be asked: if the bot has read all the HA docs and still gets it wrong, what hope is there for the rest of us? :slight_smile:

[Before I get more likes for this post: it seems that ChatGPT is only trained on documents up to 2021, so in the fast-moving world of HA, it will be outdated already.]

12 Likes

Even if ChatGPT gave "correct" answers, it was only trained on data up to 2021. HA has changed a lot since then.

4 Likes

You could have stated it as a friendly request not to use ChatGPT because of the problems it creates, rather than giving us a command. This is still an open-source community, and you dictating rules about where people source their information is not appreciated.

3 Likes

Then start your own forum where ChatGPT is allowed. All open source community forums have rules, and a lot of them are much tougher than the ones here.

18 Likes