I also think the original post is unclear, and it would be a good idea to edit it to better reflect the policy. Here’s what it says:
Using an AI system to generate an answer while not providing attribution to AI will result in a ban. If you use attribution, we will delete your post and issue a warning. This also means suggesting someone “ask ChatGPT” is not an acceptable response.
Under this rule, if I manage through smart questioning to generate a solution to a forum question with ChatGPT, test it, confirm it works, and then post it while attributing the answer to ChatGPT, the post will be deleted and a warning will be issued. That doesn’t seem reasonable, and from the comments below the post from you and CentralCommand, it doesn’t look like that’s what would actually happen.
But that is #4 in Mike’s response. It’s not part of the rule as written in the blog post. Personally, I don’t think we should require users to read all the responses to find out what the rules are, especially when there is someone with a big green moderator badge repeating in a later response that AI may not be used to answer questions, period. An answer that has been tested and works as a solution is still an answer.
You are completely missing the point. ChatGPT and the technology behind it are here to stay. However, it is not ready for prime time just yet and can cause more harm than good in the community discussions. If you want to experiment in your own space, have at ’er, whatever floats your boat. But it should not be used here to influence all the new HA’ers who are here to learn.
My 2 cents.
I’ve asked ChatGPT for advice on the matter (crediting the source), and having personally checked that its answer is correct, valuable, and relevant, this is what it suggested:
So, certainly, ChatGPT has its flaws, as it is still early days.
OTOH, as it is a language model, contextualizing human text might be one of its explicit strengths.
There are benefits to holding your breath for a second in public and simply throwing some thoughts at it privately. As you suggested yourself:
Quite.
Focussing on solutions for the current Assist(ant) development, I’ll go back to the #beta channel and hope to learn, and to help bring that new functionality to the next level.
Please join!
@WillS it’s literally wrong 100% of the time? Literally? You sure that’s what you want to go with?
And I wasn’t arguing against the use of Google. That would be insane. I was commenting on some people’s use of the phrase, which is often tossed around in a dismissive and demeaning way.
My point was that providing someone with an incorrect ChatGPT answer can be just as misleading/frustrating as tossing back “Just Google it”. Case in point: I just googled “How to install Home Assistant”, and the top 10-15 videos dated as far back as 2018. Highly doubtful those videos still hold up (now tbf, the first link is the installation docs, but some people are visual learners and will head straight to YouTube). Providing someone with a ChatGPT answer can be wrong (nothing is 100%). Telling someone to just Google it can also be wrong. They are flip sides of the same coin, imo.
In the context of providing technical/coding answers to HA questions, yes. I’ll add that in my day job I’m an AI modeler, well-versed in PyTorch and TensorFlow, so it’s not like I have some aversion to AI. I just don’t want it to be a source of confusion that leads non-experts off on a wrong trail, then has them coming back with “why doesn’t my code work??”
And the HA documentation, examples, and community forum are always the place to start, instead of Google.
The big difference is that in Google/YouTube you can set a maximum age, so you only get videos that are a few months old.
So if someone says “Google it”, there is actually a good chance you’ll get a good answer, provided you search correctly.
In another discipline I’m involved in, we periodically have training, some of which is simply “things not to do.” Inevitably, when they’re going through those, someone relatively new to the training remarks, “Who would do that? It’s just common sense not to.” To which I usually respond, “The thing you have to understand about every one of these rules is that at least once, someone did just that.”
I’m glad the admins are taking this action. I’ve had a lot of good advice from these forums. I’d hate to see something like this make the experience worse for others.
ChatGPT is terrible at writing code. When I asked it to code a solution for me, the reply was so wrong that even I could see the problem. And I am YAML-challenged.
There is a filter in YouTube and Google.
I’m currently in bed so I can only see the mobile version, but on YouTube it’s the three dots, then Filter.
I believe on Google it’s under Settings or Advanced Search.
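If memory serves, on desktop you can also put the time filter straight into the search URL via Google’s tbs parameter (qdr:m for the past month, qdr:y for the past year). For example, this should limit the installation search mentioned above to the last month:

```
https://www.google.com/search?q=how+to+install+home+assistant&tbs=qdr:m
```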
I tried to find a working SQL query for HA on this forum and elsewhere, and the examples given by humans didn’t work; indeed, they sometimes said they hadn’t fully tested them either - that’s the same issue this post has with using ChatGPT. I then used ChatGPT to write the SQL and it worked the first time, after I had wasted a lot of time on here and on Google. I haven’t published the tested answer on this forum as a reply yet, but I should, and that surely is a valid use of ChatGPT: once it’s been tested and it works.
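For anyone who lands here with the same need, this is roughly the shape a SQL sensor takes in YAML. It’s a sketch only, based on the documented sql sensor platform; the db_url and the database-size query are placeholder examples rather than my actual tested solution, so adapt them to your own setup:

```yaml
sensor:
  - platform: sql
    # Placeholder connection string - point this at your own recorder database.
    db_url: mysql://user:password@localhost/homeassistant
    queries:
      - name: DB size
        # Example query: total size of the homeassistant schema in MB.
        query: 'SELECT table_schema "schema", ROUND(SUM(data_length + index_length) / 1048576, 2) "value" FROM information_schema.tables WHERE table_schema = "homeassistant" GROUP BY table_schema;'
        # The sensor state is read from the column named here.
        column: "value"
        unit_of_measurement: MB
```

Whatever your query does, it needs to return a column whose name matches column:, since that’s where the sensor reads its state from.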