Funnily enough, that works really well. They won't tell you how, but even from the way you click it can kinda tell you are a human.
They might look at "hover" and "on click / left-click" behaviour, among other signals, like how you're not able to "paste" into some text-type fields.
And since all such developments (e.g. chatbots / ChatGPT) will inevitably lead to both counterproductive and improved functions/code, like the ever-challenging virus/antivirus game, one can hope that eventually you'll be informed automatically whether you are responding to a bot, or whether an answer is a copy/paste from e.g. ChatGPT. Until then we can only rely on people's good manners, honesty, and conscience, so people don't get fooled by someone who won't make the small effort it takes to mention "I got this info from ChatGPT" (very short, but informative).
… And moderators don't have to chase ghosts, bots, and jerks.
@tc23 there is a tremendous difference between asking ChatGPT a technical question, and looking up potential answers/solutions on Google. The former is wrong literally 100% of the time, and the latter shows provenance of source, which anyone can assess for veracity.
Nothing personal to the mods but this is absolutely the wrong decision.
ChatGPT is a very powerful tool. Shutting it out of discussion like this isn't the correct solution. It's clearly causing a new headache for moderators because people are using it without the context and skepticism it should be used with, so I get why a new rule about it needs to be in place. But I strongly disagree with this gag order on it. It's a game-changing tool for those of us who are hobbyists, and sharing that tool shouldn't be restricted. I've done SO MUCH more with ChatGPT than I've ever been able to do with help from these forums. I'm self-taught in all of this; I don't have a professional background. I've always had to ask for help, and it's never been consistent help. ChatGPT has DRASTICALLY increased the pace I'm able to go at because of the instant answers to questions. Even if it gets it wrong sometimes, you just have to know that so you can use the tool appropriately.
ChatGPT is an awesome tool for hobbyists to have in our toolbelt. Instead of a gag order, there should be clear guidance on how and when to use ChatGPT to help with Home Assistant, and on what sort of transparency needs to be involved if you use it as a tool to help someone else.
This gatekeeping of the tool, and the insight it can provide, will hurt the Home Assistant community FAR more than the bad advice ChatGPT could give when not used correctly.
Use it then, just don't post its output as a solution to other people's questions. If it produces a correct and tested answer, post it with a note that ChatGPT helped you and how you tested it.
That’s not what this blogpost/new rule says. This clearly says to not use AI tools to help others, or recommend ChatGPT to others.
If what you’re saying is true, then the rule would be FAR more clear by simply being about posting unsourced / untested solutions to someone’s problem, with the note that ChatGPT is causing an influx of these incorrect untested solutions.
Here ya go, mods: I had ChatGPT fix your new rule in a way that's not a gag order on powerful tools for hobbyists.
“All proposed solutions to problems posted in this community must be sourced and tested by the person proposing the solution before being shared with others. Any solutions that have not been properly sourced and tested will be removed and the person who posted them may be subject to disciplinary action. This rule is in place to ensure that all solutions provided in this community are accurate and reliable, and to prevent confusion for those seeking help.”
And it rewrote your blogpost too:
The Importance of Sourced and Tested Solutions in Online Communities
In recent years, the rise of artificial intelligence and natural language processing has led to the creation of powerful tools like ChatGPT, which can generate human-like text based on a given prompt. While these tools can be incredibly useful in many contexts, they can also cause problems when they are used to generate solutions to problems without proper sourcing and testing.
In online communities, where people come together to share knowledge and help each other, the sharing of untested and unsourced solutions can lead to confusion and frustration for those seeking help. When someone posts a solution to a problem that has not been properly sourced or tested, there is a high likelihood that the solution will not work or will even cause more problems. This can lead to users thinking they are doing something wrong, when in reality, the problem lies with the proposed solution.
To combat this problem, we have decided to implement a new rule in our community:
“All proposed solutions to problems posted in this community must be sourced and tested by the person proposing the solution before being shared with others. Any solutions that have not been properly sourced and tested will be removed and the person who posted them may be subject to disciplinary action. This rule is in place to ensure that all solutions provided in this community are accurate and reliable, and to prevent confusion for those seeking help.”
We understand that this may be an inconvenience for some members of our community, but we believe that the benefits of this rule far outweigh the costs. By ensuring that all solutions are properly sourced and tested, we can provide our users with accurate and reliable information that they can trust. This will make our community a more helpful and effective place for everyone.
We hope that you understand and agree with this decision. We are committed to providing our users with the best possible experience, and we believe that this new rule is an important step in achieving that goal.
Please, let us know if you have any questions or concerns about this new rule.
Like any ChatGPT response, it's not perfect and may not be completely correct, but it's a starting point to a better solution than you came up with on your own.
I mean, yes, that's the rule. If you feel it's confusing, feel free to suggest a wording change. But that's the problem, and that's what it seeks to stop.
- If you respond to someone by simply asking the question to ChatGPT and then reposting whatever it says unsourced and untested you’ll be banned.
- Same as above, but if you attribute it to ChatGPT, you'll be warned to stop doing that
- If you respond by saying “go ask ChatGPT” you’ll be warned to stop doing that. Since that is basically the same as #2 (sharing exactly what ChatGPT says in response but attributed to ChatGPT)
- If you go ask ChatGPT, review the response, test it, correct the flaws and then post a working solution - you’ll probably get a solution checkmark and be celebrated as a great member of the community
Point is, do whatever you want with ChatGPT on your own. Just don't repost its responses exactly as-is on threads here. Or on the Discord channel, or in r/HomeAssistant, or anywhere really. Since it gives responses to technical HA questions that sound authoritative but don't work, that's just not useful behavior in any HA community. But if you personally want to use it for your own research before responding or doing stuff, then go right ahead.
I don’t think you read the same blog post I did, because this rule says absolutely nothing about unsourced/untested solutions to people’s problems. It’s a focused rule solely on using AI tools or sharing AI tools as proposed help to other people. It’s a complete gag order on ChatGPT. It’s a powerful tool just like any other, and it is worth being shared. If you use google to grab a solution from somewhere on the internet, and provide it without testing it, that isn’t against the rules. But somehow using ChatGPT, even with every necessary disclosure and attribution, or even recommending it, is ban worthy? That absolutely goes against the Home Assistant community’s nature.
Sharing solutions, tools, and resources is core to the Home Assistant community, so if I've started heavily using ChatGPT to help me build automations, I should be able to recommend that same thing to others, as long as the proper disclaimers and warnings are given about the shortfalls of that tool.
This surprised me a bit, so I looked at your post history; you seem to be mostly active in the feature-requests part of the forum.
I didn't check all topics, but most seemed like legit feature requests.
Sure, there are a few questions, but I couldn't find any topic where I would say the quality was low or inconsistent.
I have, on the other hand, not used ChatGPT myself, but it doesn't sound like it gives very consistent answers either; or rather, it sounds consistently wrong.
But since I have not tested it myself, I'm basing this purely on what I have seen others say.
No, it's not. How would anyone enforce what you're misconstruing this as? You posted a response written in your own words; it's not possible to know how you got to that point. Petro is not going around checking every community member's house to see if they've ever used ChatGPT and retroactively banning them, lol.
If you take something word-for-word from ChatGPT and repost it as a reply then you will either be banned or warned, depending on whether you attributed it. If you use ChatGPT and then write an actually tested response in your words then you won’t be in any trouble, regardless of how you got there.
I’ll keep going if you want but the mods have already made it pretty clear in this thread. Use ChatGPT all you want, just do not repost things directly from it in response to others.
As for my post history, I don't often use forums; I find them inconvenient. I mostly use Discord/Facebook/Reddit when I need help. And across the whole Home Assistant community it's not uncommon to get only general advice, because no one wants to do my work for me. So I'm told "try templates", but I barely understand even the basics of using templates, or something like that. Most people who try to help are well-meaning, but as someone who doesn't have a background in this stuff, I know I'm a bother to anyone who's trying to help me.
As for ChatGPT, the problem people have is that it's consistently not perfect, but that doesn't mean it's not helpful. It's still incredibly helpful as a starting point. These days I'm using ChatGPT as the starting point for pretty much all of my automations. It's interactive, so you can give it feedback on something that's not working and it will correct itself. It really is a game-changing tool for those of us who don't have the background for these things to come naturally.
Can you post examples of the questions you posed and correct answers you received from ChatGPT?
All of the ChatGPT examples that I have seen posted in this forum were incorrect and only served to mislead users. It would be interesting to see how you framed your questions in order to get correct answers.
Yes. It is.
it’s no longer allowed to use ChatGPT or other AI systems to help others.
Using an AI system to generate an answer while not providing attribution to AI will result in a ban.
If you use attribution, we will delete your post and issue a warning.
This also means suggesting someone “ask ChatGPT” is not an acceptable response.
These make it VERY clear: it's a complete gag order on ChatGPT. It's a ham-fisted response to the problem and restricts helpful tools from being shared.
The trick with ChatGPT is to lean on the CHAT side of it. Interact with it. If the given answer doesn’t work, tell it what error you’re getting and it’ll attempt to correct itself. If the solution it gives you includes variables, tell it what your entity_ids are for those variables, or if it uses a service that doesn’t exist, tell it that. etc. It’s a TOOL. It’s not a magic wand.
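To give a purely hypothetical illustration of that feedback loop (the entity IDs below are made up, and the "wrong" service name is just the kind of thing ChatGPT tends to invent), a round of corrections on a Home Assistant automation might end up looking like this:

```yaml
# First draft from ChatGPT called a service that doesn't exist:
#   - service: light.set_brightness    # <- no such service; tell ChatGPT this
# After feeding back the error and your real entity_id, a corrected version:
automation:
  - alias: "Turn on hallway light at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on          # the actual HA service
        target:
          entity_id: light.hallway      # your real entity_id, not ChatGPT's guess
        data:
          brightness_pct: 60
```

The point is exactly what the post above says: you supply the missing ground truth (real entity IDs, real service names, the actual error message), and iterate until the config validates and behaves as intended.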
I could post the finished product if you want, but I have a feeling you'd want to see the whole conversation, which I don't have anymore. But I'll try to remember to follow up tonight and post an example for you.
Cool. You’re wrong. I work for Nabu Casa, I chat with @petro regularly, I pointed him to one of the first examples of these ChatGPT responses and asked what we should do about this. He reviewed with other mods and admins and they came up with this rule. I am telling you with 100% certainty what is and isn’t allowed. I’m sorry you find the post at the top confusing.
What isn’t allowed:
- Replying to someone by posting something word for word from ChatGPT
- Replying to someone by saying “go ask ChatGPT”
What is allowed:
- Literally any other use of ChatGPT (researching before responding, researching for your own personal automations, etc.)
- Writing a #community-guides on how you personally use ChatGPT to solve your own Home Automation problems, if you feel you have a useful, reproducible process that others could follow
Just stop. You're misunderstanding the post. That's it. I've said it multiple times in this thread. CommandCentral has repeated it.
It’s not a complete gag order. Move along, as you’re just trying to win a stupid argument now by repeating yourself.
I’m not wrong, I’m reading. The rules are clear as day when you read them.
If that wasn’t the rule they intended, they should fix the rule then.
The rule, as written, is a complete gag order. If that’s not the intention then fix the rule.
It is not. Do not use ChatGPT to answer questions for people on any HA media platform. Use it to your heart's content otherwise.