In that sense, it mimics human behaviour pretty accurately, most notably politicians and company press officers. Or my teenage son if I ask him whether he still has homework to do.
Sounds to me like there is a widespread opinion, and to some degree actual laws, against such "human behavior": a company has to follow certain rules when selling or commercializing its products. And in this case it's quite obvious that the authors/devs behind ChatGPT and other similar services/products have to rethink and recode before it becomes too common for such services/products to be full of unverified "truth" and fabricated facts with no disclaimers (clear and obvious ones!).
The feedback is noted, but we don’t need to have strict rules about this. We know it’s a grey line. As are most rules. It’s purposely written in that way. You’re really blowing this out of proportion. If you get caught using ChatGPT posting bogus answers, you’ll face consequences in the form of temporary or permanent suspensions on whatever platform you post the answers on. That’s it. The rule isn’t going to change until the tool is in a better place.
My experience with ChatGPT reinforces that this instruction is a good idea.
I tried playing with it using a YAML file I use for an ESPHome device.
I gave it the entire YAML file and told it to refactor it and make it DRY.
It gave me a third of the yaml file back. I guess it got distracted.
Figured okay, it was probably too big for it, so I then gave it a largish lambda with the same command.
It stripped out all the logging statements, which I needed, but it did make some useful suggestions and did seem to describe the code in a literal sense. It inspired a rewrite that made the code more elegant and solid, though the code it gave wasn't particularly useful as-is.
Bottom line: it's a good sounding board if you know what you're doing, or in my case almost know what you're doing (you hope). If you don't, it'll just confuse you. If someone is asking a question and you have to use it to try to answer, then neither of you knows what you're doing.
Unbelievable that your PR has been declined. It seems strange that they ask people to update documentation but then knock it back for nonsensical reasoning. And we wonder why people can't be bothered to update stuff.
And then they lock the thread so it can't be discussed; way to go for discouraging a community that's trying to help.
Maybe update this page so that people are at least not confused by the blog post.
People tried, to no avail, so the confusion will continue.
I mean, this is a forum. Rules are enforced by users flagging a post they think is problematic for mods to review. Users generally don’t flag helpful posts.
Not all helpful people willingly break the rules so they can help others.
Out of proportion? Yes, in a way. I’ve spent way too many posts on this, it’s not a big deal but it’s so easy to make it better. I was trying to be helpful. I’ve proposed an edit to the blog post in GitHub, that’s the last I’ll do.
In case someone is curious about what uses ChatGPT might have for HA, I've been using it with surprising amounts of success to rewrite old YAML automations to Python for AppDaemon. I often need to generate the answer several times, but when it nails it, it's quite good and saves me a lot of time.
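For anyone wondering what that kind of conversion looks like, here's a hypothetical sketch of a trivial YAML motion-light automation rewritten as an AppDaemon app. The entity names and the `MotionLight` class are made up for illustration, and a tiny stub stands in for AppDaemon's `hassapi.Hass` base class so the snippet runs on its own (a real app would subclass `appdaemon.plugins.hass.hassapi.Hass` instead).

```python
# Roughly equivalent YAML automation (entity names are hypothetical):
#   trigger:
#     - platform: state
#       entity_id: binary_sensor.hall_motion
#       to: "on"
#   action:
#     - service: light.turn_on
#       entity_id: light.hallway

class Hass:
    """Minimal stand-in for appdaemon.plugins.hass.hassapi.Hass,
    just enough to run this sketch without a Home Assistant instance."""

    def __init__(self):
        self._callbacks = []
        self.turned_on = []  # records entities we "turned on"

    def listen_state(self, callback, entity, new=None):
        # Register a callback for state changes of `entity`,
        # optionally filtered to a specific new state.
        self._callbacks.append((callback, entity, new))

    def turn_on(self, entity):
        self.turned_on.append(entity)

    def fire(self, entity, old, new):
        # Test helper: simulate a state change event.
        for callback, ent, wanted in self._callbacks:
            if ent == entity and (wanted is None or wanted == new):
                callback(entity, "state", old, new, {})


class MotionLight(Hass):
    """The YAML automation above, as an AppDaemon app."""

    def initialize(self):
        # Called by AppDaemon on startup; wire up the trigger.
        self.listen_state(self.on_motion,
                          "binary_sensor.hall_motion", new="on")

    def on_motion(self, entity, attribute, old, new, kwargs):
        # The "action" part of the automation.
        self.turn_on("light.hallway")
```

The Python version is longer for something this simple, but once you need conditions, timers, or shared helpers, having it as a class pays off.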
Of course, copy-pasting raw GPT output here is a terrible idea. And for the people who think it's not easy to detect, I guess they haven't played with it enough: the mistakes it makes in obscure corners of programming like YAML for HA go wrong in fairly specific directions and are quite easy to spot.
Good luck with that; @nickrout tried and it was rejected.
Yeah, I was a bit disappointed, but I think @frenck's response was to do with the fact that my proposed change was too forum-centered, whereas the rule also relates to other support channels, e.g. Discord, Reddit, Facebook (god forbid, support on FB!) and whatever other support channels exist.
But none of those other channels have updated their rules to include the new ruling. There is an announcement on Discord that points to this blog. FB has nothing official, same with Reddit, only user-provided links to the blog post. All roads seem to point to the blog post.
Maybe instead of outright rejecting it, they could have proposed edits; looking at your PR, the wording could be applied to any of these support channels.
A Machine Learning category could be added to the forum, and the ban on answering questions with ChatGPT output passed off as your own answer would still hold.
Allard's changes don't bring anything new that I can see in regard to the spirit of the announcement. There is now an OpenAI integration, so it's not outright rejected, and there can't possibly be a ban on discussing it.
Not sure about Allard's changes, but Nick's brought clarification of what's allowed and were copied from Commandcentral.
Sadly it is not just Smart Products.
(But this is getting off topic…)
I know, sorry for the distraction, it just made me think of this thread.
It seems I got a lot of answers from ChatGPT and didn't know it.