Not sure about Allard's changes, but Nick's brought clarification of what's allowed and were copied from Commandcentral.
Sadly it is not just Smart Products.
(But this is getting off topic…)
I know, sorry for the distraction, it just made me think of this thread.
It seems I got a lot of answers from ChatGPT without knowing it.
So I posted this automation before I had read this post about not using ChatGPT:
The flow contains a couple of function nodes that I asked ChatGPT to write.
I made some changes to make it work and tested each piece of code of course.
Is that allowed? I feel it should be…
If it is then perhaps the first post should be nuanced a bit in terms of what is allowed and what is not.
I would see this post as okay.
You researched using the tools at hand, verified and corrected the code so it works for you, and posted it to the community for others to share. Full kudos in my book.
I really think it's the title of this rule that is freaking everyone out. AI isn't completely banned, just the improper posting of unedited/unverified AI code to answer questions.
That’s totally wrong.
There was an attempt made to clarify the blog post to say what you just stated, but that change was shot down. The rule stands as it is, strictly interpreted.
Even if you test the output and verify it, you’ll still be banned.
This is according to Frenck’s clarification:
Part of the suggested change to the rule was “Using an AI system to generate your answer without proper research or testing is strictly banned.” (Underlining is mine). This was rejected, with the comment:
TL;DR: Don’t use it. Don’t care if it is researched or not. We can’t see, nor do we accept. If we detect you are using ChatGPT you will risk a ban. That is the whole point of this blog.
If you are looking to bending rules or to find edge cases: I’m sorry.
As well as:
Don’t use ChatGPTs output with helping people. Doing so, may results into bans. The blog post couldn’t be any clearer in the first paragraph:
it’s no longer allowed to use ChatGPT or other AI systems to help others.
It’s that easy.
Not a fan of banning anyone who uses ChatGPT but then meticulously reviews the code and verifies everything is absolutely correct before posting.
But banning Chicken Littles who say this new policy is an affront to our right to free speech? Hmm…
Your wish has been granted! OpenAI Conversation - Home Assistant
Post back if you use it.
And you obviously read the page you posted the URL to before comparing the HA integration with ChatGPT, right? … Wrong.
That’s a bit of an abrupt reply.
Is the OpenAI conversation engine not equivalent to ChatGPT? ChatGPT is created by OpenAI, so I'd expect the conversation engine to be similar. Or is the OpenAI Conversation integration limited to just information about the HA instance it is connected to?
I see what is written on the linked page, but I cannot confirm whether the limit is truly just what is in your HA instance without testing it myself or seeing someone else use it.
Right … I just wondered how you came to your conclusion, then. Does the page mention ChatGPT? Does anywhere else in the documentation (open source), or any other topic here, mention that it's ChatGPT?
People aren't likely to use GPT-3 to try to answer users' questions here, since, as I understand it, the dataset used for it is more limited than the one used by ChatGPT.
The PR that added it says it uses GPT-3. I'm not really up to date on the differences between ChatGPT and GPT-3, but my understanding is that they are different.
This one I can definitively answer - the integration has zero knowledge about the HA instance it's connected to. As the docs you linked earlier say:
This conversation agent is unable to control your house. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide OpenAI with the details of your house, which include areas, devices and their states.
So it cannot answer any questions about the specific HA instance the integration is installed in (i.e. your house). It has not been provided that information.
Well, the OpenAI conversation agent certainly knows what entities you have, but it cannot control them.
You can ask it “what entities do I have” and get a sensible answer.
Oh. I get it. I misread it. It's saying it can only query information, and it's informing users that Home Assistant had to provide OpenAI with details of your house to do so. It's not explaining a limitation; it's letting you know what info is being provided to enable the feature.
I think I was thrown off by the first sentence. I thought it was telling us about a limitation and then explaining why that limitation exists. But it's also telling you what it can do after that first sentence.
Thanks for the correction. I guess I was unable to answer that question after all.
Well, this whole thread is scratching so many heads because you all are not clear. Petro is weighing in as being sensible about it - don't use it to answer questions, don't offer it untested - which is entirely in the vein of being sensible.
But as @JWilson clarified, @frenck clearly shot it down in the update:
At the end of the day, I will support any decision along these lines until it’s proven that AI can be reliable. But others want to push and test to see how AI is maturing and see gaps or questions in the original blog post.
My recommendation: edit or get rid of the blog, create actual rules that the Nabu Casa and moderator teams agree with, and post them on all communication platforms that are monitored by the HA team.
Be consistent, be clear. The longer this goes on, people will continue to poke the bear until there are clear rules and use cases - if any.
This is an English-only forum.