The PR that added it says it uses GPT3. I’m not really up to date on the differences between ChatGPT and GPT3 but my understanding is that they are different.
This one I can definitively answer - the integration has zero knowledge about the HA instance it’s connected to. As the docs you linked earlier say:
This conversation agent is unable to control your house. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide OpenAI with the details of your house, which include areas, devices and their states.
So it cannot answer any questions about the specific HA instance the integration is installed in (i.e. your house). It has not been provided that information.
Oh. I get it. I misread it. It’s saying it can only query information, and it’s informing users that Home Assistant has to provide OpenAI with details of your house to do so. It’s not explaining a limitation, it’s letting you know what info is being provided to enable the feature.
I think I was thrown off by the first sentence. I thought it was telling us about a limitation and then explaining why that limitation exists. But it’s also telling you what it can do after that first sentence.
Thanks for the correction. I guess I was unable to answer that question after all.
Well, this whole thread has so many people scratching their heads because you all are not clear. Petro is weighing in sensibly about it: don’t use it to answer questions, don’t offer it untested. That’s in the vein of being absolutely sensible.
But as @JWilson clarified, @frenck clearly shot it down in the update:
At the end of the day, I will support any decision along these lines until it’s proven that AI can be reliable. But others want to push and test to see how AI is maturing, and they see gaps or open questions in the original blog post.
My recommendation: edit or get rid of the blog post, create actual rules that the Nabu Casa and Moderator teams agree on, and post them on all communication platforms that are monitored by the HA team.
Be consistent, be clear. The longer this goes on, people will continue to poke the bear until there are clear rules and use cases - if any.
My point is that if someone is behaving as a bridge for ChatGPT, they bring no value to a community.
Also, there are dedicated communities for testing ChatGPT’s capabilities, so aside from integrating its API into HA there is no reason to use it here.
If someone manages to get a useful answer from it, like a way to generate well-structured blueprints from natural language, they should explain how they achieved it and not just answer others with the bot’s output.
No one comes here to talk with a bot anyway, so there’s no loss in banning those who interact this way.
There seem to be a few misconceptions about how systems like ChatGPT work.
It’s true that they scan through monumental numbers of web pages for their source material, but there’s no sense in which they engage with the meaning of any of them. What they do is build a statistical model of what words normally follow other words. That’s it. It makes their output very plausible, but quite unreliable.
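To make that concrete, here’s a toy sketch (my own illustration, nothing from any real model) of the core idea: a bigram model that only counts which words follow which. Real systems like ChatGPT use neural networks trained on vastly more data, but the principle of predicting the next word from statistics rather than meaning is the same.

```python
from collections import Counter, defaultdict
import random

# Tiny training "corpus" (a real model trains on billions of words).
text = "the cat sat on the mat and the cat slept on the mat".split()

# Count, for every word, which words follow it and how often.
counts = defaultdict(Counter)
for current, following in zip(text, text[1:]):
    counts[current][following] += 1

def next_word(word):
    # Pick a follower weighted by how often it appeared after `word`.
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# Generate text: plausible word sequences, zero engagement with meaning.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output reads as plausible English, yet nothing in the program engages with what any of the words mean - which is exactly the limitation described above.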
When we all get used to the idea, I can see them becoming accepted as a tool for producing first drafts. A person with real expertise will be needed to correct and rewrite. In the meantime, moderators are right to be cautious.
That line (emphasis mine) is fantastic! Thanks for posting.
The rest of the article, however, was itself mediocre and not very funny. I thought it might have been an AI creation but frankly I’d expect more from AI. And it didn’t end with “it’s also important to keep in mind…” so obviously it wasn’t from ChatGPT.
It really is. You can go read layman articles on the subject. Those working in the field know that’s really the core of it. It’s the sheer scale of the data that makes this work. It’s also the reason you can’t trust it without verification. It’s just a really big parrot.