Want to help others? Leave your AI at the door

The PR that added it says it uses GPT-3. I’m not really up to date on the differences between ChatGPT and GPT-3, but my understanding is that they are different.

This one I can definitively answer - the integration has zero knowledge about the HA instance it’s connected to. As the docs you linked earlier say:

This conversation agent is unable to control your house. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide OpenAI with the details of your house, which include areas, devices and their states.

So it cannot answer any questions about the specific HA instance the integration is installed in (i.e. your house). It has not been provided that information.
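
For anyone curious what “providing OpenAI with the details of your house” amounts to in practice: conceptually, the integration renders those details into the text prompt it sends along with your question. Here’s a rough sketch of the idea - not the integration’s actual code, and the entity names, states, and model choice are made up for illustration (the PR only says GPT-3):

```python
# Conceptual sketch only: how a conversation agent can "know" your
# entities without being able to control anything. Names hypothetical.
import openai  # pre-1.0 openai package, from the GPT-3 era

openai.api_key = "sk-..."  # placeholder

house_context = (
    "Areas and devices in this house:\n"
    "- Living Room: light.living_room (on), sensor.living_room_temp (21.5 C)\n"
    "- Bedroom: light.bedroom (off)\n"
)

question = "What entities do I have?"

response = openai.Completion.create(
    model="text-davinci-003",  # assumption; any GPT-3 completion model
    prompt=f"{house_context}\nUser: {question}\nAssistant:",
    max_tokens=150,
)
print(response.choices[0].text)
```

The model only ever sees that text. There is no channel back through which it could call a service, which is why it can describe your entities but not switch them.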


Well, the OpenAI conversation integration certainly knows what entities you have, but cannot control them.

You can ask it “what entities do I have” and get a sensible answer.


Oh. I get it. I misread it. It’s saying it can only query information, and informing users that it had to provide OpenAI with details of your house to do so. It’s not explaining a limitation, it’s letting you know what info is being provided to enable the feature.

I think I was thrown off by the first sentence. I thought it was telling us about a limitation and then explaining why that limitation exists. But it’s also telling you what it can do after that first sentence.

Thanks for the correction. I guess I was unable to answer that question after all :laughing:


Well, this whole thread is scratching so many heads because you all are not clear. Petro is chiming in as being sensible about it: don’t use it to answer questions, don’t offer it untested; it’s all in the vein of being absolutely sensible.

But as @JWilson clarified, @frenck clearly shot it down in the update:

At the end of the day, I will support any decision along these lines until it’s proven that AI can be reliable. But others want to push and test how AI is maturing, and they see gaps or questions in the original blog post.

My recommendation: edit or get rid of the blog post, create actual rules that the Nabu Casa and moderator teams agree on, and post them on all communication platforms monitored by the HA team.

Be consistent, be clear. The longer this goes on, people will continue to poke the bear until there are clear rules and use cases, if any.



This is an English-only forum.

So you joined the forum just to be abusive? Congratulations, good start!

Not sure why people even bother, but as far as Google Translate can be trusted, this was not an abusive post, but a confirming one?

“Well said. Useless correct nonsense is a waste of our lives.”

It seems to be agreement with the original stance taken.

This is actually what a not insignificant number of in-person ‘facts’ or recommendations are anyway.


My point is that if someone is acting as a bridge for ChatGPT, they have no value for the community.

Also, there are dedicated communities for testing ChatGPT’s capabilities, so besides integrating its API into HA, there is no reason to use it here.

If someone manages to get a useful answer from it, like a way to generate well-structured blueprints from natural language, they should explain their achievement and not just answer others with the bot’s output.

No one comes here to talk with a bot anyway, so there’s no loss in banning those who interact this way.


There seem to be a few misconceptions about how systems like ChatGPT work.

It’s true that they scan through monumental numbers of web pages for their source material, but there’s no sense in which they engage with the meaning of any of them. What they do is build a statistical model of what words normally follow other words. That’s it. It makes their output very plausible, but quite unreliable.
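
To make that concrete, here is a toy illustration of my own (not how any production model is actually built): a bigram model that picks each next word purely from counts of which words followed which in its training text. Real systems use enormous neural networks rather than counting, but the core task, predicting the next token, is the same.

```python
# Toy next-word predictor: counts which word follows which in a corpus,
# then generates text by sampling from those counts. No understanding
# of meaning is involved anywhere; it is statistics over word order.
import random
from collections import Counter, defaultdict

corpus = (
    "the light is on the light is off the sensor is on "
    "the switch is off the light turns on"
).split()

# Build a table: word -> counts of the words seen immediately after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Sample the next word proportionally to how often it followed.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the light is on the switch is off ..."
```

Scale that table up by many orders of magnitude and you get output that reads fluently, for exactly the reason it can’t be trusted: plausibility, not truth, is what it optimizes.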

When we all get used to the idea, I can see them becoming accepted as a tool for producing first drafts. A person with real expertise will be needed to correct and rewrite. In the meantime moderators are right to be cautious.

There’s a good article on the subject here:



I have tried, but got no response from ChatGPT when I asked some HA config questions.

I read this and thought of all of you.

Enthusiasm for chatbots has also – bizarrely – made it acceptable to outsource baseline mediocrity to machines instead of humans.


That line (emphasis mine) is fantastic! Thanks for posting.

The rest of the article, however, was itself mediocre and not very funny. I thought it might have been an AI creation but frankly I’d expect more from AI. And it didn’t end with “it’s also important to keep in mind…” so obviously it wasn’t from ChatGPT.


I don’t know, it seems like it’s doing more than that.

After reading that article, I definitely understand the position of not allowing it in the forum.

It really is. You can go read layman articles on the subject. Those working in the field know that’s really the core of it. It’s the enormity of the data that makes this work. It’s also the reason you can’t trust it without verification. It’s just a really big parrot.



It works well for aggregating information, but not when it comes to making assertions or understanding rules or implicit content.

It’s only useful for producing text; it fails at producing “ideas”.


But maybe some kind of autotest, or a YAML schema, could detect wrong answers from the AI in the future?

What do you mean?

Generally the code produced by ChatGPT will throw errors.

But if you meant automated moderation, you will have some false-positive issues.
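
For what it’s worth, the structural half of that is doable today. Below is a minimal sketch of my own, with a deliberately simplified schema (not Home Assistant’s real one), that validates an AI-generated automation snippet using voluptuous, the same library HA uses for config validation. It catches broken YAML and missing keys, but a snippet can pass this and still do the wrong thing, which is exactly where the false positives and negatives come from:

```python
# Sketch: structural validation of an AI-generated automation snippet.
# This catches malformed YAML and missing/unknown keys, but a snippet
# can pass this check and still do the wrong thing in your house.
import yaml
import voluptuous as vol

# Simplified schema for illustration, not Home Assistant's actual one.
AUTOMATION_SCHEMA = vol.Schema({
    vol.Required("alias"): str,
    vol.Required("trigger"): [dict],
    vol.Optional("condition"): [dict],
    vol.Required("action"): [dict],
})

ai_output = """
alias: Turn on hall light
trigger:
  - platform: state
    entity_id: binary_sensor.hall_motion
    to: "on"
action:
  - service: light.turn_on
    target:
      entity_id: light.hall
"""

try:
    config = yaml.safe_load(ai_output)
    AUTOMATION_SCHEMA(config)
    print("Structurally valid (still needs a human sanity check).")
except (yaml.YAMLError, vol.Invalid) as err:
    print(f"Rejected: {err}")
```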