Incorrect sensor data in OpenAI answers

I use the OpenAI Conversation integration to ask ChatGPT questions about my sensors, e.g. about air quality. For this I adopted the configuration from here, which passes the entity states to OpenAI: OpenAI Home-Assistant Together At last
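
The linked configuration essentially dumps the current entity states into the system prompt on every request (it does this with a Jinja2 prompt template in the integration options). Here is a rough Python sketch of the same idea, just to show what the model actually receives; the URL, token, and entity IDs are made-up placeholders:

```python
# Read live entity states out of Home Assistant and paste them into the
# system prompt, so the model answers from current data rather than from
# conversation memory. URL, token, and entity IDs are placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"
HEADERS = {"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"}
ENTITIES = [
    "sensor.living_room_co2",
    "sensor.living_room_tvoc",
    "binary_sensor.front_door",
]

def build_system_prompt() -> str:
    lines = ["You are a smart home assistant. Current entity states:"]
    for entity_id in ENTITIES:
        state = requests.get(
            f"{HA_URL}/api/states/{entity_id}", headers=HEADERS, timeout=5
        ).json()
        unit = state["attributes"].get("unit_of_measurement", "")
        lines.append(f"{entity_id}: {state['state']} {unit}".rstrip())
    lines.append("Answer only from the states listed above.")
    return "\n".join(lines)
```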

I am using the GPT-3.5 Turbo model. In principle it works, but some of the answers are incorrect.

Example: if I ask about the air quality in the living room based on the CO2 and TVOC values, I get a detailed answer, but sometimes the values are not correct. After asking two or three times, I sometimes get the right answer. The same happens when I ask about the status of doors: ChatGPT claims a certain door is open even though it is closed. If I then ask “Are you sure?”, it sometimes corrects itself, but not always.

Does anyone have an idea what the problem is? Hallucinations? Is GPT-3.5 Turbo just no good?

You are getting exactly what I would expect from an older AI model that is neither trained on current HA operations nor properly grounded… LLMs are not magic.

Still, it is strange: sometimes the sensor values are output correctly, and at other times they are completely different.

And that is exactly what I would expect from that older LLM.

In commercial installations with the latest models, the tightest controls, full RAG, and a grounding prompt a mile and a half long, the AI still gets it wrong sometimes…

It’s just an overgrown autocorrect. You’re using a free tier of an older model with very little training and grounding at all, and even worse, it was trained on old data (see the data cutoff for 3.5 Turbo).

I’m surprised it’s getting anything right.
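
To make the grounding point above concrete: the idea is to retrieve only the entities relevant to the question, fetch their live states, and attach them to every request, so the model never has to “remember” a value. A toy sketch, assuming the same placeholder Home Assistant URL and token as above and a hand-written keyword map:

```python
# Toy grounding/RAG loop: look up the entities relevant to the question,
# fetch their live states from Home Assistant, and send them with every
# request so the model answers from fresh data, not from chat history.
# The keyword map, URL, and token are assumptions for illustration.
import requests
from openai import OpenAI

HA_URL = "http://homeassistant.local:8123"
HEADERS = {"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"}
KEYWORDS = {
    "air": ["sensor.living_room_co2", "sensor.living_room_tvoc"],
    "door": ["binary_sensor.front_door", "binary_sensor.back_door"],
}

def get_state(entity_id: str) -> str:
    r = requests.get(f"{HA_URL}/api/states/{entity_id}",
                     headers=HEADERS, timeout=5)
    return r.json()["state"]

def ask(question: str) -> str:
    # Retrieval step: a crude keyword match stands in for a vector store.
    entities = [e for word, ents in KEYWORDS.items()
                if word in question.lower() for e in ents]
    states = "\n".join(f"{e}: {get_state(e)}" for e in entities)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Current entity states:\n{states}\n"
                        "Answer strictly from these states; if a value is "
                        "not listed, say you do not know."},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```

Even with the states pasted in, the model can still misread or paraphrase a value, which is the point: the prompt has to carry the truth on every turn, because the model itself has no access to your sensors.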
