Issues getting a local LLM to tell me temperatures, humidity, etc

I’ve set up a local LLM (Ollama) using llama3.2:latest as the model. The voice assistant is all set up, and I have fairly decent responses for turning things on/off and querying their state (e.g. “is the office light on? Turn it on, please”).

But, when I try to query things like the current temperature it has issues, either not being able to answer at all, or just complaining about unavailable sensors. Here’s a typical conversation:

How can I assist?

What is the temperature in the office?

The current temperature in the office is unavailable. The sensor data indicates a reading of 21.9°C, but it is not providing real-time updates or notifications.

What temperatures do you know about?

The temperatures I know about are:
- 19.27°C in the Living Room
- 19.0°C in the Bedroom and Office
- 21.9°C in the Office
- The current temperature is unavailable for some areas.

Please note that the exact device class or sensor name is not specified, but these temperatures are associated with specific devices or sensors within the smart home system.

The office does have a number of different temperature sensors that are exposed to the voice assistant. Is this what is causing the confusion? Though in the kitchen, for example, I only have a single temperature sensor and get this:

What is the temperature in the kitchen?
The current temperature in the kitchen is unavailable.

Even though I can click on the exposed entity and see there is a current temperature reading.

So, not sure what I’ve got misconfigured, etc. Any help would be much appreciated.

So I’ve set up a helper to take the three sensors in the office and give me a median value, called it “office temperature”, and exposed it to the voice assistant.
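For reference, the YAML version of that helper looks something like this (the three entity IDs are placeholders for my actual sensors):

```yaml
# Sketch: a min_max helper exposing the median of three office sensors.
# The entity IDs below are placeholders.
sensor:
  - platform: min_max
    name: "Office Temperature"
    type: median
    entity_ids:
      - sensor.office_temp_1
      - sensor.office_temp_2
      - sensor.office_temp_3
```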

what is the temperature in the office?

The current temperature in the Office is unavailable.

what is the office temperature?

The current temperature in the Office is 19.1°C.

So I’m getting really inconsistent results.

With LLMs in general, they are great at summarizing data…

IF they know what the data represents.

I don’t know how you arrange your devices and sensors, but the same thing applies to the whole lot of them.

If you were to list them, their area, domain and alias in a big table and show that table to your grandmother with zero context.

What will your grandmother say?

(cause that’s basically where you’re at.)

In short it’s only as good as your ability to describe what it is.

You can use the alias field on any entity (where you select to expose it to Assist) and describe the thing.

Or you can write a description of what those things are in your prompt. Yes, it’s a lot more detailed than that, but if you ASSUME that’s your starting point you won’t give yourself poor expectations.
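For instance, a made-up excerpt of what that kind of description in a prompt can look like (the entity IDs and wording are invented, not from my prompt):

```
You can report room temperatures. Known sensors:
- sensor.office_temperature: median of the three office sensors, in °C
- sensor.kitchen_temperature: the only kitchen temperature sensor, in °C
```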

I can get mine to tell me pretty much anything I want now, but my user prompt is currently about 8 single-spaced pages and burns out the conversation length in about 3 round trips of a chat. (Not ideal; yes, I’m actively optimizing, and no, I won’t share it - it’s quite unique to my installation, has tons of PII in it, and needs a lot of work. What’s in it is not important, just that it CAN be done, if you work on it.) It’s also taken me nearly three full months of tuning.

The art of prompt crafting (a number of industry individuals, including myself, are trying to avoid the term “prompt engineer” - it’s not accurate) is telling the LLM exactly what you want in as few words as possible.

The problem you describe comes from the LLM not having enough context. Your prompt and the resulting data set give the context. As LLMs get better and transition into reasoning models you’ll see this get way better quickly - probably later next year as the reasoning models get to be en vogue.

Sorry for the general-theory answer, but you had a very theoretical question that a lot of people need to hear, because they’re misunderstanding what the LLM can do.

Remember, right now, unless you are in a lab building these things, what you have access to is basically three steps above an overgrown autocorrect. It pattern-matches and tells stories like a banshee. Don’t expect it to be a wizard.


Thanks so much for the reply, it helped a lot. I think I was expecting too much, so I had best lower my expectations a bit. Still, it’s been a fun experiment so far and I’ll keep messing with it.

tl;dr, a turnkey solution isn’t there yet. :stuck_out_tongue:

I’m not shy of having a good delve into the nitty gritty, but I’ve also got too many projects and too little time.

Thanks again.


No problem. Notice I didn’t say stop? :wink:

When you give the LLM a LOT of context this is what happens…



Room location comes from Bermuda, and there is a report that looks for the AI’s people and makes a best guess - that is called as a template in the prompt.
Tasks: I pull an aggregate of a task list for the AI and one for the family and… you guessed it - dumped in the prompt. Etc., etc.
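As a rough illustration of what “called as a template in the prompt” means, a generic Jinja snippet like this (not my actual template - the filtering and wording are invented) dumps every temperature reading with its area into the prompt:

```jinja2
{# Sketch: list every available temperature sensor, with its area, in the prompt #}
Current temperatures:
{% for s in states.sensor
   if s.attributes.device_class == 'temperature'
   and s.state not in ['unknown', 'unavailable'] %}
- {{ area_name(s.entity_id) or 'no area' }}: {{ s.state }} {{ s.attributes.unit_of_measurement }}
{% endfor %}
```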

Build an intent or script that answers with the data, then tell the LLM to summarize that, and it APPEARS as magic. In reality it’s just a lot of trial and error.
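A minimal sketch of that pattern - the intent name and entity ID here are made up:

```yaml
# Sketch: an intent that answers with live data, which the LLM then phrases.
# "OfficeTemperature" and the sensor entity ID are invented for illustration.
intent_script:
  OfficeTemperature:
    speech:
      text: "The office temperature is {{ states('sensor.office_temperature') }}°C."
```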

And things like… Oooh, now I can write a report that tells me the temps and ETA of the food being monitored by the new Combustion Inc. thermometer my wife got me…

  • write, write, write -
    …dump it in the prompt if it’s active
    (core temp <> unknown - see the sketch below) aaaand…

Now the AI knows when whatever is on the thermometer will be done - I also temporarily told it it was a turkey thermometer, so everyone enjoyed asking on Friday when the turkey was going to be done

…every thirty seconds. :poultry_leg:
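The “dump it in the prompt if it’s active” part is just a conditional in the template - something like this sketch (the Combustion entity IDs are invented):

```jinja2
{# Sketch: only include the cook report while the probe is actually reporting #}
{% if states('sensor.combustion_core_temp') not in ['unknown', 'unavailable'] %}
Food on the thermometer: core temp {{ states('sensor.combustion_core_temp') }}°C,
predicted ready at {{ states('sensor.combustion_ready_eta') }}.
{% endif %}
```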
