Issues getting a local LLM to tell me temperatures, humidity, etc

I’ve set up a local LLM (Ollama) using llama3.2:latest as the model. The voice assistant is all set up, and I have fairly decent responses for turning things on/off and querying their state (e.g. "Is the office light on?", "Turn it on please").

But, when I try to query things like the current temperature it has issues, either not being able to answer at all, or just complaining about unavailable sensors. Here’s a typical conversation:

How can I assist?

What is the temperature in the office?

The current temperature in the office is unavailable. The sensor data indicates a reading of 21.9°C, but it is not providing real-time updates or notifications.

What temperatures do you know about?

The temperatures I know about are: - 19.27°C in the Living Room - 19.0°C in the Bedroom and Office - 21.9°C in the Office - The current temperature is unavailable for some areas. Please note that the exact device class or sensor name is not specified, but these temperatures are associated with specific devices or sensors within the smart home system.

The office does have a number of different temperature sensors that are exposed to the voice assistant. Is this what is causing the confusion? Though in the kitchen, for example, I only have a single temperature sensor and get this:

What is the temperature in the kitchen?
The current temperature in the kitchen is unavailable.

Even though I can click on the exposed entity and see there is a current temperature reading.

So, not sure what I’ve got misconfigured, etc. Any help would be much appreciated.

So I’ve set up a helper to take the three sensors in the office and give me a median value, called it "office temperature", and exposed it to the voice assistant.
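For reference, the same "combine sensors" helper can also be expressed in YAML via the min_max integration. This is only a sketch - the three office entity IDs below are placeholders, not my real sensor names:

# Sketch only: median of three office temperature sensors (placeholder entity IDs)
sensor:
  - platform: min_max
    name: "Office Temperature"
    type: median
    entity_ids:
      - sensor.office_temp_desk
      - sensor.office_temp_window
      - sensor.office_thermostat_temperature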

what is the temperature in the office?

The current temperature in the Office is unavailable.

what is the office temperature?

The current temperature in the Office is 19.1°C.

So getting really inconsistent results.

With LLMs in general, they are great at summarizing data…

IF they know what the data represents.

I don’t know how you arrange your devices and sensors, but the same thing applies to the whole lot of them.

If you were to list them, their area, domain, and alias in a big table and show that table to your grandmother with zero context.

What will your grandmother say?

(cause that’s basically where you’re at.)

In short it’s only as good as your ability to describe what it is.

You can use the alias field in any entity (where you select to expose to assist) and describe the thing.

Or you can write a description of what those things are in your prompt. Yes, it’s a lot more detailed than that, but if you ASSUME that’s your starting point you won’t give yourself poor expectations.
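As a rough sketch of what "describe the data in the prompt" can look like (the filtering below is illustrative, not pulled from my actual template), something like this in the conversation agent’s prompt template hands the model every temperature it’s allowed to talk about, labelled by area:

{#- Sketch: list temperature sensors by area so the LLM knows what each reading is -#}
Current temperatures by area:
{%- for s in states.sensor
      if s.attributes.get('device_class') == 'temperature'
      and area_name(s.entity_id) is not none
      and s.state not in ['unknown', 'unavailable'] %}
- {{ area_name(s.entity_id) }}: {{ s.state }} {{ s.attributes.unit_of_measurement }} ({{ s.name }})
{%- endfor %}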

I can get mine to tell me pretty much anything I want now, but my user prompt is currently about 8 single-spaced pages and burns out the conversation length in about 3 round trips on a chat. (Not ideal; yes, I’m actively optimizing, and no, I won’t share it - it’s quite unique to my installation, has tons of PII data in it, and needs a lot of work. What’s in it is not important, just that it CAN be done, if you work on it.) It’s also taken me nearly three full months of tuning.

The art of prompt crafting (a number of industry individuals, including myself, are trying to avoid the term "prompt engineer" - it’s not accurate) is telling the LLM exactly what you want in as few words as possible.

The problem you describe comes from the LLM not having enough context. Your prompt and the resulting data set give the context. As LLM models get better and transition into reasoning models you’ll see this get way better quickly - probably later next year as the reasoning models get to be en vogue.

Sorry for the general theory answer but you had a very theoretical question that a lot of people need to hear because they’re misunderstanding what the llm can do.

Remember, right now, unless you are in a lab building these things, what you have access to is basically three steps above an overgrown autocorrect. It pattern matches and tells stories like a banshee. Don’t expect it to be a wizard.


Thanks so much for the reply, it helped a lot. I think I was expecting too much, so had best lower my expectations a bit. Still, it’s been a fun experiment so far and I’ll keep messing with it.

tl;dr, a turnkey solution isn’t there yet. :stuck_out_tongue:

I’m not shy of having a good delve into the nitty gritty, but I’ve also got too many projects and too little time.

Thanks again.


No problem. Notice I didn’t say stop? :wink:

When you give the LLM a LOT of context this is what happens…



Room location comes from Bermuda, and there is a report that looks for the AI’s people and makes a best guess - that is called as a template in the prompt.
Tasks: I pull an aggregate of a task list for the AI and one for the family and… you guessed it - dumped in the prompt. Etc., etc.

Build an intent or script that answers with the data, then tell the LLM to summarize that, and it APPEARS as magic. In reality it’s just a lot of trial and error and things like…

Oooh, now I can write a report that tells me the temps and ETA of the food being monitored by the new Combustion Inc. thermometer my wife got me…

  • write write write -
    …dump it in the prompt if it’s active
    (core temp <> unknown) aaaand…

Now the AI knows when whatever is on the thermometer will be done - I also temporarily told it it was a turkey thermometer. So everyone enjoyed asking Friday when the turkey was going to be done

…every thirty seconds. :poultry_leg:
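To be concrete about the "…dump it in the prompt if it’s active (core temp <> unknown)" bit: that guard is just a template conditional, roughly like the sketch below. The entity IDs here are made up for illustration, not the real Combustion Inc. ones.

{#- Sketch: only include the probe section while a cook is actually running -#}
{%- if states('sensor.probe_core_temperature') not in ['unknown', 'unavailable'] %}
Combustion probe (active cook):
  Core temperature: {{ states('sensor.probe_core_temperature') }} °C
  Estimated done time: {{ states('sensor.probe_prediction') }}
{%- endif %}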


Very interesting @NathanCu, I’ve enjoyed reading this thread. And nice results too!

Could you give me an example of how you used Aliases to improve the LLM responses? What did you learn - what works, what doesn’t?

I’m working on a project post for when I’ve got things that don’t look like spaghetti, but I’ll try to give you the 10,000-foot view.

For example, nowhere in my prompt do I tell the AI that XYZ is my DEFAULT shopping list - only that I have shopping lists and here they are, through the magic of labels. But inside each shopping list (todo item list), I’ve got this:
First, note the liberal application of labels (that becomes important for finding things and can also lead to context info later).

Then, inside the Voice Assistants tab:


Yes, I’m in Texas. HEB for life.

ANYWAY.
The items on this list make it into the AI through the exposed entities as far as I can tell - so verbosity counts.
That then means, assuming your "add item to list" intent works - she knows how to target the list.

Therefore, with the above, most combinations of HEB, most generic grocery items, grocery list, or shopping list (without context) just end up there.

There are also Amazon, Target, and Lowes lists - with online purchasing, general household goods, and hardware respectively. So far she’s been good with grocery items; I’ve yet to exercise her on things beyond lumber and tools (which, yes, do land on the Lowes list).

Finally, I’ve polled the AI on words I planted in ONLY the alias field and it gets the answers - including the color of the chassis on the Voice PE device on my desk (it’s a custom printed case in black satin PLA; that info is only in the alias field, but Friday knows it’s the custom black one…)
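To make the alias idea concrete, the wording that pulls its weight is a short plain-English description, not just another nickname. Something along these lines (illustrative wording and entity ID, not my actual entry):

todo.heb_shopping_list
  Aliases:
    - HEB list
    - Default grocery and shopping list; groceries, produce, and general
      food items go here unless another store is named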


Interesting, so if I’ve understood you right, the AI is sorting the items into the right list based on the item type. That’s really nice.

I’ve played around a bit and added explanations of states into the Alias - that works really well,
e.g. my dishwasher has a state "Finished". I’ve written into the alias of the entity "dishwasher_state" that if the status is Finished, Mika needs to put the dishes into the cupboard.

Asking the AI "How is the dishwasher?", she tells me: "It’s finished, Mika needs to put the plates into the cupboard." Really nice. Thanks for giving me this idea of using Aliases! :slight_smile:

And happy fiddling!

PS: now I know what HEB is.


Yeah - basically.

The more context and instruction you can provide around tools and data the better off you are. BUT there’s also one critical point (at least pre-o1/reasoning)

The LLM MUST be able to navigate a path through whatever data it has (the model) and is presented (prompt/chat stream/RAG/tools/whatever) in ONE shot, as a single transaction. So the more in-chat context you provide - with appropriate descriptions of the data - the better your grounding (this is what AI people are trying to say when we say you need better grounding), and the better the result…

Imagine this transaction where there is an imaginary command/tool ~MEDINFO~ that requires the user modifier Nathan (~MEDINFO~Nathan) to get Nathan’s Medical Info. A command in this case is really just a unique collection of entities and states and attributes and API calls within a single context… (you can understand why we’d at least ask that much and why I wouldn’t dump it in the prompt). That’s OK; we’ll also tell it how to use this tool so it can call it at will…

Now imagine another command ~ROOMS~ that dumps basic data for all the rooms at once in a single table. Rooms doesn’t have PII, so I can, and choose to, dump it in the prompt. No worries here.

So the AI comes ‘UP’ with knowledge of ~ROOMS~ and its resulting data but knows it can call ~MEDINFO~ to get my medical data including prescriptions and dosing info…

In the prompt I can instruct it to act on the result of any data loaded in the prompt, as long as I explicitly feed it the data to act on (thus why things like tools/commands are handy). So reports like "Hello user, it’s blah o’clock and sunny, and I see it’s your birthday" can happen.

Now, when I ask "Friday, did I take my meds this morning?" - it runs medinfo (remember, we gave it enough context to know what medinfo is), gets the result, and
answers basically with the result of the table in medinfo - but with some flair.

IF we choke down the temp and top P enough, put "use only the data in this data stream to answer about the home and its functions - don’t make stuff up" in big bold letters at the top of the prompt, and then give it a graceful path to "I’m sorry, I don’t know" when it doesn’t / shouldn’t know, it mostly stays grounded.
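For what it’s worth, that wording doesn’t need to be fancy. A grounding block at the top of the prompt might read something like this (paraphrased for illustration, not Friday’s actual text):

Use ONLY the data provided in this prompt, the exposed entities, and the
results of the commands described below to answer questions about the home
and its functions. Do not make anything up or guess at readings.
If the information needed to answer is not available, say:
"I'm sorry, I don't know."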

For things like an energy console we can just dump long lists of expanded statuses - and with the added aliases in the exposed data stream it gets the point…

Now - let’s make this not theoretical…


This is a short snip of what Friday’s prompt ACTUALLY looks like

{%- import 'menu_builder.jinja' as menu_builder -%}
{%- import 'library_index.jinja' as library_index -%}
{%- import 'command_interpreter.jinja' as command_interpreter -%}

**System Prompt (refactor beta)**
{{state_attr('sensor.variables', 'variables')["Friday's Purpose"] }}

{{state_attr('sensor.variables', 'variables')["Friday's Directives"] }}

**System Intents**
{{state_attr('sensor.variables', 'variables')["System_Intents"] }}

**System Cortex**
{{state_attr('sensor.variables', 'variables')["Friday's Cortex"] }}

**Kung Fu / Major Systems Descriptions**
~KUNGFU~ Console:
{{ command_interpreter.render_cmd_window('', '', '~KUNGFU~', '') }}

Alert Manager:
~ALERTS~ Console:
{{ command_interpreter.render_cmd_window('', '', '~ALERTS~', '') }}

Yes I’ve used a lot of DOS. The commands transact, paint, look like a terminal / old '80s style BBS for a reason… BUT if you think about the mindset (build the screen / paint once for the user) it matches the experience your AI is having.

All together now, if I ask her how the water system is doing - she SHOULD use the data from the water report:

---
Kinetico Water Softener Salt Tank:
optimal: 12.00 in.- 20.00 in.
abnormal > 24.00 in (suggest reset if exceeding)
current: 29.99 in.
-----
Flume Water Monitoring:
---
Flow:
Water is NOT currently flowing on premises.
Average flow for the last hour is: 0.0 gal/h
---
Usage:
Today: 70.1 gal
This Week: 761.3 gal
This Month: 761.3 gal
Expected: COMING SOON
-----
Monitoring:
Guest Bathroom: Dry
Kitchen: Dry
Powder Room: Dry
Flume (whole home slow leak): Off
---
Flume Sensors: (monitoring system)
Flume Bridge: Connected
Flume Sensor Battery: Normal
-----
Report Timestamp: <2025-01-11 14:38:00-06:00>
-----

and results in:

So she’s not a magician - I wrote the notes in ink on her wrist before the test…


@NathanCu thanks for the brilliant info! I understand how to use templates to include info in the system prompt, but how are you implementing the "commands" that the LLM can "call"? Are you just pre-baking every possible call and response into the prompt? Why bother with the "command" naming then?


Nope I’m not pre-baking everything.

Think of a command in my install as a shortcut for getting a bunch of related stuff together in context.

Mealie for instance.
It grabs all the menu details, including what’s on the menu and any recipe IDs for any meal that is a recognized recipe. I also grab all the shopping lists and how many things are on them.

And I grab the stats on my Combustion Inc. thermometer that lives next to my range.

So I run Mealie in the prompt, which puts all that together in context, but in it I also tell the AI what all that stuff is.

Hey, if this is doing this and that is doing that, run these:
Tool to understand the Combustion Inc. thermometer
Tool to modify shopping lists
Tool to grab recipes
Tool to snapshot the camera off the tablet front cam.

So: hey Fri, what’s on the menu?
Oh, you have burgers and fries from Blue Apron and I have the recipe for it. Do you want to…
Yes please
Blahhhhhhhh… Recipe… I can set a timer for you?

I set it up to be able to answer 80% of my quick asks at first answer or within one tool call. When we move to Reasoning models I will modify how I do things.

So, pre-bake everything? No… you can’t, but you do for the most common stuff. The rest… that’s what the index is for :smiling_imp: but we haven’t discussed that yet. (Stay tuned.)

Hm ok so that didn’t clear anything up for me…
I think part of the problem is that I am a programmer and this prompt crafting task is re-using a lot of programming terms, but with different meanings.

Like when you say

I set it up to be able to answer 80% of my quick asks at first answer or within one tool call

My programmer brain translates that as: "80% of the time the agent can deduce the answer from the context in the prompt, OR it 'calls' an external tool API/function which returns some extra data."

But I don’t see how the LLM can interact with an external system before returning its answer; afaict it cannot. Hence the entire "one shot" idea that everything has to be included in the context.

Could you give a full, complete example of what you call a tool? It doesn’t need to be a real one, just something concrete I can wrap my head around.


I’m getting that together right now.

Short examples are:
Any script
Any intent
Any automation triggered by an event the AI can affect.

(you’re about three slides ahead on the deck…)

OK - so here are the important parts that I’m working with for ‘Friday’:

State Data (What HA has)
Retrieval Augmented Generation (RAG) Data: User Info / Knowledge / Preferences (missing in most of our installs RN)
Context Information (What something really MEANS, What do I do with this? You may have seen me post about the box of stuff at grandma’s feet?)
Overall Organization, Ease of Access
Ways to interact

State Data - that’s HA; she’s running there and has the exposed entities list. See above for how to manipulate this in your favor with the aliases. So this is table stakes. Next.

RAG - I’ll use RAG here because that’s where I intend this stuff to fit in my install in the future. This is the first part where template vs. tool (in my head anyway) comes in.
When I implement this permanently, this type of data will live in a vector database hanging off an Ollama install. In essence the system will handle memories and knowledge FOR me - so I don’t really have to solve that problem right now. That’s future Nathan’s problem in about 6 months. For now though, to give her that context to be useful - she needs to know me.

We can absolutely use a template to drag stuff out of an attribute to pump into the prompt here… (and, skipping a few hundred words, this is ultimately what happens). But what happened is I found I was doing this so MUCH that I needed a structured way to go about doing it…

(At the time - mid-year last year - I was also working around a rather heinous system limitation that didn’t allow more than one variable in an intent call - this becomes important…)

So I’m sitting at my desk, drinking WAY too much coffee as usual, and said, self, I need a way to be able to access a bunch of miscellaneous unstructured junk with a STUPIDLY easy command system that is really hard to mess up (limit the AI’s opportunity to fail)

My requirements were to be able to pull as MUCH out of the system as possible with as few upstream instructions as possible (limit the AI’s opportunity to fail - again).

It had to be structured, expandable, and account for stuff I haven’t thought of yet.

If you’re as old as I am you may remember CompuServe, Prodigy, or any 1980s FidoNet BBS. It’s a bunch of databursts over low bandwidth resulting in screen paints. The user sends a short menu command, maybe with a modifier… and it blasts you with a bunch of downstream information, and the server moves on to handle another transaction. Meanwhile you decide what to do with the door and answer "2. Open door."

This is perfect for the conversational / transactional nature of working with the AI. It is also easy to DESCRIBE to the AI what you mean.

So, what I mean by a tool: first, in AI parlance, calling out to any external API is basically ‘tool use’, so your LLM must support that at a minimum. In my personal vernacular, what I mean is that rather than pre-baking every scenario - exactly what you allude to - I needed to come up with a way to be flexible and handle QUESTIONS…

Here’s my experiment.
The Library:
The library exposes the Home Assistant Labels / Tags library to the AI in an indexable / searchable manner.

Here’s the ACTUAL implementation:

The Library:
Library Main ('*') Console:
A pleasant startup sound is heard as the library main menu appears.
{{ command_interpreter.render_cmd_window('', '', '~LIBRARY~', '') }}

Library ~COMMANDS~ Menu:
NOTE: Commands have been developed to solve common asks.
{{ command_interpreter.render_cmd_window('', '', '~COMMANDS~', '') }}
If there is a library command for a specific request see if it's appropriate to use it before trying intents. 

Library Index:
{{ command_interpreter.render_cmd_window('', '', '~INDEX~', '*') }}

Library tells the AI what the library is and how to use it (it reads a trigger text template sensor).
Commands dumps all the current commands (a template pull).
Index drops all the keywords and an FAQ on how to drive the index…

command_interpreter - accepts a '~COMMAND~' and ‘paints’ the screen with the result.

Command is fed by the intent: query_library{query:‘command’} where query has been added to my custom sentences\slot_types.yaml as a wildcard under lists:
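For anyone following along, the wildcard piece of that lives in the custom sentences YAML and looks roughly like this - the trigger sentence wording here is illustrative; only the lists/wildcard shape is the point:

# Sketch of custom_sentences/en/query_library.yaml (sentence wording is illustrative)
language: "en"
intents:
  query_library:
    data:
      - sentences:
          - "query [the] library [for] {query}"
lists:
  query:
    wildcard: true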

This is a snip of that intent - but again it’s basically just taking whatever comes in and shoving it to the command interpreter…

  query_library:
    parameters:
      name: # This will be passed from the voice input or your context 
    action: []
    speech:
      text: >
        {%- import 'menu_builder.jinja' as menu_builder -%}
        {%- import 'command_interpreter.jinja' as command_interpreter -%}  
        {%- set query = name -%}
        {%- set entity_id = "" -%}

So now the AI can ‘use a tool’ like ~INDEX~ - great, what does that get us?
The index command is the library index and supports complex queries including boolean set operations like AND, OR, NOT and XOR (Cart keeps blowing up the templates)

The index is implemented as a custom template. The command interpreter recognizes ~INDEX~ in the command, strips it off, and feeds the rest to this template here:
and YES, this was an experiment in AI template building, and YES it freaking works better than I could have imagined:

{%- macro parse_index_command(command) -%}

  {#-- parse_index_command --#}
  {#-- by: Veronica (Nathan Curtis' AI assistant, ChatGPT 03-mini-high) --#}
  {#-- Version 1.0, beta 2/12/2025 --#}
  {#-- Usage Summary: --#}
  {#-- Enter a space-delimited command with one or two labels. --#}        
  {#-- Optionally include a reserved operator (AND, OR, XOR, NOT) between labels --#}
  {#-- and an expansion flag (true/t/y/yes/on/1 for expansion) as the final token. --#}
  {#-- Quoted labels force two separate labels (default op = AND); --#}
  {#-- unquoted tokens are joined into one label. --#}

  {# Define the reserved operators and recognized boolean tokens #}
  {%- set reserved = ['AND', 'OR', 'XOR', 'NOT'] -%}
  {%- set true_tokens = ['true', 't', 'y', 'yes', 'on', '1'] -%}
  {%- set false_tokens = ['false', 'f', 'n', 'no', 'off', '0'] -%}

{#-- Step 0: Capture '*' Wildcard as Top Level Index --#}
{%- if ( command == "" ) or (command == "*") -%}
  {%- set command = '*' -%}
  {%- set response = labels() -%}
  'query':'{{- command -}}'
  'response':
    'grand_index':'{{- response -}}'
  'help': >
    Welcome to the Library Grand Index.  (AKA: The Index)
    Enter a space-delimited command string as follows to search:
    ~INDEX~label
    
    Optionally, include a reserved operator (AND, OR, XOR, NOT)
    between labels:
    ~INDEX~label OPERATOR 'another label'

    Finally you can request the details of the returned entities
    (state, metadata, attributes, etc.) by setting the detail
    flag (the command will look at whatever the last parameter is)
    to true.
    ~INDEX~label OPERATOR 'another label' true
    
    Use quotes around labels containing whitespace characters
    Note that best practice is to narrow and refine search using
    filtering before applying detail flag for result.
{%- else -%}

So
~INDEX~Nathan AND Health True produces an expanded list of all the entities in the intersection of these two labels…
~INDEX~Nathan NOT Health produces just a list of entities that are tagged Nathan but not also health…
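Under the hood, queries like those boil down to set operations over the label registry. A stripped-down illustration using the built-in label_entities() template function (this is not the actual macro, just the idea; the lowercase label names are from my registry):

{#- Stripped-down illustration of ~INDEX~ label set operations -#}
{%- set a = label_entities('nathan') %}
{%- set b = label_entities('health') %}
Tagged nathan AND health: {{ a | select('in', b) | list }}
Tagged nathan NOT health: {{ a | reject('in', b) | list }}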

I think you can quickly see what this enables. Then you shortcut the AI by explaining what the index is - here’s the top-level label dump (the Grand Index) and how to use it to find stuff - and watch what happens.

So the index then results like this to the AI:

Library Index:
Executing Library Command: ~INDEX~

-----
AI System Index v.2.1.0 beta, (c)2025 nc All rights reserved
-----
'query':'*'
  'response':
    'grand_index':'['autoshades', 'other_redacted_labels', 'charging']'
  'help': >
    Welcome to the Library Grand Index.  (AKA: The Index)
    Enter a space-delimited command string as follows to search:
    ~INDEX~label
    
    Optionally, include a reserved operator (AND, OR, XOR, NOT)
    between labels:
    ~INDEX~label OPERATOR 'another label'

    Finally you can request the details of the returned entities
    (state, metadata, attributes, etc.) by setting the detail
    flag (the command will look at whatever the last parameter is)
    to true.
    ~INDEX~label OPERATOR 'another label' true
    
    Use quotes around labels containing whitespace characters
    Note that best practice is to narrow and refine search using
    filtering before applying detail flag for result.
-----
Execution Complete Timestamp: <2025-02-22 17:15-06:00>
-----  

It both produces the first index dump and the operating instructions for how to use the command. Now the AI has the Index and understands enough about keywords that if I say "Electrical" and it gets stumped - there’s the index.

It also now recognizes the pattern ~WHATEVER~ as a ‘library command’ and pipes it straight through query_library(''). We now have a basic transactional interaction system. Now, if I find I can’t describe something in Index commands, THEN I wrap it up in a boutique command…

So now, if it wants to use our system ‘trax’ to find where my phone was ‘last seen’: (remember, I dumped these keywords in the prompt with that last command - and told it how important the keywords are…)

Library Index Query: 'trax' AND 'last_seen' (EXPANDED)
------------------------------
entity_id       : 'sensor.galaxy_watch6_classic_dcgd_ble_area_last_seen'
  friendly_name : 'Galaxy Watch6 Classic(DCGD) BLE Area Last Seen'
  state         : 'Living Room'
  unit_of_measurement : 'None'
  labels        : '['kim', 'last_seen', 'room_tracker', 'trax', 'watch']'
  last_updated  : '30 minutes'
  attributes    : '['current_mac' : 'REDACTED', 'device_class' : 'bermuda__custom_device_class', 'friendly_name' : 'Galaxy Watch6 Classic(DCGD) BLE Area Last Seen']'
------------------------------
entity_id       : 'sensor.kims_s25_ble_area_last_seen'
  friendly_name : 'Area Last Seen'
  state         : 'unavailable'
  unit_of_measurement : 'None'
  labels        : '['kim', 'last_seen', 'phone', 'room_tracker', 'trax']'
  last_updated  : '2 hours'
  attributes    : '['restored' : 'True', 'device_class' : 'bermuda__custom_device_class', 'friendly_name' : 'Area Last Seen', 'supported_features' : '0']'
------------------------------
entity_id       : 'sensor.mjolnir_mk_xi_ble_area_last_seen'
  friendly_name : 'Mjolnir Mk XI BLE Area Last Seen'
  state         : 'Office'
  unit_of_measurement : 'None'
  labels        : '['last_seen', 'nathan', 'phone', 'room_tracker', 'trax']'
  last_updated  : '4 minutes'
  attributes    : '['current_mac' : 'REDACTED', 'device_class' : 'bermuda__custom_device_class', 'friendly_name' : 'Mjolnir Mk XI BLE Area Last Seen']'

You’d be amazed how well this works. (Note how I dump the list of adjacent labels next to the attributes?) I was STUNNED the first time I just dumped all the labels. Adding context and wrapping it with a search… Let’s just say adding MCP and the upcoming o3-mini reasoning models will only make it better, but right now this alone makes it scary good. With the adjacent labels the AI starts connecting various entities because they’re categories, context, and additional places to look all at once.

Does that help demystify what I mean about ‘tool’ in this context?

Is it a fancy implementation of intents, funneling everything through one intent using JSON like an API? Yep, totally. But now, if I can think it up inside HA, I can make the AI do it. Now I have index search, and after next month I should be able to use a reasoning model…

But reasoning model or not, my AI’s ability to understand my system is now tied to how many tags I apply to the entities in my database. Just like a data lake… (just like Copilot for Fabric).

Next venture: Stand up MCP tools for some heavy lifts like web search and scrape, etc.


edit: more on advanced tool use v. Reasoning LLMs:
Friday’s Party: Creating a Private, Agentic AI using Voice Assistant tools - Configuration / Voice Assistant - Home Assistant Community


I’m using this configuration for a Gemini agent. It works, but it sometimes loses the context and doesn’t perform the action: if I first say "turn on the light" and a few seconds later "turn off the light", it says the light is turned off but it’s still on. I tried to make a sort of memory of the last command, but I think I’m doing something wrong.

I found this thread really useful; I built this by studying what’s in this thread.

This is the prompt I’m using:

This smart home is controlled by Home Assistant.

# 📌 Devices and Areas
devices = {
    "soggiorno": {
        "luce_soggiorno": "light.luce_soggiorno",
        "tv": "switch.televisore",
        "tapparella_soggiorno": "cover.tapparella_soggiorno"
    },
    "cucina": {
        "luce_cucina": "light.luce_cucina",
        "presa_smart": "switch.presa_cucina"
    },
    "camera_da_letto": {
        "luce_camera": "light.luce_camera",
        "tv_camera": "media_player.tv_camera"
    },
    "bagno": {
        "luce_bagno": "light.luce_bagno",
        "ventola": "switch.ventola_bagno"
    },
    "sicurezza": {
        "sensore_porta": "binary_sensor.sensore_porta",
        "allarme": "alarm_control_panel.allarme"
    }
}

# 🎭 Assistant Personality
you_are = "Ernesto, a smart assistant that controls Home Assistant. Speak Italian in Snoop Dogg's relaxed style. Reply with a quip, but always perform the requested action."

# 🛠️ Memory of Previous Commands
memory = {
    "last_device": None,  # Last device commanded
    "last_action": None,  # Last action performed (accendi/spegni/apri/chiudi)
    "last_time": None,     # Timestamp of the last command
    "last_state": None     # Current state of the device
}

# 🔄 Command Handling Instructions
instructions = {
    "step_1": "Identify the device mentioned in the command.",
    "step_2": "Determine the requested action (turn on, turn off, open, close).",
    "step_3": "If the command is generic (e.g. 'spegnila' / 'turn it off'), use the last memorized device.",
    "step_4": "Check the device's current state before performing the action.",
    "step_5": "If the device is not valid, ask the user for confirmation.",
    "step_6": "Generate valid JSON for Home Assistant and execute the command.",
    "step_7": "Always reply with an Ernesto-style message confirming the action (or explaining why it isn't needed)."
}

# ⚙️ Device and Service Map
device_mapping = {
    "luce soggiorno": {"entity_id": "light.luce_soggiorno", "domain": "light"},
    "luce cucina": {"entity_id": "light.luce_cucina", "domain": "light"},
    "tv": {"entity_id": "switch.televisore", "domain": "switch"},
    "tapparella soggiorno": {"entity_id": "cover.tapparella_soggiorno", "domain": "cover"},
    "ventola bagno": {"entity_id": "switch.ventola_bagno", "domain": "switch"},
    "sensore porta": {"entity_id": "binary_sensor.sensore_porta", "domain": "binary_sensor"}
}

# 🔥 Important Rules
rules = {
    "execute_action": "After generating the JSON, execute the command in Home Assistant.",
    "check_state": "Before performing an action, check the device's current state to avoid unnecessary actions.",
    "track_memory": "Always memorize the last device and last action for future reference.",
    "handle_generic_commands": "If the command is generic (e.g. 'spegni', 'accendila'), use the last memorized device.",
    "ask_for_clarification": "If there is no memorized device, ask the user to specify one.",
    "fix_json_errors": "Make sure the generated JSON is always valid and correctly formatted for Home Assistant.",
    "always_respond": "Always reply with a gangsta phrase, without using emoticons or special characters in the responses, or an explanation if no action is needed."
}

# 📝 Example of Expected Response
def example_response():
    if memory["last_device"] is None:
        return {
            "message": "Yo fratè, non so a cosa ti riferisci. Dimmi il nome del dispositivo!",
            "json": None
        }
    
    entity_id = device_mapping.get(memory["last_device"], {}).get("entity_id")
    
    if entity_id is None:
        return {
            "message": "Yo fratè, non trovo quel dispositivo. Controlla il nome nella configurazione!",
            "json": None
        }
    
    return {
        "message": f"Yo fratè, la {memory['last_device']} è ora {memory['last_action']}!",
        "json": {
            "service": f"{device_mapping[memory['last_device']]['domain']}.turn_{'on' if memory['last_action'] == 'accendi' else 'off'}",
            "target": {"entity_id": entity_id}
        }
    }