About making inexpensive models smarter by providing tools and context. (local models, gpt-5-mini, gpt-4.1-mini, gpt-4o-mini ...)

Or… You need. To be willing to let the llm burn some toks on exploration. Then write it down somewhere. (next up when I get around to posting file cr… Er DojoTools file.). I need to write a volume formatting tool first (almost done)

Short version. You’re right m. Yih still have to write a bunch of well crafted hints. And new skills will need reinforcement… Where to store it is the question.

What im finding is let the tools guide. What I mean by this - you’ll find soon enough you need a tool to explore entity details that exposes… (im in the middle of writing it for this very reason) common attributes (including last changed) and gives bits of metadata like should this ent actually have hist stats so you can do more targeted queries. Hint this tool works really well at the output end of an index…

Some of the tools I’m augmenting the error messages (my grocy script - 405 error reminds the llm no you’re doin it wrong, and you can watch it instantly retry - If you use a reasoning model.)

Oh as to when was last movement. Expose last updated then tell the llm it can interpret last updated as last whatever for appropriate ents… Done. As long as it understands what the ent is (you did put in your metadata? :))

Ill post current DojoTools Inspect when I get home from the movies. It’s Clobberin’ Time…

1 Like

Exposing last changed is a nice idea.
Will definitely add that to the returned objects in the Entity Index tool. :slightly_smiling_face:

And yes, letting the tool learn from error or hints that get added to the script response is something I already learned from your scripts and already use. :+1:

here you go: New tools

1 Like

Thanks, will take a look later. :slight_smile:

One more thing that I currently miss: The possibility to get the aliases for the entities in a script.

If Assist uses my Entity Index to find stuff (instead of looking it up in the data provided from HA) it doesn’t always seem to look up the other entity details it knows.

So it often doesn’t use the descriptions I placed in the entity aliases to get the bigger picture about the question.

I’m currently trying to get that info from the websocket API and map that into my response.
Sadly this isn’t a simple all-in-one-script solution then.
Needs a Node-Red flow, python script or something similar that gets called by the script and sends back the result I guess.

1 Like

It’s not available anywhere I can see it. A way to get that and the exposed_enties set would be amazing

You are right, it isn’t in there even that the content otherwise reflects the core.entity_registry file.
Seems like the only possibility is to parse this file then (dirty, dirty …) :confounded:

1 Like

there you go How to find entity_id using the Voice Alias using templates? - #10 by joie96

as you said, not the cleanest solution, but has been working for a while

2 Likes

Thank you very much. :slight_smile:

I just (in the moment I received the notification about your message) finished a Node-Red flow that

  • reads the file
  • saves an json file with the relevant keys/values to a different location below config/www/…
  • which is then read by a restful sensor into an attribute (because an attribute can hold more than 250 chars, which is the limit of the state).

But very cool to see that others have been tinkering with this as well. :+1:

1 Like

I’ll write it into my indexer. Of course I just posted an update… :rofl: (but it’s a good one.)

I updated the scripts Calculator and Entity Index with enhanced error replies to the LLM, as I noticed that it made some mistakes quite often and needed many tries to get it right.

Calculator reports now an error with the correct operations if e.g. something like sum was used instead of add.

Entity Index returns an similar error and the list of possible values when a not existing tag was supplied.

The updated scripts are in the orgiginal posts (see the first post for links).

1 Like

False wake word detections:

Sadly even “Ok, Nabu” isn’t nearly as reliable on the VPE as Alexa on Amazon Hardware (which in fact doesn’t show any false triggering anymore, as this improved vastly over the last years).
At least if you use the VPE outside your playground in your study like e.g. the living room with all its noise from conversations, radio and tv.

So, I added this to my prompt and it helps a lot.

Important note about mistakenly triggered voice input:
-------------------------------------------------
Most of the time we communicate to you 
through audio devices with microphone and speaker.
This sometimes leads to wrong wake word detections.
So, if you get text as input that doesn't make sense 
or sounds like we didn't want to talk to you, 
cancel the conversation.
Simply use a single space charater as response in this case. 
Don't reply with any text or questions.

edit: I missed to explain the LLM in a detailed way how to differentiate between false and correct wake work detections.
This made the LLM often respond in a weird way as it wasn’t sure if the question should be answered or not.
Better use this prompt to get more reliable results:

The VPE simply ends the conversation with a short red blinking ring and no voice response.
(And more important: Not trying to make any sence of the text, followed by starting crazy actions in your smart home. My favorite so far was turning off the lights and starting the vacuum robot …) :wink:

5 Likes

Added my optional Node-Red flow to grab the entity aliases using Node-Red as a bonus step to the post of the Entity Index script.

Linked (as always) in the first post.

Long Term Aggregated Entitiy History - A script to access historical values (recorder statistics)

Can be used for questions like:

  • How much energy did we export this month?
  • Tell me the min/max temperatures in the garden over the last 30 days.

I noticed that gtp-4o-mini often fails to handle the conversion of time zone.

It likes to provide easier local time strings without timezone to scripts and either forgets about converting the UTC results back to local time (or tells you that there was no matching data in the tool result because it didn’t get the different time zones for input / output).

Giving hints and reminders about that in the tool and in the prompt didn’t help.

gtp-4.1-mini is far better with that.

To keep my Entity History script more usable with gpt-4o-mini, I edited the script to ask for local timezone dates and also to convert returned timestamps to local timezone.

This helps gpt-4o-mini a lot to keep on track.

Here’s the code:

alias: Entity History - Aggregated Statistics
icon: mdi:chart-timeline-variant
description: >
  Search long-term statistics in aggregated form. RAW event data is not
  accessible with this tool. Use it to get min, max, mean over a time range, or
  the amount of change between two timestamps (e.g., for cumulative, monotonic
  increasing sensors like energy). Suitable for numeric entities such as
  temperature, power (W), or energy (kWh).

  Tool usage and parameters:

  Exact entity_id is required. Use "Entity Index" first if needed.

  Pass entity_ids as a comma-separated string ("sensor.temp_1, sensor.temp_2")

  Required:
    - entity_ids: single entity_id name or a comma seperated list of multiple entity_ids
    - start_time: Time string in local time like '2025-07-30 14:00:00'
    - period: Aggregation timespan. One of 5minute, hour, day, week, month, total
    - aggregation_types: Multiple selection possible. Allowed values are change, max, mean, min

  Optional:
    - end_time: time string in local time like '2025-07-30 14:00:00'. if not provided, current time is used.

  Output on success:
    - result:
        <entity_id>:
          - For period in [5minute, hour, day, week, month]: array of window objects:
              { start: 'YYYY-MM-DD HH:MM:SS', end: 'YYYY-MM-DD HH:MM:SS', change?: number, mean?: number, max?: number, min?: number }
          - For period = total: array with exactly one object for the whole requested time range,
            same keys as above.
    - warnings (optional):
        <entity_id>: [list of aggregation types that had no data]

  Output on error:
    - error: message
    - code: error_code
    - missing_entities (optional)

  Notes & Hints:

  - Always provide dates in local time like '2025-07-30 14:00:00'. 
    Returned dates and times will be in the same format and local time.

  - A hint about power and energy sensors:
    - A PowerSensor returns a value for a given moment in Watt. Search min, max, or mean values in a timespan.
    - An EnergySensor provides complete kWh usage since installation of device and is cumulative, monotonic increasing. Search change e.g. per day, or the change from a given time until now.

  Examples:
    - Get min and max of temperature sensors over a timespan of a year, grouped by month for monthly temperature extremes
    - Get change value of a kWh sensor over a month with daily grouping, to get the daily energy usage of this entity
mode: parallel
fields:
  entity_ids:
    name: Entity IDs (comma separated)
    description: >-
      comma-separated string of a single or multiple entity_id names (e.g.
      "sensor.roomname_temperature". MANDATORY!
    required: true
    selector:
      text: {}
  start_time:
    name: Start time
    description: >-
      Set the start date / time of your search. Always provide dates in
      localtime like '2025-07-30 14:00:00'
    required: true
    selector:
      text: {}
  end_time:
    name: End time
    description: >-
      Set the end date / time of your search. If not provided, the current time
      is used. Always provide dates in localtime like '2025-07-30 14:00:00'
    required: false
    selector:
      text: {}
  period:
    name: Period
    description: >-
      Time based grouping / aggregation period. If the search is for a short
      time period which is not divisible by full hours, use 5minute as
      aggregation and calculate the sum using the Calculator tool. Use 'total'
      to get a single, collapsed result over the entire time range.
    required: true
    selector:
      select:
        options:
          - 5minute
          - hour
          - day
          - week
          - month
          - total
  aggregation_types:
    name: Aggregation Types
    description: >-
      Aggregated values that should be returned. Multiple allowed. Use mean as a
      default for getting a simple single value if not requested otherwise. For
      cumulative, monotinic increasing sensors like energy sensors (kWh) use
      "change" to get how much was added to the counter between start and end
      time. Attention: not every entity type has history values for all
      aggregation types (e.g. energy sensors only provide 'change' values).
    required: false
    selector:
      select:
        multiple: true
        options:
          - change
          - max
          - mean
          - min
sequence:
  - action: logbook.log
    data:
      name: "ENTITY HISTORY - AGGREGATED STATISTICS:"
      message: >
        ids={{ entity_ids }}, period={{ period | default('n/a') }},
        aggregation_types={{ aggregation_types | default('n/a') }}, start={{
        start_time | default('n/a') }}, end={{ end_time | default('n/a') }}
      entity_id: "{{ this.entity_id }}"
  - variables:
      _now: "{{ now() }}"
      _start_ts: >-
        {{ as_timestamp(start_time) if start_time is defined and start_time else
        none }}
      _end_ts: >-
        {{ as_timestamp(end_time) if end_time is defined and end_time else
        as_timestamp(_now) }}
      _duration_s: >-
        {{ (_end_ts - _start_ts) if _start_ts is not none and _end_ts is not
        none else none }}
      _start_dt: >-
        {{ as_datetime(start_time) if start_time is defined and start_time else
        none }}
      _end_dt: >-
        {{ as_datetime(end_time) if end_time is defined and end_time else _now
        }}
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ not entity_ids }}"
        sequence:
          - variables:
              result: >
                {{ {'error': 'Missing required parameter: entity_ids', 'code':
                'missing_entity_ids'} }}
          - stop: ""
            response_variable: result
      - conditions:
          - condition: template
            value_template: "{{ _start_dt is none }}"
        sequence:
          - variables:
              result: >
                {{ {'error': 'Missing or invalid start_time (use local time like
                YYYY-MM-DD HH:MM:SS)', 'code': 'invalid_start_time'} }}
          - stop: ""
            response_variable: result
      - conditions:
          - condition: template
            value_template: >-
              {{ period not in ['5minute','hour','day','week','month','total']
              }}
        sequence:
          - variables:
              result: >
                {{ {'error': 'Invalid period. Allowed: 5minute, hour, day, week,
                month, total', 'code': 'invalid_period'} }}
          - stop: ""
            response_variable: result
      - conditions:
          - condition: template
            value_template: |
              {% set lst =
                   (aggregation_types is string)
                   and (aggregation_types | string | lower | regex_findall('change|min|max|mean'))
                   or
                   (aggregation_types if aggregation_types is sequence else []) %}
              {{ (lst | list | length) == 0 }}
        sequence:
          - variables:
              result: >
                {{ {'error': 'Missing or invalid aggregation_types. Allowed:
                change, max, mean, min', 'code': 'invalid_aggregation_types'} }}
          - stop: ""
            response_variable: result
  - variables:
      stat_ids: |
        {% set raw = (entity_ids | string) %} {% if raw.startswith('[') %}
          {% set arr = raw | from_json %}
        {% else %}
          {% set arr = raw | regex_findall('([a-zA-Z0-9_]+\\.[a-zA-Z0-9_]+)') %}
        {% endif %} {% set ns = namespace(out=[]) %} {% for e in arr %}
          {% set eid = (e | string | lower | trim) %}
          {% if eid %}
            {% set ns.out = ns.out + [eid] %}
          {% endif %}
        {% endfor %} {{ ns.out }}
      typelist: |
        {% if aggregation_types is not defined or aggregation_types is none %}
          {{ none }}
        {% elif aggregation_types is sequence %}
          {% set ns = namespace(valid=[]) %}
          {% for t in aggregation_types %}
            {% set tt = (t | string | lower | trim) %}
            {% if tt in ['change','min','max','mean'] %}
              {% set ns.valid = ns.valid + [tt] %}
            {% endif %}
          {% endfor %}
          {{ ns.valid }}
        {% else %}
          {{ (aggregation_types | string | lower | regex_findall('change|min|max|mean')) }}
        {% endif %}
  - variables:
      missing_entities: >
        {% set all_ids = states | map(attribute='entity_id') | list %} {% set ns
        = namespace(missing=[]) %} {% for eid in stat_ids %}
          {% if eid not in all_ids %}
            {% set ns.missing = ns.missing + [eid] %}
          {% endif %}
        {% endfor %} {{ ns.missing }}
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ (missing_entities | length) > 0 }}"
        sequence:
          - variables:
              result: |
                {{ {
                  'error': 'Some provided entity_ids do not exist.',
                  'code': 'entity_not_found',
                  'missing_entities': missing_entities
                } }}
          - stop: ""
            response_variable: result
  - variables:
      _effective_period: |
        {% if period == 'total' %}
          {% if _duration_s is not none and _duration_s <= 3*24*3600 %}
            5minute
          {% else %}
            hour
          {% endif %}
        {% else %}
          {{ period }}
        {% endif %}
      _start_local_str: >-
        {{ as_timestamp(_start_dt) | timestamp_custom('%Y-%m-%d %H:%M:%S', true)
        }}
      _end_local_str: >-
        {{ as_timestamp(_end_dt) | timestamp_custom('%Y-%m-%d %H:%M:%S', true)
        }}
  - action: logbook.log
    data:
      name: "ENTITY HISTORY - DEBUG:"
      message: >
        effective_period={{ _effective_period }}, duration_s={{ _duration_s |
        default('n/a') }}
      entity_id: "{{ this.entity_id }}"
  - variables:
      call_data: >
        {% set d = {'statistic_ids': stat_ids} %} {% set d = d |
        combine({'start_time': _start_dt}) %} {% set d = d |
        combine({'end_time': _end_dt}) %} {% set d = d | combine({'period':
        _effective_period}) %} {% if typelist is not none and typelist|length >
        0 %}
          {% set d = d | combine({'types': typelist}) %}
        {% endif %} {{ d }}
  - action: recorder.get_statistics
    response_variable: stats_raw
    data: "{{ call_data }}"
  - variables:
      no_data: |
        {% if stats_raw is not defined or stats_raw == none %}
          true
        {% elif stats_raw | length == 0 %}
          true
        {% else %}
          {% set ns = namespace(total=0) %}
          {% for _k, v in stats_raw.items() %}
            {% set ns.total = ns.total + (v | length) %}
          {% endfor %}
          {{ ns.total == 0 }}
        {% endif %}
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ no_data }}"
        sequence:
          - variables:
              result: |
                {{ {
                  'error': "No history values found. Either the entity_ids have no long-term statistics enabled, or there are no datapoints in the requested timespan.",
                  'code': 'no_history_data'
                } }}
          - stop: ""
            response_variable: result
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ period != 'total' }}"
        sequence:
          - variables:
              result: >
                {%- set ns = namespace(out={}) -%} {%- for entity_id, entries in
                stats_raw.statistics.items() -%}
                  {%- set e = namespace(items=[]) -%}
                  {%- for it in entries -%}
                    {%- set start_local = (it['start'] | as_datetime | as_local).strftime('%Y-%m-%d %H:%M:%S') -%}
                    {%- set end_local   = (((it['end'] | as_datetime) - timedelta(seconds=1)) | as_local).strftime('%Y-%m-%d %H:%M:%S') -%}
                    {%- set obj = (it | tojson | from_json) | combine({'start': start_local, 'end': end_local}) -%}
                    {%- set e.items = e.items + [obj] -%}
                  {%- endfor -%}
                  {%- set ns.out = ns.out | combine({ entity_id: e.items }) -%}
                {%- endfor -%} {{ {'result': ns.out} }}
          - stop: ""
            response_variable: result
  - variables:
      total_result: >
        {%- set out = namespace(map={}, warns={}) -%} {%- for entity_id, entries
        in stats_raw.statistics.items() -%}
          {%- set acc = namespace(
                have_change=false, change_sum=0,
                have_min=false, min_val=none,
                have_max=false, max_val=none,
                have_mean=false, mean_sum=0, mean_count=0
              ) -%}
          {%- for it in entries -%}
            {%- if 'change' in it and it['change'] is not none -%}
              {%- set acc.have_change = true -%}
              {%- set acc.change_sum = acc.change_sum + (it['change'] | float(0)) -%}
            {%- endif -%}
            {%- if 'min' in it and it['min'] is not none -%}
              {%- set acc.have_min = true -%}
              {%- set acc.min_val = ( [acc.min_val, it['min']]|reject('equalto', none)|min ) if acc.min_val is not none else (it['min'] | float(0)) -%}
            {%- endif -%}
            {%- if 'max' in it and it['max'] is not none -%}
              {%- set acc.have_max = true -%}
              {%- set acc.max_val = ( [acc.max_val, it['max']]|reject('equalto', none)|max ) if acc.max_val is not none else (it['max'] | float(0)) -%}
            {%- endif -%}
            {%- if 'mean' in it and it['mean'] is not none -%}
              {%- set acc.have_mean = true -%}
              {%- set acc.mean_sum = acc.mean_sum + (it['mean'] | float(0)) -%}
              {%- set acc.mean_count = acc.mean_count + 1 -%}
            {%- endif -%}
          {%- endfor -%}
          {%- set item = {'start': _start_local_str, 'end': _end_local_str} -%}
          {%- if typelist is none or 'change' in typelist -%}
            {%- if acc.have_change -%}
              {%- set item = item | combine({'change': acc.change_sum}) -%}
            {%- else -%}
              {%- set out.warns = out.warns | combine({ entity_id: (out.warns.get(entity_id, []) + ['change']) }) -%}
            {%- endif -%}
          {%- endif -%}
          {%- if typelist is none or 'min' in typelist -%}
            {%- if acc.have_min -%}
              {%- set item = item | combine({'min': acc.min_val}) -%}
            {%- else -%}
              {%- set out.warns = out.warns | combine({ entity_id: (out.warns.get(entity_id, []) + ['min']) }) -%}
            {%- endif -%}
          {%- endif -%}
          {%- if typelist is none or 'max' in typelist -%}
            {%- if acc.have_max -%}
              {%- set item = item | combine({'max': acc.max_val}) -%}
            {%- else -%}
              {%- set out.warns = out.warns | combine({ entity_id: (out.warns.get(entity_id, []) + ['max']) }) -%}
            {%- endif -%}
          {%- endif -%}
          {%- if typelist is none or 'mean' in typelist -%}
            {%- if acc.have_mean and acc.mean_count > 0 -%}
              {%- set item = item | combine({'mean': (acc.mean_sum / acc.mean_count)}) -%}
            {%- else -%}
              {%- set out.warns = out.warns | combine({ entity_id: (out.warns.get(entity_id, []) + ['mean']) }) -%}
            {%- endif -%}
          {%- endif -%}
          {%- set out.map = out.map | combine({ entity_id: [ item ] }) -%}
        {%- endfor -%} {%- set payload = {'result': out.map} -%} {%- if
        out.warns | length > 0 -%}
          {%- set payload = payload | combine({'warnings': out.warns}) -%}
        {%- endif -%} {{ payload }}
  - action: logbook.log
    data:
      name: "ENTITY HISTORY - TOTAL:"
      message: >
        effective_period={{ _effective_period }}, entities={{
        (stats_raw.statistics.keys() | list | length) }}, warnings={{
        total_result.warnings | default({}) | tojson }}
      entity_id: "{{ this.entity_id }}"
  - stop: ""
    response_variable: total_result

I also added this to my LLMs prompt in addition to exposing this script:

You can also check historical values of an entity using the tool “Entity History”.
But you REALLY need the entity_id of the device to get the historical values.
If you want to check the max temperature from yesterday, outside in the “driveway”, first search for the TemperatureSensor in this area using the “Entity Index” tool.

Hints:

  • Energy (kWh) sensors are cumulative, monotonic increasing counter.
    NEVER reply with their current state when asked about the energy used/produced in a specific timespan.
    Use the “Entitiy History” tool with the aggregation type “change” to get the deviation between start-time and end-time
2 Likes

History is really one of this things that makes Assist so much more useful compared to dumb Alexa. :relieved:

I don’t think I would have been faster looking this up manually on my phone or laptop. :slight_smile:

2 Likes

Agreed. History + an index makes an LLM absolutely shine. (no people we are NOT overhyping that no index = dum bot. It’s right there with give context.)

Im putting the polish on files. Er… Zen storage. So take what you have and add the ability to have structured somewhat freeform indexed storage… :sunglasses:

What till you see what THAT does… (hint - Store templates for that stuff, instructions for mini - programs (supplemental prompts) then becomes the precursor of dynamic prompt generation…

1 Like

I knew that I could never replace Alexa in the living room without REALLY good working music control.

It’s like you know my family, and replacing Alexa is my goal.

Music control is its main function of our Alexa devices. My experience with LLMs is limited to the web interface of Chat GPT. For the kind of thing you’re doing, what resources would you recommend to get a start on my learning journey?

Also, your approach conceivably be done without a GPU, or on a lower powered one (for example, if I repurposed my old gaming laptop with a mobile 1050ti)? My home assistant is currently running in a VM on a TrueNAS machine with an AMD 8600G. (I included the 8600g in my build specifically for the NPUs in prep for voice control. Later I learned that there’s no easy way to access them through TrueNAS or Home Assistant. Woe is me.)

Thank you for sharing all of this, any any guidance is greatly appreciated.

That’s the advantage of using an internet-based LLM. There is no significant load locally.

This comes at the cost of losing complete local data control (at least, data that goes through the OpenAI API is not used for training purposes, unlike their end customer product ChatGPT), as well as usage costs in tokens.
But even yesterday, when I tested the Entity History tool intensively with the gpt-4.1-mini model (4x more expensive than gpt-4o-mini), I only had usage costs of about $1 for more than 300 requests.
And the history tool consumes a lot more tokens than a simple request due to the sometimes long data sets it transfers between multiple tool calls.

I just looked at the OpenAI dashboard and without intensive testing of new scripts, I typically spend about $0.02 to $0.10 per day with gpt-4o-mini.
As long as you don’t do high-frequency automated evaluations with the models in addition to voice control (for example, constant image evaluations of video surveillance or similar), it’s hardly worth investing in your own hardware for AI at these prices.

To get started, I would simply create an OpenAI account and add an assistant using the HA OpenAI integration.
Play around with it a little, check out the tips about prompts here and in similar threads to get a feeling for what context is important, so that the LLM better understands what you want. :wink:

The next step would be to add a few simple tools (maybe don’t start with music right away, which requires some configuration due to the interaction between Music Assistant, the services set up there, and the players in HA) and test them.

One more tip about music: Band name recognition doesn’t seem to be any worse than what Alexa delivers. But if something is recognized incorrectly, the music service’s search function will often fix it if you use a streaming service like Spotify in Music Assistant. There are also AIs behind the scenes that try to find the desired hits, even if a user makes a typo. Otherwise, Assist also supports spelling, which helps in the really tough cases. :wink:

If you run into problems, simply ask here or if it doesn’t fit the topic, create a new one and ping me per DM.

I suppose it makes sense to just learn how to use the model before I add the extra learning burden of making it run locally. I can always tackle that in the future. $0.10 a day is definitely cheaper than even just the power running a local LLM at idle. That’s a pretty reasonable cost for the potential benefit (at least until they decide it’s time to make a profit, then who knows). Since I’ve already been giving my soul to Amazon for nearly a decade it’s not such an unreasonable compromise to switch and hand that data to someone else. At least Open AI doesn’t have the proven history of misuse and won’t announce ads at me.

2 Likes

I Have been following this thread, GREAT WORK!. I recently tried the entity history solution you had except that I am using a local llama llm, and I didn’t get really a clean response, here is what i got:

when was the first time in the past 3 months the office speakers turned on?

It looks like I couldn't get the entity history for you. Let me try again. To answer your question, I will call the Entity History tool to find when the office speakers turned on in the last 3 months. Please bear with me for a moment. {"entity_history": {"results": [{"timestamp": "2025-06-12T00:00:00", "previous_value": "off", "new_value": "on"}], "meta": {"count": 1, "total_matches": 1}}} The office speakers turned on for the first time on June 12th, 2025.

Ok, so far I didn’t get what the “state” value type in recorder.get_statistics is good for. As it only accesses the statistics aggregations and not the short-term history which keeps all values for a few days.
And I thought long-term statistics is only saved for numeric values like energy, temperature, luminance, …

But now you’re here, posting this result. :stuck_out_tongue:

So I’m rather confused, if:

  • you added something to your recorder config to get this working
  • If I’m doing something wrong as I don’t see any binary_sensors or other classes with “on”, “off” in the statistics in my system
  • If your LLM is making up fake data, even that it looks a lot like a response that HA might give
1 Like