2024.9: Sections go BIG

No; didn’t find it

The only reason I can think of that it would not be there is if you do not have any grid or solar sensors.

HA needs to be able to calculate the whole house load before it can derive the untracked consumption from that.
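
Roughly, untracked = grid import + solar production - grid export - (sum of everything tracked individually). HA computes this internally, but if you wanted the number as its own entity, a template sensor sketch could look like this (all entity names here are invented placeholders, so substitute your own):

template:
  - sensor:
      - name: "Untracked consumption"
        unit_of_measurement: "W"
        state: >-
          {{ states('sensor.grid_import_power') | float(0)
             + states('sensor.solar_production_power') | float(0)
             - states('sensor.grid_export_power') | float(0)
             - states('sensor.tracked_devices_power') | float(0) }}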

Do you have grid sources, or some other power source?

Otherwise it’s maybe just a cache clearing issue. There’s no setting to enable or disable this.

Hi @NathanCu, thanks for your answer, but I’m not sure how to do that.

I had this automation that was working fine with the normal intents:

alias: Alarm
description: This automation activates the phone alarm in 'time_amount' minutes.
trigger:
  - platform: conversation
    command:
      - Set the alarm in {time_amount} minutes
condition: []
action:
  - data:
      message: command_activity
      data:
        intent_action: android.intent.action.SET_ALARM
        intent_extras: >-
          {% set alarm_time = now() +
          timedelta(minutes=trigger.slots.time_amount|int) %} {{
          'android.intent.extra.alarm.HOUR:' ~ alarm_time.hour|string ~
          ',android.intent.extra.alarm.MINUTES:' ~ alarm_time.minute|string ~
          ',android.intent.extra.alarm.SKIP_UI:true' }}
    action: notify.mobile_app_xxx
  - set_conversation_response: The alarm will ring in {{trigger.slots.time_amount}} minutes.
mode: single

So I tried this in my configuration file:

intent_script:
  PhoneAlarm:
    description: Set the alarm on the phone (in minutes)
    speech:
      text: Alarm set in {{ trigger.slots.time_amount }} minutes
    action:
      action: notify.mobile_app_xxx
      data:
        message: command_activity
        data:
          intent_action: android.intent.action.SET_ALARM
          intent_extras: >-
            {% set alarm_time = now() + timedelta(minutes=trigger.slots.time_amount|int) %} {{
            'android.intent.extra.alarm.HOUR:' ~ alarm_time.hour|string ~
            ',android.intent.extra.alarm.MINUTES:' ~ alarm_time.minute|string ~
            ',android.intent.extra.alarm.SKIP_UI:true' }}

Following the documentation from: Intent Script - Home Assistant
But I can’t see how the LLM is supposed to pass the variable ‘time_amount’ to the intent_script.
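
From the docs it looks like slot values might be passed directly as template variables inside intent_script (with no trigger object at all), so maybe plain time_amount would work instead of trigger.slots.time_amount? A sketch of what I mean, untested:

intent_script:
  PhoneAlarm:
    description: Set the alarm on the phone (in minutes)
    speech:
      text: Alarm set in {{ time_amount }} minutes
    action:
      action: notify.mobile_app_xxx
      data:
        message: command_activity
        data:
          intent_action: android.intent.action.SET_ALARM
          intent_extras: >-
            {% set alarm_time = now() + timedelta(minutes=time_amount|int) %} {{
            'android.intent.extra.alarm.HOUR:' ~ alarm_time.hour|string ~
            ',android.intent.extra.alarm.MINUTES:' ~ alarm_time.minute|string ~
            ',android.intent.extra.alarm.SKIP_UI:true' }}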

It seems that Yale Access Bluetooth can’t fetch offline keys from the new Yale (Home) integration, as it used to with the August integration. Any suggestions?

1 Like

I’ve been trying since last night, after switching from August to Yale, and the Bluetooth connection is broken! I even created a new account, did a factory reset on my Yale Linus, and re-integrated everything plus the Connect Wi-Fi bridge. Nothing has helped. I restarted Home Assistant, deleted the Yale Access Bluetooth instance and the Yale integration, and re-added Yale successfully. Now Yale Access Bluetooth discovers the lock and asks me for the offline key and slot. What else is there to try? I checked all the logs, tried the old key and slot, and searched everywhere, but found nothing for the key and slot. Does anyone have a tip?
:frowning:

[screenshot]

Integration with Yale Access Bluetooth

  • The Yale integration must support the lock. - Yes, Linus Lock V1
  • The Yale Access Bluetooth integration must support the lock. - Yes, Linus Lock V1
  • The Bluetooth integration must be active and functional. - Yes, everything worked perfectly under August.
  • The lock must be discoverable by the Yale Access Bluetooth integration. - Yes, see the screenshot
  • The account logged in with the Yale integration must have the offline keys. - Yes, only one account, all new


This clearly states that the offline key should be fetched. All of these points are covered, plus the Yale lock was reset, everything was deleted from the account, the account itself was deleted, and then everything was re-created and re-linked.

Nice update to the sections!
It needs one more thing and then I can convert all my dashboards :slight_smile:

Is changing the section layout’s grid size on the roadmap? Right now the columns are all fixed, so making 3 equal icons on a row still needs an extra hstack card.


In my experience, the DB migration requires at least as much free disk space as the size of your current DB. After the migration, the DB is about 25% bigger.
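
(A quick worked example, assuming a 4 GB database: keep at least another ~4 GB free during the migration, and expect the result to end up around 5 GB.)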

1 Like

A similar estimate seems to apply to the “repack of a native SQLite database” case.

1 Like

Worth mentioning that if that’s true, it’s not a DB limitation but an HA architecture one.
I understand a 10-day limit for SQLite running on Raspberry-Pi-class hardware. But with reasonable resources, the aforementioned Postgres can maintain and serve data in real time out of a practically unlimited stored size, if the DB and the app are properly designed.

On the other side of the coin, time-series databases are indeed even more efficient for such a job. I recently installed Postgres with TimescaleDB, LTSS and Grafana, all running on an RPi4 together with HA, and indeed the graphs provided by Grafana render way faster than ApexCharts.
Another benefit is that such a Postgres-based DB cannot break (the benefit of a fully transactional DB). In the worst case some records might be lost, but anomalies in the stored data will definitely not affect HA (which cannot be said about SQLite, which HA sometimes messes up).
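
For anyone wanting to try the same, the LTSS side is only a few lines of configuration. A minimal sketch, assuming a local Postgres/TimescaleDB instance (the db_url credentials and the include filter are placeholders, not my actual setup):

ltss:
  db_url: postgresql://ltss_user:secret@localhost/homeassistant
  include:
    domains:
      - sensor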

2 Likes

BTW, the Energy dashboard: I can see some love has been put into it recently.
What hit me not long ago is that the dashboard is tightly coupled to the sensors assigned to it (by their entity names).

If you collect energy data for years and then install a photovoltaic (PV) system, you will likely have to change the dashboard settings, replacing the energy sensors with others, most likely provided by the solar inverter (consumption and production).
In that case the house consumption for past days/years is gone from the dashboard.

The same will happen if you swap the energy sensor for another one, unless you keep the same entity name.

Do you know of any upcoming improvements in this area?

I don’t, but there’s this to help, even though it’s quite manual: HA Energy Dashboard FAQ · GitHub. See #1.

That actually is a big issue now that HA assumes what those values mean.

Let’s take my installation. I have:

  • Smart meter reporting as one sum over all phases
  • A sensor for each of the 3 phases for the whole house (reporting 2 values, inbound and outbound)
  • A sensor for each of the 3 phases per subpanel
  • Multiple sensors for individual circuits in the subpanels
  • Individual devices reporting their own consumption

Assuming the latter four can simply be summed up and compared to the first one is nonsense.

(It’s already annoying that the smart meter sums don’t match any combination of the per-phase measurements of my own sensors.)

Thank you for pointing that out to me. I didn’t know about it at the time I changed the dashboard. And that’s the worst part: most of us probably only notice days after changing the dashboard.

From the linked article, it seems possible with a little bit of trickery. But if the “old” sensor has to stay in the system (my case), it’s a rather awkward workaround.

BTW I’m not sure it’s even possible to rename sensors provided by an inverter integration.

??? Really, why? I’m the admin of multiple Postgres databases; size really is not an issue. And if it (possibly) becomes one, table partitioning is our friend. For example, OpenStreetMap runs on Postgres, and the whole planet is really big. Compared to OSM, 900 days of history is probably a very small DB.
I don’t understand the strange approach to updating schemas. Converting an integer column to bigint is really simple, for example alter table ha.states alter column state_id type bigint and we’re done.
I remember one HA DB migration in the past. The whole schema was about 700 MB in size. The migration took hours on an SSD and generated 35 GB (GIGABYTES!) of transaction logs for migrating a 700 MB database. That’s about 50 times more logs than the size of the whole database. I saw queries like “update … set old_state = state limit 999”, “delete … limit 999”, etc.
How can “Not all migrations were successful despite being marked as successful.” even happen? Doing the migration in a transaction can prevent this (all or nothing).
If I understood recorder/migration.py correctly, the upgrade to 2024.9 will migrate the DB schema to version 46 (it will drop foreign-key constraints and change bigint to bigint :slight_smile:), and schema version 47 will re-create the constraints dropped in version 46, right? In that case we should insert rows into the table ha.schema_changes to avoid this unnecessary migration.
I love HA, but the DB migration really should be improved a lot. Why this strange approach? Is it some SQLAlchemy limitation?

2 Likes

There are very clever people working on HA. However, the database seems to me to have always been a weak point. Perhaps they need more database people. Yes, that is a hint!

6 Likes

This should probably come in the future.

First, this graph should also have the ability to hide specific devices (it would be perfect if it just inherited the settings from the ‘track single devices’ graph above).

As long as an entity has a unique ID, it can be renamed. It’s not an issue in most cases.

Read carefully. It doesn’t stay. It’s just that the order is very important: if you delete the old one you’ll delete its history too, so you need to be sure there’s overlap. Or do you mean the old entity might have a name that’s tied in a very descriptive way to the old integration or setup? Then yes, perhaps that’s not ideal, but personally I rename my entities to keep them functional when I first use them. For example, I won’t have an entity called sensor.shelly_pm_energy_1. That might be the name when it’s first created, but already then I’ll rename it to sensor.outbuilding_energy, which is descriptive of its function, and then it doesn’t matter if it comes from an old or new integration.

1 Like

That is a custom component.
You should report issues in the repository of that integration :slight_smile:

If you’re just looking for an argument, perhaps take this somewhere else, or get your hands dirty and contribute to the project. Either way, you’re trivialising this. Simple, fast, risky and efficient aren’t all the same thing.

In really big databases that must be online 24/7, it is not that simple. Actions like these will lock up a system with table/page/row locks (for one). If downtime cannot be scheduled, you’d use tricks such as operating on a copy of a table (underneath a view), or dropping your indexes to make the ALTER faster before re-adding them (and/or the FKs, like they’ve done here). Even for an offline migration that can be too much. In the case of HA, people don’t run their systems on commercial-grade hardware, even though a 10-100 GB DB doesn’t seem that big.

Partitioning would’ve made no difference: When doing an alter, your DB needs to be fully consistent before you resume operation. It’s not like you can change one partition by itself. You can fake things under a view, but that adds a lot of complexity.

I think part of the challenge is that ideally you’d want to redesign the whole DB from the ground up, but that will make migrations very hard (changing a flying plane and all that). HA also supports many engines, which means you need to account for the quirks of each, and that you cannot fully utilise the specific strengths of certain engines.

7 Likes

Hm, this is already possible.
You can add your individual water meters to the water usage dashboard, and they will be shown individually.

(Similar to how you can also add your individual devices to the main electrical energy usage graph.)

[screenshot]

This should work just like in this screenshot.