2023.4: Custom template macros, and many more new entity dialogs!

All the money we pay them for their time and labor! How dare they be behind!! Heads need to roll!

Because it takes you 4 hours to do what should be a 15 minute job at most.

That’s only the case if there are database optimisations included in the update. And even then you can restore a full backup - that includes the database.


Quite. Off with their heads!

Not when you run the DB on a separate machine, for example MariaDB on a Synology NAS :slight_smile: And even if you have a properly correlated backup of the database done on the NAS, you risk some loss of the most recent data. No perfect solution exists, I’m afraid.

Yes and no (cliché :slight_smile: ). From one perspective it makes a lot of sense to have as stable an environment as possible, but on the other hand, let’s take a look at all the breaking changes that are implemented with every release. Can you imagine catching up on all of these after, let’s say, one year? I think that until the internal architecture of the core and all major add-ons is stable and no breaking changes are made there, there is no place for LTS yet. Just looking at this forum you can probably find hundreds of posts from people who lived happily with a somewhat stale system for several months, then decided to upgrade to the most up-to-date version (usually due to the introduction of desired functionality), and then ran into the serious problem of their system crashing. Smaller but frequent changes make the whole process a lot easier.


Or as us kiwis say “Yeah … Nah”


…and us Aussies too. Let’s be honest, you guys probably copied us. :stuck_out_tongue_winking_eye:


Like Pavlova, Phar Lap, and all the other bollocks you stole. (You can keep Russell Crowe though). And can you identify any academic studies of “Yeah Nah” at Aussie Universities? Probably not, given the inherent contradiction in “Australian seat of higher learning”

:wink:

@mirekmal, that’s why I use the MariaDB add-on but solely for the HA history.

I use a strict naming convention for my entity IDs, but because integrations like ZHA generate their own based on some bizarre internal scheme, adding new Zigbee devices becomes a complex process: once the new device is added, I then need to go in and manually rename all attributes to conform to this convention. I then need to use a MySQL client to rename the state record entity IDs to conform to this convention and reconnect the history. A bit of a pain.

This was the case at least until 2023.4, as the HA developers have abandoned all standard conventions for using an RDBMS. Previously the schema sensibly reflected standard field naming and typing conventions. Now:

  • Of the 27 fields in states, 17 are always NULL. If you no longer use a field in a record, then drop it from the schema and update the access logic accordingly.
  • Here are the last three records (non-null fields only):
MariaDB [homeassistant]> select state_id, state, old_state_id, attributes_id, last_updated_ts, hex(context_id_bin), metadata_id from states order by state_id desc limit 3;
+----------+-------+--------------+---------------+-------------------+----------------------------------+-------------+
| state_id | state | old_state_id | attributes_id | last_updated_ts   | hex(context_id_bin)              | metadata_id |
+----------+-------+--------------+---------------+-------------------+----------------------------------+-------------+
| 22103002 | 251   |     22102948 |       1692411 |  1682606106.03683 | 0187C323A5B4C638D373ED28E0038EE0 |          50 |
| 22103001 | 241   |     22102947 |       1692335 | 1682606104.809538 | 0187C323A0E9085035666CCDA7DF403A |          31 |
| 22103000 | on    |     22102998 |       1994667 | 1682606084.416874 | 0187C323514080109935301BEBAA55D7 |         164 |
+----------+-------+--------------+---------------+-------------------+----------------------------------+-------------+
3 rows in set (0.004 sec)
  • As you can see, all non-numeric/link info is encapsulated in a packed tinyblob. In practice this makes the DB unusable outside of the HA app, and quite honestly makes using MariaDB pointless.
  • The statistics, statistics_runs and statistics_short_term tables have been similarly mangled.
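That said, the epoch-float timestamps are at least still readable outside HA. A quick sketch, using the last_updated_ts value of state_id 22103002 from the query output above:

```python
# Convert a states.last_updated_ts epoch float (value copied from the
# query output above) back into a human-readable UTC datetime.
from datetime import datetime, timezone

ts = 1682606106.03683  # last_updated_ts of state_id 22103002
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
```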

@tom_l, I clearly seem to have pissed you off somehow, as this is getting a touch ad hominem. If this is the case, then I apologise, as this wasn’t my intention.

I welcome the goal of being able to recover a bricked HA system as a “15 minute” job, but I also suggest that this is far from the current reality. I would suggest that many if not most HA end users run an RPi-class SBC sitting in an equipment rack or on a shelf next to their router, most likely headless, without a monitor or keyboard attached or any sort of KVM. Few will be familiar with using the Linux bash command line on a fully featured OS such as Debian, let alone on a minimal read-only Buildroot one. If the system boots but the supervisor doesn’t start up, then you also need to be familiar with journalctl and the Docker CLI to try to diagnose why and what is failing.

As I said, I am one of the users who has sysadmin, docker, mysql, … experience, so yes, I did “waste time” trying to diagnose and recover so that I could log an evidence-based ticket, and yes I gave up.

So now I have a dead HA instance which is not easily recoverable, with no HA CLI let alone HA GUI available, and with a full backup buried somewhere on one of the partitions that hasn’t yet been synced to my Google cloud storage, but which I need to extract before I can do a fresh reinstall and recovery. I now know to look in the /supervisor/backup folder on the hassos-data partition, copy the latest full backup tarball off onto my dev machine, reimage the device, do a full install, and upload this backup as per the documentation: RESTORING A BACKUP ON A NEW INSTALL. This all takes a couple of hours, even if I drop any attempts to work out why the upgrade failed or try other recovery paths. Or I switch to using a Proxmox VM.
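For anyone else in the same boat, the “find the newest backup tarball” step can at least be scripted. A minimal sketch — the /supervisor/backup folder is where I found mine on the hassos-data partition, but where you mount that partition is up to you:

```python
# Hypothetical sketch: locate the most recent full backup tarball in the
# supervisor backup folder. The mount point in the usage example below
# is an assumption; adjust to wherever you mounted hassos-data.
from pathlib import Path
from typing import Optional

def newest_backup(backup_dir: str) -> Optional[Path]:
    """Return the most recently modified *.tar backup, or None if none exist."""
    tars = sorted(Path(backup_dir).glob("*.tar"),
                  key=lambda p: p.stat().st_mtime, reverse=True)
    return tars[0] if tars else None

# e.g. newest_backup("/mnt/data/supervisor/backup")
```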


potential bug: SmartMeter EDL21 Positive active energy total no longer supported

I did the upgrade to 2023.4.6. It took a couple of hours to upgrade a 2.6 GB database with a 15-day recorder retention. All good, just one issue, which is already open and will be fixed soon:

Btw, all iOS issues with the frontend seem fixed as well. Keep up the good work and ignore some of the non-constructive comments, which seem to reflect personal frustrations and missed expectations more than anything else.


Did you post an issue on GitHub?

This is really helpful thanks.

I’m still on 2023.3 here. How do you perform operations on the states table now? I used to rename entity_ids too when switching out devices etc., and manually manage some state data.


I’ll post back if I figure out a workaround, but when is another question.
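In the meantime, my working guess (unverified) is that entity IDs moved out of states into a separate states_meta table referenced by states.metadata_id — that column shows up in the query output earlier in the thread — so a rename would become a single UPDATE on the metadata row. A sketch against an in-memory SQLite stand-in, with the table layout heavily simplified (this is NOT the full recorder schema):

```python
# Guess at the schema-41 layout (simplified): entity_id lives in states_meta,
# and states rows only carry a metadata_id foreign key.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real recorder DB
conn.executescript("""
CREATE TABLE states_meta (metadata_id INTEGER PRIMARY KEY, entity_id TEXT);
CREATE TABLE states (state_id INTEGER PRIMARY KEY, state TEXT, metadata_id INTEGER);
INSERT INTO states_meta VALUES (50, 'sensor.old_name');
INSERT INTO states VALUES (22103002, '251', 50);
""")

# If the layout really is like this, a rename touches one metadata row
# instead of every historical state row:
conn.execute("UPDATE states_meta SET entity_id = ? WHERE entity_id = ?",
             ("sensor.new_name", "sensor.old_name"))

print(conn.execute("""SELECT m.entity_id, s.state
                      FROM states s JOIN states_meta m USING (metadata_id)
                   """).fetchone())  # → ('sensor.new_name', '251')
```

Back up the database before trying anything like this on the real thing.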


I finally pulled the trigger and updated, lost most of my history anyway :slight_smile:

Probably not much point in me keeping 60 days of history with monthly hass updates :wink:

Someone posted here a link to a thread with a collection of custom templates made by Petro.
I am looking for a macro to convert a number of seconds to “HH:MM:SS” format.
Surely it is not a super difficult task to program it; I just wanted to use a ready-made macro which was already tested ))). And maybe I will find this macro in that thread…
Can someone share this link?
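For comparison while looking for the link, the underlying arithmetic is just two divmods — here in Python, not the Jinja macro itself:

```python
def seconds_to_hms(total_seconds: float) -> str:
    """Format a number of seconds as HH:MM:SS (hours are not wrapped at 24)."""
    hours, rem = divmod(int(total_seconds), 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(seconds_to_hms(3725))  # → 01:02:05
```

In an HA template I believe the one-liner `{{ 3725 | timestamp_custom('%H:%M:%S', false) }}` does the same for durations under 24 hours, but the tested macro from that thread is still the safer option.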

In HACS too :slight_smile:

Great thanks, Nick!

Hmm, cannot find my macro there… Will keep looking.

OMG, turned out to be super simple:

[image]

Thread here too: Easy Time Macros for Templates!


Late to the party, but I upgraded to 2023.4.6 a few hours ago. Since then, the system load has been going crazy:

1 = last backup, normal
2 = normal operation with 2023.3.6
3 = the update itself to 2023.4.6
4 = after the update, what makes me feel really uncomfortable meanwhile

Database stuff should be finished already; at least the schema update (35 to 41) was done according to the HA log, and a few indices have been dropped. Database is < 3 GB, system is a Pi 4.

No idea what is stressing the system that heavily. It has meanwhile become close to unusable! Any ideas???


(edit: seems to have magically solved itself after a few more hours; interestingly, it started to come down once my notebook disconnected from HA… the 6 o’clock peak is from a scheduled recorder repack, after that back to normal as of now…)