PSA: 2024.7 recorder problems

Reading through this topic as well as the release notes, it’s not terribly clear to me whether the issues have been fully addressed in 2024.7.4. Are fixes for some of the issues being held until the .8 release?

I think no fixes for this issue are in 7.4.
So if 7.3 doesn’t work for you, I think you need to wait for 8.0.
Probably the impact of the two open issues was too high for a minor release. But I assume an HA dev knows the real reason for this.

That’s what I surmised as well. I never tried any of the .7 releases, due to all of the issues being reported and since there wasn’t a ‘killer feature’ I needed. Guess I may just wait until one of the .8 releases.

I was going to wait for .8.x, too, but yesterday turned out rainy and I had nothing better to do. So… update to 2024.7.3. This was before I knew .4 was coming.

Anyway, no problem with the Recorder database update. Admittedly, I work pretty hard to keep the database size manageable, but I figured with my luck I’d be among those impacted by this.

I feel better going to a .3 update than a .0, and I had the time, so it was worth the risk for me. As always, YMMV.

I have the same problem and the database is growing every day. Is there no way to fix it?

You can start here:

I know this, but I like to keep all the data; for the long term I have InfluxDB v2 running. I’ve had the bug since 2024.7.2 or so, and since then the database has been growing every day. So for now I’ll wait for 2024.8 and hope they fix it soon. DB size is now 31 GB.
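
(For anyone wanting to see which entities contribute the most rows while waiting: something like this against a copy of the db should show the top offenders. A sketch only; it assumes schema 43 or later, where entity ids live in states_meta.)

sqlite3 home-assistant_v2.db "SELECT m.entity_id, COUNT(*) AS cnt FROM states s JOIN states_meta m ON s.metadata_id = m.metadata_id GROUP BY m.entity_id ORDER BY cnt DESC LIMIT 10;"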

Recorder still stops after a purge, even on the 2024.8 release. Looks like the problem hasn’t been fixed yet.

My recorder is still not purging on 2024.8.0

That’s disappointing!

If one of the above PRs doesn’t fix your system, it’s a different problem from the PRs / issue reports posted above. In that case, the developer who looks into it will need a copy of an affected database, since it presents a problem that hasn’t been seen yet. If you can share your database, please open a fresh GitHub issue.

Hope this is correct: Recorder still locking up even after update to 2024.8.0 · Issue #123348 · home-assistant/core · GitHub

For everyone who ran out of disk space during the original table rebuild, there is another fix coming.

Hello, I still have an issue with the db too, but I’m unsure if it’s the same problem.
I have deactivated the automatic recorder purge; when I do a manual purge with repack enabled, I end up with a corrupted database and Home Assistant comes back up with a clean db. Using SQLite here, current DB size ~13.5 GB. Before deactivating the automatic purge it was ~6-8 GB.
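
For reference, by “manual purge” I mean calling the recorder.purge service, e.g. via the REST API like this (host, token, and keep_days are placeholders):

curl -X POST \
  -H "Authorization: Bearer YOUR_LONG_LIVED_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"keep_days": 7, "repack": true}' \
  http://homeassistant.local:8123/api/services/recorder/purge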

Is this the issue in this thread, or is the issue here only that the recorder gets stuck?

2024.8.1 will fix the issue where the rebuild wasn’t retried if it could not complete the first time.

Please be sure to check your disk space before updating.
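
A quick way to check, assuming a default install with the db at /config/home-assistant_v2.db (adjust the path to yours). Since the rebuild copies the whole states table, free space on the order of the current db size is a safe margin:

# current database size
du -h /config/home-assistant_v2.db
# free space on the partition holding it
df -h /config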

Thanks, now that I come to think of it, I was indeed confronted with a full disk after upgrading.

Is there a way to kick this rebuild off manually in its current state to test the procedure?

Here are the SQL statements for an OFFLINE migration. Be sure to make a backup first. This code is only safe to run with schema 43 or 44.

- Do not run it on older databases (from before May 2024).
- Do not run it on a live system; it may destroy your data.
- Do not run this on anything other than SQLite.

BEGIN;
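-- turn off foreign key enforcement while the states table is swapped out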
PRAGMA foreign_keys=OFF;
COMMIT;
BEGIN;
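-- create a replacement states table with the expected schema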
CREATE TABLE states_temp_1720542205 (
	state_id INTEGER NOT NULL, 
	entity_id CHAR(0), 
	state VARCHAR(255), 
	attributes CHAR(0), 
	event_id SMALLINT, 
	last_changed CHAR(0), 
	last_changed_ts FLOAT, 
	last_reported_ts FLOAT, 
	last_updated CHAR(0), 
	last_updated_ts FLOAT, 
	old_state_id INTEGER, 
	attributes_id INTEGER, 
	context_id CHAR(0), 
	context_user_id CHAR(0), 
	context_parent_id CHAR(0), 
	origin_idx SMALLINT, 
	context_id_bin BLOB, 
	context_user_id_bin BLOB, 
	context_parent_id_bin BLOB, 
	metadata_id INTEGER, 
	PRIMARY KEY (state_id), 
	FOREIGN KEY(old_state_id) REFERENCES states (state_id), 
	FOREIGN KEY(attributes_id) REFERENCES state_attributes (attributes_id), 
	FOREIGN KEY(metadata_id) REFERENCES states_meta (metadata_id)
);
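-- copy every row from the old states table into the replacement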
INSERT INTO states_temp_1720542205 SELECT state_id,entity_id,state,attributes,event_id,last_changed,last_changed_ts,last_reported_ts,last_updated,last_updated_ts,old_state_id,attributes_id,context_id,context_user_id,context_parent_id,origin_idx,context_id_bin,context_user_id_bin,context_parent_id_bin,metadata_id FROM states;
DROP TABLE states;
ALTER TABLE states_temp_1720542205 RENAME TO states;
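-- recreate the indexes the recorder expects on states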
CREATE INDEX ix_states_metadata_id_last_updated_ts ON states (metadata_id, last_updated_ts);
CREATE INDEX ix_states_last_updated_ts ON states (last_updated_ts);
CREATE INDEX ix_states_context_id_bin ON states (context_id_bin);
CREATE INDEX ix_states_attributes_id ON states (attributes_id);
CREATE INDEX ix_states_old_state_id ON states (old_state_id);
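-- check for foreign key violations before committing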
PRAGMA foreign_key_check;
COMMIT;
BEGIN;
PRAGMA foreign_keys=ON;
COMMIT;
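
To run it: stop Home Assistant first (this must happen offline), make a backup copy of the db, then feed the statements to the sqlite3 CLI, roughly like this (the .sql file name is just an example):

cp home-assistant_v2.db home-assistant_v2.db.backup
# rebuild_states.sql = the statements above saved to a file
sqlite3 home-assistant_v2.db < rebuild_states.sql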

I think I might have a db older than May 2024 (mine was created at least a year ago). But I doubt you mean creation date. Could you clarify please?

# sqlite3 home-assistant_v2.db
sqlite> select * from schema_changes;
1|33|2023-03-05 03:12:00.672507
2|34|2023-03-08 12:03:43.173219
3|35|2023-03-08 12:03:43.223264
4|36|2023-04-05 19:53:47.382529
5|37|2023-04-05 19:53:47.405016
6|38|2023-04-05 19:53:48.134406
7|39|2023-04-05 19:53:48.224035
8|40|2023-04-05 19:53:48.270669
9|41|2023-04-05 19:53:48.278902
10|42|2023-11-01 17:59:36.802397

Make sure the last row is 43 or higher. In the above example, it’s only 42.
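
If you just want the latest version, a one-liner like this should also work (assuming the default db filename):

sqlite3 home-assistant_v2.db "SELECT MAX(schema_version) FROM schema_changes;"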

Gotcha, my db schema was at 44:

sqlite> select * from schema_changes;
...
30|42|2023-11-03 00:08:36.442936
31|43|2024-04-05 14:10:17.169585
32|44|2024-08-08 07:54:57.273545

FYI: just finished executing the statements on a 25 GB db. They took around 90 minutes to complete, mostly pegging one core at 100% for long stretches. After uploading the db back to HAOS, HA was able to purge successfully again.
