Yes, but when you delete the database, it will start working immediately after you restart. I’m trying to figure out whether you’re looking in the correct spot or not; it was not clear from your posts until the last one. Can you post your configuration.yaml file?
To clarify: the issues in this thread are related to database migrations. When you delete the database and start fresh, there is no migration, meaning your issues are not tied to the 2024.7 recorder issues. I’m trying to identify what your issue actually is.
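When you do, the recorder and logbook sections are the first things I would check: anything the recorder excludes is also dropped from history and the logbook. As a rough sketch of the kind of block to look for (the entity names here are only examples):

recorder:
  purge_keep_days: 10
  exclude:
    domains:
      - automation
    entities:
      - sensor.example_noisy_sensor

logbook:
  exclude:
    entities:
      - sensor.example_noisy_sensor

If there is no recorder: or logbook: block at all and you only have default_config:, that is fine too; both are enabled by default.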
I have deleted the DB more than once and it does get recreated, but my problem remains: there are no logbook entries. I could start a new thread if that would be better. Here is my configuration.yaml file. Thanks for any help you can provide. configuration.yaml
I just made a bit of a discovery, although I don’t know what to make of it. If I have the sensor info dialog open when the device changes state, the logbook in that window shows the entry for the change, but only while the window is open. If I close the window and re-open it, that recent logbook entry no longer shows. Here are three screenshots: before the state change, after the state change, and after closing and re-opening the dialog. (The 5am entries appear to be from after a restart, and those are the only logbook entries that show up for any device.)
After finally biting the bullet and upgrading from 2024.6 to the new 2024.9.1 release, I am having the same issue as mentioned above: no values seem to get logged/stored. Values do show up when you have a specific entity open and wait for it to update, but when you close and open it again there is no data (both browser and app behave the same). What is interesting to note is that I am also exporting to InfluxDB, and there the data is being stored, so I am positive HA is handling the data. I rolled back to a backup but had no luck; it resulted in a non-functional instance, and I had to hard-restore from a full system and DB backup from a month ago to get back to the state I am in now. Luckily I only lost a few days of data.
I am using MariaDB as a storage DB.
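For reference, the recorder points at MariaDB with a db_url along these lines (the host and credentials below are placeholders, not my real ones):

recorder:
  db_url: mysql://homeassistant:PASSWORD@core-mariadb/homeassistant?charset=utf8mb4

(core-mariadb is the hostname used by the official MariaDB add-on; a standalone server would use its own host name.)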
My current version stack:
Core 2024.9.1
Supervisor 2024.08.0
Operating System 13.1
Frontend 20240906.0
I really hope there is something I am just missing; I would hate to lose years of data if I have to start from scratch.
Are there any thoughts on what this could be, or where I can best look?
If my explanation is not clear, maybe this image explains the problem I’m facing.
Hehe, isn’t it always. Sadly, I have scoured the logs for hours and did the migration twice (after restoring the backup), but nothing really pops up as special or new. (There are plenty of warnings and errors, but I’m aware of those and they have been there for a long time without issue.)
As to what I can find:
MariaDB log contains nothing past a successful boot of the container
The core log has nothing new
The supervisor log has one error I cannot place, but my feeling is that it’s not related:
2024-09-07 19:09:25.022 ERROR (MainThread) [supervisor.api.ingress] Stream error with http://172.30.33.2:1337/stable-0b84523121d6302fbe30eda7899ec3b81810748e/static/out/vs/loader.js: Cannot write to closing transport
2024-09-07 19:09:25.022 ERROR (MainThread) [supervisor.api.ingress] Stream error with http://172.30.33.2:1337/stable-0b84523121d6302fbe30eda7899ec3b81810748e/static/out/vs/workbench/workbench.web.main.css: Cannot write to closing transport
2024-09-07 19:09:27.439 ERROR (MainThread) [supervisor.api.ingress] Stream error with http://172.30.33.2:1337/stable-0b84523121d6302fbe30eda7899ec3b81810748e/static/out/vs/editor/common/services/editorSimpleWorker.nls.js: Cannot write to closing transport
NodeRed has no logging beyond normal
Mosquitto has no errors
So I was getting lost as to where to go next. A fresh install, as mentioned, works, but I really dread losing all my history and most of my configuration now that it lives in the DB as well. I noticed a very similar problem statement from ihf1 and thought it might be worth asking if there is anything known that could cause this behaviour.
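The only next step I can think of is raising the recorder’s log level before trying the migration again, roughly like this, to see whether it logs anything useful:

logger:
  default: warning
  logs:
    homeassistant.components.recorder: debug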
For anyone coming across this topic, I have done some more experimenting.
Removing all history returned the behaviour to normal
Pushing some history back in after an upgrade retains normal behaviour
Booting with ‘old’ data (i.e. data with a large time gap of roughly a month to the current time) in a version from before the recorder changes (2024.6 or lower) results in similar behaviour
This indicates that the behaviour is likely not related to the changes made here.
In the end I was able to upgrade successfully by making sure no clock changes could happen: the system was forced to use the BIOS clock before starting the upgrade, so the system clock could not change. After the expected transition time for the recorder to finish its upgrade, functionality was restored to normal.
My best guess is that this happens (at least in my case) because the system picks up a wrong time during the upgrade. I have no way to prove this, sadly, and for now I’m just happy to be back up and running with minimal data loss.
Hope this helps someone; much love to all the people working to make HA the amazing tool it is.
Hey, my database has now grown to 28 GB, and a backup is now 13.6 GB. I am trying to move the backup to a new HA instance, but it’s not starting up after the restore.
Can someone give me a hint on how to find out what’s wrong with my HA instance?
I’ve just upgraded from (I think) 2024.4.3 to 2024.10.4.
The OS went from 12.2 to 13.2.
I had the recorder issue with 2024.7, so I restored a Proxmox VM backup back then, and it looks like I’ll have to do the same now, as I’m seeing the exact same issue.
All entities are showing live data, but it isn’t getting recorded.
Nothing obvious showing in the logs.
Any ideas where to look to fix this one? I would really like to keep my system updated.
Edit: I’ve just seen that my database file is 67 GB.
Is there a database upgrade process that happens on update which prevents data being recorded?
I guess my next question is not how do I reduce this, but should I move this to an external database? Is it possible to do that migration?
I want to keep the data as a lot of it is useful for me. I have a lot of sensors/entities.
Just wait for it to finish updating. It will probably take a couple of days, if not weeks, with 67 GB of data. It really depends on how fast your machine is.
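Once the migration has finished, you can cap retention so the database stops growing; roughly something like this in configuration.yaml (the number of days is just an example):

recorder:
  purge_keep_days: 30   # default is 10
  auto_purge: true      # nightly purge of anything older than purge_keep_days
  auto_repack: true     # periodic repack so the file actually shrinks on disk

There is also a recorder.purge action you can call manually with repack: true if you want to reclaim the space right away.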
Thank you.
I can’t really wait that long as my HA instance has become quite central to the running of the house, so I guess I need to look at purging some data.
Is there any way to offload historic data so it can still be accessed in another way?
(The system is running on a Proxmox VM with a Core i7-6700T and 8 GB of RAM dedicated to the HA VM, which could be increased.)
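An earlier post mentioned exporting to InfluxDB, so I assume something along these lines would let me keep long-term history queryable outside the recorder database (host and credentials here are placeholders, and I haven’t set this up yet):

influxdb:
  host: localhost            # placeholder for the InfluxDB server or add-on host
  port: 8086
  database: homeassistant
  username: !secret influxdb_user
  password: !secret influxdb_password
  include:
    domains:
      - sensor

As far as I understand, though, this only captures new data going forward, not the history already sitting in the recorder database.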