Strange Behaviour: Template sensor not being logged in recorder/database. "Show More" also odd

petro - IT IS NOT occurring on my other VMs

A new VM with only two entities is fine

The VM with the 40 climate entities (the production system) and hundreds of templates is not

Then it’s your original installation that’s the problem…

I “am” running default_config

From what I’ve read, you created a new VM that exhibits the same issue as your previous one. Yes or no?

Check Settings → Logs for errors.

I do have to work out how to fix it though…

It’s 50,000+ lines of code… the Lovelace alone is that much

nothing in the logs

I just read your message on Discord, so your small test does not have the problem. If that’s the case, I’d try to restore a backup from your ‘non-working’ version into a fresh install on a separate VM.

Already done - still faulty

Then just restore the configuration files; it’ll have to be a manual process.

As you say - I’m going to have to build it from scratch one file at a time until it breaks

In the end I either find the issue or it works… either way I get a working system

Petro - thank you for your patience. I will get back to you in this thread with the result in the next couple of days


So I’ve done a lot of further research on this. I rebuilt everything on bare metal rather than in a VM, and everything works. It has been running for a week with close to 100 reboots and 4-5 power cycles (due to development while at the same time trying to diagnose this issue) and no problems so far. Recording works perfectly.

I can confirm 100% that a single VM that runs for days without disturbance or power cycling will “record” perfectly, with data being logged at 1-second intervals. After 4 days, I power-cycled the VM twice - corruption occurred and the 4 days of prior data are no longer accessible. I interrogated the database with HeidiSQL and found a crazy number of NULLs after the suspected “corruption” point, where I would have expected to see entity IDs (I’m an amateur at relational databases, so this is in no way from expertise).
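For anyone who wants to reproduce this check without HeidiSQL, here is a minimal sketch in Python. It assumes the default SQLite recorder database (`home-assistant_v2.db`) and the post-2023 schema, where `states` rows link to entity IDs via `states.metadata_id` → `states_meta`; the schema below is a deliberately simplified in-memory stand-in so the query can be demonstrated end to end:

```python
import sqlite3

# Point this at a *copy* of your recorder database on a real install,
# e.g. "home-assistant_v2.db". Here we use an in-memory stand-in with a
# simplified version of the post-2023 schema.
DB_PATH = ":memory:"

conn = sqlite3.connect(DB_PATH)
conn.executescript(
    """
    CREATE TABLE IF NOT EXISTS states_meta (
        metadata_id INTEGER PRIMARY KEY,
        entity_id   TEXT
    );
    CREATE TABLE IF NOT EXISTS states (
        state_id    INTEGER PRIMARY KEY,
        metadata_id INTEGER,  -- NULL here is what appeared after the corruption
        state       TEXT
    );
    """
)

# Sample data: one healthy row and two "orphaned" rows with no entity link.
conn.execute("INSERT INTO states_meta VALUES (1, 'climate.living_room')")
conn.executemany(
    "INSERT INTO states (metadata_id, state) VALUES (?, ?)",
    [(1, "heat"), (None, "21.5"), (None, "idle")],
)

# Count rows that can no longer be tied back to any entity.
orphaned = conn.execute(
    "SELECT COUNT(*) FROM states WHERE metadata_id IS NULL"
).fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM states").fetchone()[0]
print(f"{orphaned} of {total} state rows have no entity link")
```

A large orphan count concentrated after one point in time would match the symptom described above; on an older (pre-2023) schema you would check `entity_id IS NULL` on the `states` table directly instead.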

From my research, TBH I think the database is far more fragile under VirtualBox than on bare metal. Possibly because when a power-off is initiated from VBox, there is a distinct “chance” that VBox does not send the correct shutdown signal (ACPI) to HA, or that HA does not recognize it properly (anecdotal, but tested). Understandably - I get it, Power Off is POWER OFF.

However, from what I’ve experienced over the past 2 weeks, after close to 60 HOURS spent hunting for a cause/solution to this issue, I don’t hold much faith in HA surviving an ungraceful power outage in either a VM or on bare metal, although bare metal appears to be more reliable (touch wood).

“Back in the day” (I’ve been using HA since 0.72) HA used to be bulletproof: with multiple power reboots (back when we used Pi 3s and would just pull the power) I never had a problem on anything prior to 2023.x.x. But with the changes to the database in the early 2023 releases concerning writes to SD cards, I’m not so sure “ungraceful” shutdowns are tolerated as well as they used to be, and HA is not as bulletproof as earlier releases.
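For reference, the write batching being alluded to is tunable via the recorder integration’s `commit_interval` option; a hypothetical config fragment (not a fix for the corruption itself, just the relevant knobs) might look like:

```yaml
# configuration.yaml - example values only
recorder:
  commit_interval: 5    # batch database commits every 5 s instead of the default 1 s
  purge_keep_days: 7    # keep a week of history to limit database size
```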

Anyway - this is just the conclusion of my research. I’m on a dedicated machine now, have carried all the YAML files over from the VM to the new machine, and all is working well.

Thought I’d just let you know the outcome.