Recorder ended unfinished session

A few days ago I updated to 0.107.1 AND enabled InfluxDB.
Now I see that the history stops working a while after starting HA with the following log message:

(Recorder) [homeassistant.components.recorder] Ended unfinished session

I’ve found in some threads that migrating to mariadb might help. But what causes the problem and will a migration with history loss really help?


The only time I’ve ever seen that message is when homeassistant hasn’t cleanly shut down (so like a host system reboot without stopping HA first).

I see it regularly if I run ha host reboot and then (once it's back up) run it again.
It used to appear more often, now I can see it only occasionally.
I’m still using the standard DB engine so have no idea if the migration will help.

I wouldn't care so much about the message, but sometimes the history stops working.
Here is another log entry I saw today - but history/recorder still working:
ERROR (SyncWorker_11) [homeassistant.components.recorder.util] Error executing query: (sqlite3.OperationalError) database is locked

When I reboot HA, history typically works at first, but then after a while I find those messages in the log…

It may have something to do with the medium your DB is on (what is it?) - I saw it frequently on an SD card with a lot of entities enabled in recorder.

Yes, I'm using an SD card on an RPi.
How would I use something else? Would a USB stick be better?

A lot of entities: hmm, how many is a lot?
If I check them all on the entities configuration page, it shows 60.

I think people have better luck with SSD via USB, search the forum.
Well, it's not only about how many entities you have but also how often they generate data.
My 7 temperature sensors send humidity, temperature, battery level and update_time data every 50 seconds. And it killed at least one card already :frowning:

Thanks Ahmad, my database file size is about 250 MB - with a 7-day purge.

How would I check the root cause of the database problem?
How would I check if the SD card has a problem?
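For the database side, one quick check is SQLite's built-in consistency check. A minimal sketch (the helper name is mine, and the default HA database path in the comment is an assumption - adjust it to your install; stop HA first so the file is not busy):

```shell
#!/bin/sh
# check_db: run SQLite's built-in consistency check on a database file.
# Prints "ok" when the file passes; otherwise it lists the problems found.
check_db() {
    sqlite3 "$1" "PRAGMA integrity_check;"
}

# Typical call (path is an assumption, adjust to your install):
# check_db /config/home-assistant_v2.db
```

If this reports anything other than "ok", the database file itself is damaged and no engine migration will rescue the existing history.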

Buy an SSD instead of checking the cheap SD card - and then maybe have the same problem anyway…

You can start by testing the speed of your existing SD Card… See the following documentation.
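As a rough sketch of such a speed test (the file name and size are arbitrary choices of mine; write the test file onto the medium your database actually lives on):

```shell
#!/bin/sh
# Rough write/read speed test for the storage medium.
# conv=fdatasync forces the data to disk, so dd's reported write rate
# reflects the card itself rather than the page cache.
TESTFILE=/tmp/speedtest.img
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
# Read it back (note: this may be served from cache on a re-read).
dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```

dd prints the achieved MB/s on stderr after each pass; a healthy card should at least reach its speed-class rating on the write pass.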
I have migrated from the default HA database to MariaDB and have fewer problems… Mine is about 400 MB (7-day purge)… I do recommend optimizing it on a regular basis (to keep the size acceptable).

Optimizing the database: here is my shell script for MariaDB. The first and the last SQL statements calculate the size of the database before and after the optimization. You will see some messages like “Table does not support optimize, doing recreate + analyze instead”, but it works:

cat > /home/homeassistant/.homeassistant/optimize_database.txt <<'EOF'
SELECT table_name AS "Table", ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)" FROM information_schema.TABLES WHERE table_schema = "hass" ORDER BY (data_length + index_length) DESC;
OPTIMIZE TABLE events;
SHOW WARNINGS;
OPTIMIZE TABLE states;
SHOW WARNINGS;
OPTIMIZE TABLE recorder_runs;
SHOW WARNINGS;
OPTIMIZE TABLE schema_changes;
SHOW WARNINGS;
SELECT table_name AS "Table", ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)" FROM information_schema.TABLES WHERE table_schema = "hass" ORDER BY (data_length + index_length) DESC;
EOF

mysql -v --user=<myuser> --password=<my password> --database=hass < /home/homeassistant/.homeassistant/optimize_database.txt

As I said above, imho size alone is not the major issue that impacts SD card lifespan.
Much worse is when you have a few entities that change state frequently, as that creates additional writes to the DB.

@AhmadK Agreed… Size is not a major issue as long as the database is not fragmented (this mainly depends on how fast the database is growing), but over time this fragmentation occurs, performance decreases, and a reorg becomes necessary…
I have at least 4 entities (out of 430) writing to the database every second without problems… but I reorganize the database every day on the fly with the shell script given in my previous post (so no need to shut down HA) to keep it organized, “small” and less fragmented…
Testing the SD card read/write speed can reveal a problem on that side too… Available memory could also be a cause of database problems.
Now regarding SD card lifespan, they are not really designed for intensive writes…
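To run that on a schedule, a hypothetical crontab entry would look like this (assuming you wrapped the statements from my earlier post in an executable script at the path shown - both the path and the log file are placeholders):

```
# Hypothetical crontab entry: rebuild/optimize the MariaDB tables daily at 03:00
0 3 * * * /home/homeassistant/.homeassistant/optimize_database.sh >> /var/log/optimize_database.log 2>&1
```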

Adding it all together: reorganising speeds up your database, but it also adds to the amount of writes, so that “without problems” can end any minute.
I'm just saying that if you are using an SD card, either restrict your recorder to a small number of entities and limit their activity, OR keep a fresh backup/snapshot and a new SD card ready (and don't run any critical tasks on your HA).

@AhmadK Totally agree… I have a backup SD card ready (and I swap from one to the other on a regular basis), do regular backups to a NAS (nearly daily), and have another Pi ready as well (I use it as a test environment or a machine to build new releases of the OS)… I am waiting for the Raspberry Pi Foundation to release the capability to boot from an SSD on the Raspberry Pi 4… It will be way faster and surely more reliable… In the meantime, I am slightly at risk… :slight_smile:

Besides “is it okay to run on an SD card…”:

How do I figure out what causes the log entries?
Raise the log level? If yes, how…
Any other ideas?

ERROR (SyncWorker_3) [homeassistant.components.recorder.util] Error executing query: (sqlite3.OperationalError) database is locked
(Background on this error at: http://sqlalche.me/e/e3q8)

The background link does not help (me)…
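Regarding raising the log level: the logger integration in configuration.yaml can do this per component. A sketch (the sqlalchemy.engine entry is an extra assumption of mine - at info level it logs every SQL statement, which is very verbose, so revert once you have what you need):

```yaml
logger:
  default: warning
  logs:
    homeassistant.components.recorder: debug
    sqlalchemy.engine: info
```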

To understand where the locks are coming from, it is possible to run some queries on a relational database (I did some research on Google, but this is very specialized and needs competencies I do not have to understand and interpret the output)… Maybe an RDBMS expert could help you…

As the problem occurred after adding InfluxDB and migrating to 0.107… Try stopping InfluxDB and see if the problem remains… I am not using InfluxDB, so I have no idea of its impact on the performance of HA and its database… Migrating to MariaDB is also a solution I found in other posts with the same issue, and it fixed it for them… Migrating from SD to SSD (for the Raspberry Pi 4 you will have to wait for SSD boot to become available, but this is “easy” on the Raspberry Pi 3) would definitely help speed up your system, improve reliability, and consequently reduce locks on your database…
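If you do try MariaDB, the switch itself is just the recorder's db_url in configuration.yaml. A sketch (user, password, host and database name are placeholders, and note that the old SQLite history is not migrated automatically):

```yaml
recorder:
  db_url: mysql://hassuser:mypassword@127.0.0.1/hass?charset=utf8mb4
```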

Another approach is to exclude everything from recorder and add back only the entities you really need to record; this will reduce the number of writes and the size of your database… and consequently help avoid deadlocks on your DB…
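A sketch of such a whitelist approach in configuration.yaml (the entity names are placeholders; with an include block, everything not listed is dropped from recorder):

```yaml
recorder:
  purge_keep_days: 7
  include:
    entities:
      - sensor.living_room_temperature
      - binary_sensor.front_door
```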