History/Logbook very slow

Synology NAS, 8 GB of RAM with a dual-core processor.

Did you get any resolution on this? I'm also running a Synology NAS with 8 GB and history is practically not working. I have erased the whole DB and started over clean; it works for 1-2 days, then it is so slow I can hardly use it. (DB ~70 MB)

I'm not sure when this problem started; I'm now on 0.81.6. I know that it worked on 0.76. (DB ~800 MB)

Same problem here; I'm using Hass.io 0.82.1 on an RPi 3B+.

recorder:
  purge_keep_days: 7

Same here. I'm using a Python virtual env with HA 0.82.1 on an RPi 3. I assumed it was due to having way too much data in the database, as every time I delete the database and start from scratch it works instantly for a day or two, then slows down again.

When you restart HA the purge counter resets, so if you are making changes and restarting HA every day, purges simply don't happen.

You can call the purge service manually.
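If restarts keep resetting the counter, a nightly automation can make that call for you. A minimal sketch (the alias and the 03:00 trigger time are arbitrary choices; adjust keep_days to match your retention):

automation:
  - alias: Purge recorder nightly
    trigger:
      platform: time
      at: '03:00:00'
    action:
      service: recorder.purge
      data:
        keep_days: 7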

Using the exclude or include configuration options is also a good way to keep the size under control.
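For example, a sketch along these lines (the excluded domains and entities here are placeholders; pick whatever is noisiest in your setup):

recorder:
  purge_keep_days: 7
  exclude:
    domains:
      - automation
      - updater
    entities:
      - sensor.last_boot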

MySQL has a known problem when using InnoDB: space freed by deleted records is not released back to the filesystem. There's a long discussion about database size and this specific MySQL problem (see the comment from rpitera in Large homeassistant database files). To reclaim the space you have to export the DB, delete it, and import it again! I'm not saying this is what's impacting the performance, but it doesn't hurt to be aware of it :-).

If this is about not releasing space for deleted records, wouldn't a possible option be to set up partitioning by day and purge by dropping partitions?

I have the same problem with a setup of about 15 devices on an RPi 3B+. It seems like that should be more than enough horsepower to retrieve this amount of info from a database and plot it without much delay.

I have been having this issue on my RPi 3B+ for months now; nothing helps.

Same here. I'm running HA in Docker on a Synology DS216+II NAS with 4 GB RAM. My HA DB file is only 300 MB and the logbook takes ages to load. While the logbook is loading, the NAS volume utilization is at ca. 60% and the CPU at 15-20%. Usually when Synology's DB is heavily used the CPU sits at 100%; here it's the other way around.

I had these issues on a Pi, switched to a NUC, and it now loads an entire week (about 2 GB of information for me) in under a second. This is purely a hardware issue, with the SQL lookups being processor-intensive.

And just to clarify, I sit at 1-5% processor use and 2 GB (~25%) of RAM at any given time.

Yup, if you really get into profiling what's going on with collectd or some other high-frequency system monitoring tool, you can see this.

On an i5 NUC, when I click on "Logbook" the CPU goes to 100% for a short period, and disk I/O also shows quite a bump.

On an RPi this just takes longer due to much less CPU power, and probably worse disk performance as well.

I even tested pointing the SQLite database at a tmpfs mount to take disk I/O out of the equation; it was better, but still slow on an RPi.
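For reference, that test only needs a db_url change; a sketch assuming /tmp is a tmpfs mount on your system (the database is then lost on every reboot):

recorder:
  db_url: 'sqlite:////tmp/home-assistant_v2.db'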

When I moved to a NUC I kept running this way for a while (it has lots of RAM) before I set up MariaDB as the database back end. It was faster, but I wanted to retain the logs through system reboots.

To all who suffer from this: I can confirm that using an 'external' database for the recorder component solves the issue well, at least on a Synology NAS. I installed the MariaDB package (I didn't want a dockerized version, to maximize performance) on my Synology and configured HA to use it (config sketch below), and the performance difference is just tremendous! (or huge, as some used to say :wink: )
From ca. 40 seconds(!) (and timeouts) to load the Logbook view, I went down to 2-3 seconds!! Same for the history graph of any sensor. It's excellent.

For some reason the DB itself is also ca. 50% smaller: the default SQLite DB was ca. 300 MB, while the MariaDB one is ca. 170 MB. No wonder it's faster, for this reason too. Most importantly, CPU usage is a lot higher, about 60-70% now compared to 5-10% with the default SQLite DB running in the HA Docker container. I think my bottleneck now is disk operations, not CPU, which is fine (I wonder how this would work with an SSD).

I think the most important thing is that the DB queries are executed outside the HA Docker process; this lets them run more efficiently and use more CPU, which gives much better processing times.
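For anyone wanting to reproduce this: after creating a database and user in the MariaDB package, the recorder only needs a db_url pointing at the NAS. A sketch with placeholder credentials, host and port (check which port your MariaDB package actually listens on):

recorder:
  db_url: 'mysql://hass:MY_PASSWORD@192.168.1.10:3306/homeassistant?charset=utf8'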

I have no experience with your setup, but I run my MariaDB on the same NUC as Hass, a CCTV server, and a Traccar server, and it is VERY fast. No delays. I have 16 GB of memory and an M.2 SSD.

I've moved mine to MariaDB now. A full day of history now loads in about 8 seconds, which is a huge improvement.

I too have this problem, with a hexa-core i7 processor @ 4.3 GHz. I also use an SSD for all my operating systems, including Home Assistant.

I've been reading up on this and want to try MariaDB. My only question: if so many people see such a performance upgrade when switching to MariaDB, why isn't it the default DB in Home Assistant? Just out of curiosity.

Another service to run, another point of failure, more complexity for users who can’t follow instructions…

Sounds logical, and it might run better on an RPi. Anyway, I installed MariaDB and what a blast it is. It was pretty easy to set up (I had to find the dependencies first, but those are listed in the recorder docs). It actually loads the page now :P. Before, the page would try to load and get stuck for minutes, and I tried excluding many entities to ease the load; I guess that's no longer needed. I do, however, have it installed on another host: they are on the same machine but in different VMs (as I intend to use other databases for other purposes).

Thanks for this thread, everyone. It even loads on my iPhone now, whereas before it would always crash the app (iPhone X, iOS 12.3). I can't seem to get the bars to show up, though; it might be a theme issue, since I do see the graphs. And the log can finally be loaded on my iPhone (and fast). Great!

Edit: I tried the standard theme but the bars still don't show; does anyone have ideas? There is definitely something there, as you can see in the screenshots. The weird thing is, I can see the graphs in the history but not the bars showing on/off states.

Oh well, at least it doesn't crash my iPhone anymore and I can read the log (without the app hanging). Better this than completely unusable :stuck_out_tongue:

I still have that problem with an external MariaDB database. It's 500 MB now, and any graph in Hass often takes 5-10 seconds to load!
It's mostly acceptable at the moment but getting slower each day. Soon I will have to purge the DB again, or only keep 5 days so everything stays responsive.
My machine is an ODROID-HC2: HDD over SATA, 8 cores, 2 GB RAM.

If there isn't going to be a solution to this, logging sensor values outside of the Hass DB might be an option, as I get most of my values through MQTT (though not all of them)…
I'd really like to be able to look back 1.5 years, not only 10 days.

How many days are you viewing on screen? I'm only using an RPi 3, but when set to view one day at a time it only takes a few seconds to show the history.