Charts loading is slow - Is it worth migrating recorder to PostgreSQL?

Hi there,

I noticed this happening more and more often:

I have a few tabs with a lot of graphs (the default Lovelace widget), and they tend to take a few seconds to show all the data.

Home Assistant is virtualized on Proxmox on a NUC12 with 64GB / Enterprise NVMe / Ceph RBD.

SQLite DB is around 1.3GB.

What could I do to speed up SQLite? How painful is it to migrate to a different recorder while keeping historical data?

Thanks,

G.

I checked CPU usage and while loading the tab with the most charts, I noticed the CPU was hit hard.

Are you sure it’s SQLite? Passing a lot of state and historical data to the front end is heavy for HA to begin with.

BTW, there have been so many DB improvements over the past couple of years that I'm considering migrating back from PG to SQLite.

If you migrate to PG, you’ll need to migrate data manually.

I’m not sure it is related to SQLite to be honest. Just trying to find a solution to avoid this. Are there any ways of caching this data?

There’s caching of assets (like images), but state data is pretty much real-time from what I understand. I’m not sure about all the mechanisms involved here.

Are those all built-in cards? Perhaps reduce the number of cards as a test to see if it improves performance.

Also check if there’s a particular integration keeping your system busy.

Memory usage is very high, especially for an 8GB allocation. But this might be how Proxmox allocates it to the VM rather than real usage within the VM. I run my HA on ESX with 4 cores and 4GB of RAM, and while the host marks all RAM as used, only 30% is active. I have quite a large instance with 3500+ entities.
I also have some very complex dashboards, similar to or more complex than your example. One with 43 mini graphs holding 24h of data at 30min resolution loads in under a second. The most complex one I have (a sort of standard history dashboard) holds 67 entities (with data resolution from hours down to 10 seconds) and 1 month of history, and loads in under 6 seconds. Loading such a dashboard puts no stress at all on the backend (HA) or the browser.

And I have a 10GB+ MariaDB running in Docker on a remote NAS, where the load is heavy: it jumps from less than 1% when idling to ~60% average (within a 5s resolution window) while that most complex dashboard loads. What is important is that the DB runs on an SSD (which made a huge difference, going from 2~3 minutes on spinning disks to ~30s, but you are already there) and has heavily optimised DB parameters, which is not possible for SQLite (that took it from 30s to 5s). On the other hand, as was already mentioned, SQLite has since been optimised by the developers, and many people on this forum have even advised migrating back to SQLite from other solutions.
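For a concrete idea of what "heavily optimised DB parameters" can mean for a recorder-style workload, here is a minimal sketch of a MariaDB config fragment. The specific values are assumptions to adapt to your own hardware and DB size, not the poster's actual settings:

```ini
# my.cnf fragment -- example values only, tune to your hardware
[mysqld]
# Cache as much of the DB in RAM as possible (e.g. ~4G for a ~10G DB)
innodb_buffer_pool_size = 4G
# Trade a little crash durability for much faster commits under
# constant small inserts, which is what the HA recorder produces
innodb_flush_log_at_trx_commit = 2
# A larger redo log reduces checkpoint pressure under steady writes
innodb_log_file_size = 512M
```

Equivalent knobs simply don't exist for the recorder's embedded SQLite database, which is the point being made above.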
Migration itself is not very complex (at least for MariaDB; not sure about PostgreSQL). It is just adding one line to configuration.yaml to point the recorder at the new DB, then copying the data from the old DB to the new one (I used HeidiSQL). It took me ~4 hours, due to the amount of data and doing the migration as a backup/restore process rather than a direct data copy, which would have been even faster.
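For reference, the "one line" is the recorder's `db_url` in configuration.yaml. Host, port, user, password, and database name below are placeholders to replace with your own:

```yaml
# configuration.yaml -- placeholder credentials, adjust to your setup
recorder:
  db_url: mysql://hauser:password@192.168.1.10:3306/homeassistant?charset=utf8mb4
```

With this line removed, HA falls back to the default SQLite database, which is what makes the rollback described later in the thread so simple.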

Those are built-in cards, I’m trying to stay away from too heavy customisation to avoid issues while upgrading HA.

Nothing obvious. My integrations are device-related, nothing cosmetic.

This is perfectly fine, HA tends to use as much memory as you give it. Inside HA, the memory consumption is fairly low.

HA is currently on Ceph RBD with my 3x NUC12 nodes and DC2000B enterprise NVMes. It uses a Thunderbolt 4 network ring dedicated to Ceph (tested at 26Gbps). Latency does not seem to be an issue; commits are under 1ms for Ceph.

From what I read, the main issue is migrating the existing data. But I might give it a try one of these days: just back up the VM and try moving to PostgreSQL to see whether it changes anything.
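If you do test PostgreSQL, the recorder line follows the same pattern as for MariaDB, just with a different URL scheme. Credentials and host below are placeholders:

```yaml
# configuration.yaml -- PostgreSQL variant, placeholder credentials
recorder:
  db_url: postgresql://hauser:password@192.168.1.10/homeassistant
```

Since the old SQLite file is left untouched, reverting the experiment is just removing this line and restarting HA.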

Perhaps I was not clear on this point. I used HeidiSQL to copy the existing SQLite data to my local PC and then restored that data into MariaDB running in Docker. Then I redirected HA via the config file to use MariaDB instead of SQLite. No data loss, just a short break for the migration while HA was shut down. And since I left the old data intact, rollback in case of failure is as simple as removing one line from the config and restarting HA…