I hasten to add that I’m not talking about 10 GB. I’m not even talking about 50 GB. I’m currently sitting in the vicinity of 80 GB.
(And I should say that I’ve already pruned the heaviest contributors and too-frequent updaters from the entities I’m storing. I think the size is simply down to having close to 3,000 entities: the biggest single contributor accounts for around 3.5% of the space, and only six of them exceed 1%, so I think the gains to be had from adjusting my recorder settings are pretty much used up.)
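For anyone wanting to reproduce that per-entity breakdown, a query along these lines will surface the biggest contributors. This is a sketch assuming the current recorder schema, where `states` references `states_meta` via `metadata_id`; older HA versions stored `entity_id` directly on `states`, so adjust accordingly:

```sql
-- Row count per entity, as a rough proxy for storage share.
-- Assumes the post-2023.4 recorder schema (states + states_meta).
SELECT sm.entity_id,
       COUNT(*) AS state_rows
FROM states AS s
JOIN states_meta AS sm ON s.metadata_id = sm.metadata_id
GROUP BY sm.entity_id
ORDER BY state_rows DESC
LIMIT 20;
```

Row counts aren’t exactly bytes on disk (attribute payloads vary a lot), but they’re usually a good first approximation of which entities dominate.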
I’m currently using a remote MySQL 8.2 container to host the recorder database (I run HA on my cluster with its config directory NFS-mounted from the NAS, so SQLite isn’t an option, and I’d rather not tie the DB to any one cluster node).
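For context, pointing the recorder at an external MySQL instance is done with the recorder integration’s `db_url` option. A minimal sketch, with the host, user, and database names as placeholders:

```yaml
# configuration.yaml (hypothetical credentials and host)
recorder:
  db_url: mysql://hauser:hapassword@192.168.1.10/ha_recorder?charset=utf8mb4
```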
But this is starting to creak. HA still performs well for automations and executing actions, but fetching data to display on the dashboard, particularly for history graphs and other data-over-time displays, has become very slow indeed.
I’m looking for advice on how to optimize this. Would Postgres handle a database of this size, or this kind of read workload, any better? Is there anything else I’ve overlooked that would improve performance in this sort of scenario? Anyone else out there with a really big database and advice to offer?
All thoughts much appreciated!