Long-term statistics should be possible without manually installing Grafana and InfluxDB

Being able to keep values for multiple years to track trends and amortisation is a nice thing to have.
Having to fiddle with InfluxDB and Grafana makes this more of a hassle than it should be.
I would therefore like a performant database that is capable of long-term storage by default, so we don't need to worry about duplicating values into a second database and pulling them back out again for visualisation, further automation, and proper display.

Have you read this through?
Home Assistant Statistics | Home Assistant Data Science Portal (home-assistant.io)

The problem very likely is that keeping all the data will grow the HA database to gigantic proportions. When you restart HA it needs to read the database, and the bigger the database is, the more time it needs to start everything up. InfluxDB and Grafana come in handy for moving all that data to another data source, to keep HA as responsive as possible.
Maybe I'm wrong about this, but this is how I see it. My HA database is over 11 GB.

LTS (long-term statistics) were introduced for exactly this requirement.

Give your sensor entities a state_class and LTS will be recorded. Forever.
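
For example, with a template sensor you set it directly in YAML, and for a sensor from an integration that lacks one, customize can usually add it. A minimal sketch (the entity names here are made up):

```yaml
# configuration.yaml (a sketch; sensor names are hypothetical)
template:
  - sensor:
      - name: "Garden Power"
        unit_of_measurement: "W"
        device_class: power
        state_class: measurement   # enables LTS recording for this sensor
        state: "{{ states('sensor.garden_plug_power') | float(0) }}"

# For an existing sensor without a state_class, customize can add one:
homeassistant:
  customize:
    sensor.my_power_meter:
      state_class: measurement
```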

There are 5-minute max, min and average records for each entity. These are aggregated to hourly max, min and average after the period set by the purge_keep_days recorder setting (10 days by default). So you end up with only 24 x 3 measurements per entity per day, which does not grow the database at any significant rate.
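
That retention window is just the standard recorder option, if you want to change it:

```yaml
# configuration.yaml: purge_keep_days controls how long full state
# history and the short-term 5-minute statistics are kept before purging
recorder:
  purge_keep_days: 10   # the default
```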

You can use LTS in the core statistics card or third party cards like ApexCharts.
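
A minimal statistics graph card config might look like this (the entity is a placeholder):

```yaml
# Lovelace card: the core statistics graph card
type: statistics-graph
entities:
  - sensor.outdoor_temperature
stat_types:
  - min
  - mean
  - max
period: day
days_to_show: 365
```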

LTS are also now used for the more-info pop-up history chart for sensor entities with a state_class, to enable faster front-end loading times.


This is inaccurate. HA doesn’t keep the whole DB in memory.

I understand that HA doesn't keep the whole database in memory, but it probably uses some part of the DB when it starts up. From what I saw, the bigger the DB, the longer the startup time is.
I saw that some integrations or even some helpers can extend HA startup time, but I'm not sure about the rest.

It does read the DB at startup to restore entity states, which would make that a function of the number of entities. This in itself wouldn't be very slow. If it reads history, that's very likely connected to what is displayed on your dashboard, which will vary wildly given the different things people can do. Plus there can be any number of SQL sensors. Then there's the question of which DB engine is used.

Probably because the DB gets slower with more data if it isn't optimised with the right indexes, regular maintenance tasks and so on, not because it loads more of that data.

HA places a limit on how long it will wait for an integration to load during a restart. I think it’s 10 seconds. After that, it will retry it in the background in order to get the service up as quickly as possible.


Thank you for your reply. Yeah, I should take a little more care of my database. Just a few recorder tweaks and the DB shrank from 11.7 GB to 5.3 GB: more than 6 GB of data that I never use or need.
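
For anyone else in the same situation, the kind of recorder tweaks that help are excluding chatty entities and lowering the retention window. A sketch (the excluded domains and globs are just examples, not my actual config):

```yaml
# configuration.yaml: example recorder tweaks to shrink the database;
# which domains/entities to exclude depends on your setup
recorder:
  purge_keep_days: 7          # keep a shorter short-term history
  exclude:
    domains:
      - automation            # don't record automation state changes
    entity_globs:
      - sensor.*_linkquality  # noisy diagnostic sensors
    entities:
      - sensor.date_time
```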
