Custom component: Long Time State Storage (LTSS) utilizing TimescaleDB

BTW: Another nice side effect for me is that I can now use the Timescale Docker container for both the HA recorder and the long time state storage (each with its own database, of course).
This saves me the separate Postgres and InfluxDB containers, and I only have to maintain one container now. :slight_smile:


Would it be possible to take the data that is being logged and feed it back into a SQL sensor in order to graph it with the mini-graph card, for example?

select   time_bucket('1 day', time)::date as ts,
         round(cast(cast(last(state, time) as float) - cast(first(state, time) as float) as numeric), 4) as kwh
from     ltss
where    entity_id = 'sensor.main_totalenergy'
and      time::date >= cast(date_trunc('month', current_date) as date)
group by ts;

This returns dates and the daily consumption, and I would love to be able to load it back into a graph.

@Tjeerd, this question is not really directly related to the LTSS component, but here is my take on it:

  • The SQL sensor is 100% compatible with LTSS, but it is implemented as a “typical” HA sensor integration (i.e. it expects a single value on each update and then stores it with the regular recorder instance, etc.). It is not really suitable for this kind of task, where you want to show the history of another entity (regardless of database).
  • The mini-graph Lovelace card depends on the Home Assistant API, more specifically the history sub-API, which is integrated only with the regular history/recorder component of HA. Hence, no luck here unless HA makes the history component a bit more modular (i.e. accepts other storage solutions and not only the regular recorder component).
  • My preferred choice is to use Grafana, which makes it really easy to graph things like your example but is a little more complicated to integrate with the HA frontend. A nice guide can be found here (just ignore all the talk about InfluxDB and replace those steps with LTSS instead). A rough query sketch is shown below.
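
For reference, a query like yours can be used almost unchanged in a Grafana panel backed by the LTSS Postgres database. This is only an untested sketch for illustration; the entity id is taken from your example and $__timeFilter() is Grafana's built-in PostgreSQL time-range macro:

-- daily consumption within the dashboard's selected time range
select      time_bucket('1 day', time) as "time",
            round(cast(cast(last(state, time) as float) - cast(first(state, time) as float) as numeric), 4) as kwh
from        ltss
where       entity_id = 'sensor.main_totalenergy'
and         $__timeFilter(time)
group by    1
order by    1;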

Hope this gives some ideas about your possibilities!

@Tjeerd another option is to write a script that runs daily and generates a custom output/plot. See this thread. You might also want to check out the SMA filter.
Cheers

Thanks for the advice @freol and @robmarkcole!

I went a different route for now. Although Grafana is very flexible, it’s a bit of a pain to get some key stats back into HA. I’ve set up a SQL sensor which logs the min/max for the current day. No history yet, but as it runs it will build up. The value of the sensor goes back into the recorder component, and I can use it in the mini graph card for now :slight_smile:
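
For anyone wanting to do something similar, the query behind such a SQL sensor could look roughly like this. Only a sketch: the entity id is borrowed from the earlier example, the sensor's db_url would have to point at the LTSS database, and since a SQL sensor exposes a single value you would pick one column per sensor (e.g. one sensor for min, one for max):

-- today's min and max for one entity (assumes the states are numeric)
select min(state::float) as min_today,
       max(state::float) as max_today
from   ltss
where  entity_id = 'sensor.main_totalenergy'
and    time::date = current_date;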


@CM000n @freol

I’d really appreciate your advice on the following:

I’ve been trying to figure out how to do that as well ( forum link ), so how best to set that up?
(I’m new to both HA and Docker)

Specifically, I’m getting caught up on where it would be best to mount a Docker volume, or which host directory to bind the Postgres container’s data directory to:

sudo docker pull timescale/timescaledb-postgis

sudo docker run \
  --name chooseContainerName \
  -e POSTGRES_PASSWORD=postgresRootUsersPassword \
  -v /your/data/dir:/var/lib/postgresql/data \
  -p 5432:5432 \
  -d timescale/timescaledb-postgis

So, for the choice of /your/data/dir, would it be better to:
1.) bind it to the same dir as Home Assistant
(in my case, a generic Linux install of Hass.io):
-v /usr/share/hassio/homeassistant:/var/lib/postgresql/data
OR
2.) use default Docker volume location:
-v chooseVolumeName:/var/lib/postgresql/data
OR
3.) somewhere else
?

I would be glad to receive any help on this…
Thanks!

I’m using Docker Compose and storing the Timescale data on my btrfs RAID.
But I think you can store it wherever you want, as long as you have enough space :wink:

  timescale:
    container_name: timescale
    restart: unless-stopped
    image: timescale/timescaledb-postgis:latest
    volumes:
      - /srv/dev-disk-by-id-ata-ST32000542AS_5XW2HFF9/data/docker_data/timescale:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    ports:
      - "5442:5432"

Is it possible to use this to store states for e.g. 1 year? Will it work without any problems on a Raspberry Pi 3?

Even if it is basically feasible, it is probably not a wise decision to run a relational database for long-term data on a Raspberry Pi.
You might encounter performance issues.
Here is a very interesting blog article about PostgreSQL performance on a Raspberry Pi 3: https://blog.rustprooflabs.com/2019/04/postgresql-pgbench-raspberry-pi

It is also questionable whether the storage space on a Raspberry Pi is sufficient for this.
Just to give you a rough idea: I currently have about 350 entities and sensors being tracked in Home Assistant. My LTSS Postgres database has been running since the beginning of the year and already occupies about 400 MB.

Thank you for the answer. So I need to find a different solution for what I want: store at least min, max and average for each hour for the last 3 months, and then, for data older than 3 months, one value per day for each sensor recorded by the recorder. Any ideas?

Mhmm, you could perhaps write an SQL script that aggregates the values to the required level and purges the values that are no longer needed (a rough sketch of such a job is shown further below).
You could run it automatically, for example with the help of cron or pg_cron: https://www.citusdata.com/blog/2016/09/09/pgcron-run-periodic-jobs-in-postgres/.

If you want to use another database, in MySQL there is also an event scheduler: https://dev.mysql.com/doc/refman/5.7/en/event-scheduler.html

This way you could keep your database as small as possible.
Just a quick idea…
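
To make that idea a bit more concrete, here is a rough, untested sketch of what such a setup could look like with pg_cron (assuming pg_cron is installed and configured to run in the LTSS database; the summary table ltss_hourly, its columns and the numeric-state filter are made up for illustration):

-- hypothetical summary table for hourly aggregates
create table if not exists ltss_hourly (
    bucket     timestamptz not null,
    entity_id  text        not null,
    min_state  double precision,
    max_state  double precision,
    avg_state  double precision,
    primary key (bucket, entity_id)
);

-- job 1: every hour (at minute 5), roll up the previous hour of raw ltss rows
select cron.schedule('5 * * * *', $$
    insert into ltss_hourly (bucket, entity_id, min_state, max_state, avg_state)
    select time_bucket('1 hour', time), entity_id,
           min(state::float), max(state::float), avg(state::float)
    from   ltss
    where  time >= date_trunc('hour', now()) - interval '1 hour'
    and    time <  date_trunc('hour', now())
    and    state ~ '^-?[0-9]+(\.[0-9]+)?$'   -- keep only numeric states
    group by 1, 2
    on conflict (bucket, entity_id) do nothing
$$);

-- job 2: once a day at 03:00, purge raw rows older than 3 months
select cron.schedule('0 3 * * *', $$
    delete from ltss where time < now() - interval '3 months'
$$);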

@CM000n @caear1987

A suggestion is to have a look at the docs for TimescaleDB; specifically, these should be of interest:
https://docs.timescale.com/latest/using-timescaledb/continuous-aggregates
https://docs.timescale.com/latest/using-timescaledb/data-retention

Note: I haven’t tried these features myself, so I can’t really say much else.
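
For anyone curious, the pattern from those two doc pages looks roughly like this. It is only an untested sketch: the view name is made up, the cast assumes numeric states only, and the exact statement/function names differ between TimescaleDB versions (this is the 2.x style), so follow the docs above for the release you run:

-- continuous aggregate: hourly min/max/avg per entity
create materialized view ltss_summary_hourly
with (timescaledb.continuous) as
select entity_id,
       time_bucket('1 hour', time) as bucket,
       min(state::float) as min_state,
       max(state::float) as max_state,
       avg(state::float) as avg_state
from   ltss
group by entity_id, bucket;

-- data retention: drop raw ltss chunks older than 3 months (already materialized aggregates are kept)
select add_retention_policy('ltss', interval '3 months');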


The timescaledb-postgis image you linked in your GitHub repo doesn’t support Raspberry Pis (ARMv7 32-bit), so I created my own container here, in case anybody wants to use it.

Sorry, where is your docker container?

Seems like I forgot the link, sorry.


Hi @freol, I just wanted to let you know that since HA version 0.109 I’m getting an error message which is probably caused by ltss:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/decorators.py", line 20, in _handle_async_response
    await func(hass, connection, msg)
  File "/usr/src/homeassistant/homeassistant/components/frontend/__init__.py", line 550, in websocket_get_translations
    msg.get("config_flow"),
  File "/usr/src/homeassistant/homeassistant/helpers/translation.py", line 329, in async_get_translations
    resources = flatten(resource_func(results[0], components, category))
  File "/usr/src/homeassistant/homeassistant/helpers/translation.py", line 127, in merge_resources
    new_value = translation_strings[component].get(category)
KeyError: 'ltss'

And as a Feature Request: Could you maybe also add HACS support to ltss?

Many thanks in advance, and thanks again for your great component.
Greetings

Thanks for the heads-up @CM000n. Can you please open an issue in the GitHub repo? (Link in the first post.)
FYI, it seems to be related to this issue: https://github.com/home-assistant/core/issues/34857

For your feature request, please also put this in the issue tracker on the GitHub repo. Unfortunately, I have no previous experience with HACS and don’t use it myself. If you feel up for the challenge, I am happy to accept a pull request for this.

Hello @freol, thanks for your quick feedback.

Yes, I also think the error message is related to the recent changes in how translations are handled.

Regarding HACS support, I just thought it would help ltss become known to a wider audience and make it easier to track features and fixes.

I don’t know much about the requirements myself, but it seems you’ll have to restructure your repo a bit for that.

I’ll see to it that I create a ticket for both things on GitHub. If you don’t mind, I might also read up on it and send you a PR for HACS support.

Best regards


After reading up on this component, I created a Home Assistant add-on with TimescaleDB to work together with this component.

Anybody interested in taking it for a test drive?

Here is the repo url: https://github.com/Expaso/hassos-addons


I am using it on 64-bit now and it works. I had it running on 32-bit, but the PostGIS extension was missing there.

However, is there any difference between using “ltss” and using the “recorder with Postgres directly”?
I’ve migrated from SQLite to TimescaleDB, so my Home Assistant is pushing states into Postgres already.

Now that I’ve activated LTSS, I wonder what’s different, since I can configure which entities to store just as in the recorder component.
Don’t get me wrong, there might be a use case for this for someone, but I just don’t see it right now. I would love to get feedback from others who are using this add-on.

Maybe it was because the recorder component was not optimized until some of the latest HA versions, which implemented “FROM”/“TO” limits for the Logbook or History Graph?