Custom component: Long Time State Storage (LTSS) utilizing TimescaleDB

The timescaledb-postgis image you linked in your GitHub repo doesn’t support Raspberry Pis/armv7 (32-bit), so I created my own container here, in case anybody wants to use it.

Sorry, where is your docker container?

Seems like I forgot the link, sorry.


Hi @freol, I just wanted to let you know that since HA version 0.109 I’m getting an error message which is probably caused by ltss:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/decorators.py", line 20, in _handle_async_response
    await func(hass, connection, msg)
  File "/usr/src/homeassistant/homeassistant/components/frontend/__init__.py", line 550, in websocket_get_translations
    msg.get("config_flow"),
  File "/usr/src/homeassistant/homeassistant/helpers/translation.py", line 329, in async_get_translations
    resources = flatten(resource_func(results[0], components, category))
  File "/usr/src/homeassistant/homeassistant/helpers/translation.py", line 127, in merge_resources
    new_value = translation_strings[component].get(category)
KeyError: 'ltss'

And as a Feature Request: Could you maybe also add HACS support to ltss?

Many thanks in advance, and thanks again for your great component
Greetings

Thanks for the heads-up @CM000n. Can you please open an issue in the GitHub repo? (link in the first post)
FYI, it seems to be related to this issue: https://github.com/home-assistant/core/issues/34857

For your feature request, please also put this in the issue tracker on the GitHub repo. Unfortunately I have no previous experience with HACS and don’t use it myself. If you feel up for the challenge, I am happy to accept a pull request for this.

Hello @freol, thanks for your quick feedback.

Yes, I also think the error message is related to the recent changes in how translations are handled.

Regarding HACS support, I just thought it would help ltss become known to a wider audience and make it easier to track features and fixes.

I don’t know much about the requirements myself, but it seems you’ll have to restructure your repo a bit for that.

I’ll see to it that I create an issue for both things on GitHub so they can be tracked. If you don’t mind, I might really read up on it and send you a PR for HACS support.

Best regards


After reading up on this component, I created a Home Assistant add-on with TimescaleDB to work together with this component.

Anybody interested in taking it for a test drive?

Here is the repo url: https://github.com/Expaso/hassos-addons


I am using it on 64-bit now and it works. I had it running on 32-bit, but the postgis extension was missing there.

However, is there any difference between using “ltss” and the recorder component with Postgres directly?
I’ve migrated from SQLite to TimescaleDB, and thus my Home Assistant is pushing states into Postgres already.

Now that I’ve activated LTSS, I wonder what’s different, as I can configure which entities to store just as in the recorder component.
Don’t get me wrong, there might be a use case for this for someone, but I just don’t see it right now. I would love to get feedback from others who are using this addon.

Maybe it is because the recorder component was not optimized until some of the latest HA versions, which implemented “FROM”/“TO” limits for the Logbook and History Graph?

@boesing
The main differences between LTSS and the regular recorder component with Postgres as backend would be the following:

  • The recorder component uses “vanilla” Postgres whereas LTSS makes use of two extensions, TimescaleDB and PostGIS. TimescaleDB brings advanced timeseries handling/analytics functionality to Postgres and PostGIS enables GIS-powered data types and queries.
  • The recorder component stores both HA states and HA events whereas LTSS stores only HA states, reducing the size of the database significantly.
  • The recorder component stores state attributes as JSON-encoded text strings whereas LTSS makes use of the Postgres-specific JSONB type (a binary JSON representation), which increases performance and reduces database size further (illustrated below).
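
As an illustration of what JSONB buys you, attributes can be queried directly in SQL without any JSON parsing in the client. A minimal sketch, assuming the default ltss columns (time, entity_id, state, attributes) and a made-up entity id:

SELECT "time",
       state,
       attributes->>'unit_of_measurement' AS unit  -- JSONB field access in plain SQL
FROM ltss
WHERE entity_id = 'sensor.outdoor_temperature'     -- hypothetical entity id
ORDER BY "time" DESC
LIMIT 10;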

In general, the recorder component is well integrated with HA and should be your go-to choice if you only want to see your sensor readings from the last couple of weeks/months using Logbook/History Graph. LTSS is tuned for efficiently storing only state data (sensor readings) over long time periods (i.e. no purging of data after a couple of weeks/months), with better query performance and more advanced analytics functionality included.
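
To give a flavour of that analytics functionality, here is a sketch of a TimescaleDB time_bucket() aggregation, again with a made-up sensor name:

-- Daily averages over the last 30 days; non-numeric states are skipped
SELECT time_bucket('1 day', "time") AS day,
       avg(state::float) AS avg_value
FROM ltss
WHERE entity_id = 'sensor.outdoor_temperature'
  AND "time" > now() - INTERVAL '30 days'
  AND state NOT IN ('unavailable', 'unknown')
GROUP BY day
ORDER BY day;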

Personally I run both the recorder component (using Postgres as a backend) with a short retention time (1 week) AND LTSS to make sure I keep my sensor readings “forever” in a space-efficient manner.

I hope this answers your questions!


Thank you for LTSS!

According to best practice, a chunk should not be larger than 25% of the available RAM. Since I run TimescaleDB on a Pi4 2GB, I have allocated it a maximum of 512MB. At 25% of that, 128MB would be the maximum size a chunk can have. But since I generate 150MB per day (>300MB for sure in the future), I set the interval to 12h (SELECT set_chunk_time_interval('ltss', INTERVAL '12 hours');).

However, this only affects the next chunk. The current one with 30 days is still running. The only way to shrink an existing chunk is, in my opinion, to export the database, recreate the database (with chunk interval set to 12h) and import the data again.
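
For reference, you can check which chunks currently exist for the hypertable; a sketch, assuming a reasonably recent TimescaleDB version:

-- List all chunks belonging to the ltss hypertable
SELECT show_chunks('ltss');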

But how can I tell LTSS to create the database with a 12h chunk interval?

Update:
For me locally, I have adapted __init__.py, line 311, from
chunk_time_interval => interval '1 month',
to
chunk_time_interval => interval '12 hours',
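
For context, that line is part of the SQL that turns the ltss table into a hypertable. A sketch of what the modified call boils down to, assuming the component invokes create_hypertable roughly like this:

-- Convert the ltss table into a hypertable partitioned on "time",
-- using 12-hour chunks instead of the default 1 month
SELECT create_hypertable(
    'ltss', 'time',
    chunk_time_interval => INTERVAL '12 hours',
    if_not_exists => TRUE
);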

I previously disabled ltss in configuration.yaml, then restarted HA, then deleted and recreated the database, then re-enabled ltss (after the code change), then restarted HA.

It worked great (as did the prior backup to CSV and the subsequent import). But maybe it’s an idea to make the chunk interval a configurable value? I think 30 days might be too much for many.

@bingo5
Thanks for the idea, and I am glad that you find LTSS useful.

I opened an issue (https://github.com/freol35241/ltss/issues/19) to keep track of this idea. Feel free to contribute!

I must say I am a bit surprised that you manage to generate some 150-300MB of sensor data daily; that must amount to a fair number of sensors in your HA setup? In comparison, my setup generates ~300MB per month, hence I have had no problem with the default 1 month chunk interval.

Maybe a stupid question, but I see that the ltss component creates a database table called ‘ltss’ in the database you specify. Does this mean you can use the same database as the recorder database?
For example homeassistant with this TimescaleDB add-on config?

databases:
  - homeassistant
  - dmsrreader
timescale_enabled:
  - homeassistant
timescaledb:
  telemetry: basic
  maxmemory: 512MB
  maxcpus: '4'
max_connections: 20
system_packages: []
init_commands: []

@rdnzl
Simple answer: Yes!

Longer answer: Yes, provided that the database has the extensions required by LTSS installed (timescaledb and postgis). I am assuming you are talking about this add-on (Home Assistant Add-on: PostgreSQL + TimescaleDB) in which case both the timescaledb and postgis extensions are provided in the package and hence should work out of the box.
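
If you ever point LTSS at a database where those extensions are not yet enabled, they can be activated manually. A minimal sketch, assuming the extension packages are installed on the server and you have sufficient privileges:

-- Enable the extensions LTSS depends on (run once per database)
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
CREATE EXTENSION IF NOT EXISTS postgis;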

Nice, thanks, I like to keep it simple! Next up: getting Grafana to use it. Thanks for your great work and this reply!

ltss
(map)(Required) 
Enables the ltss integration. Only allowed once.

    db_url
    (string)(Required)
    The URL that points to your database.

    exclude
    (map)(Optional)
    Configure which integrations should be excluded from recordings.

        domains
        (list)(Optional)
        The list of domains to be excluded from recordings.

        entities
        (list)(Optional)
        The list of entity ids to be excluded from recordings.

    include
    (map)(Optional)
    Configure which integrations should be included in recordings. If set, all other entities will not be recorded.

        domains
        (list)(Optional)
        The list of domains to be included in the recordings.

        entities
        (list)(Optional)
        The list of entity ids to be included in the recordings.

My LTSS:
The sensors DB is made and ready.

ltts
     db_url: postgresql://postgresuser:password@7cd0051d-timescaledb/sensors

    include
         
       entities
         -  sensor.temp_kamer_0x00158d0004184119_humidity
         -  sensor.temp_kamer_0x00158d0004184119_pressure
         -  sensor.temp_kamer_0x00158d0004184119_temperature

It is not working, probably something is wrong?

@PeetsHome
Have you looked at the example in the README?

  ltss:
    db_url: postgresql://USER:PASSWORD@HOST_ADDRESS/DB_NAME
      include:
          domains:
          - sensor
          entities:
          - person.john_doe

If your config above is word-for-word what you use, it seems to have several problems:

  • it doesn’t look like valid YAML (missing colons in some places)
  • it should be ltss, not ltts, for the component identifier

I wrote this message in the hospital while I was waiting for my mother, and I typed it on my mobile. I am now going to test it in my HA VM.

Ok, it is working now.
I now have a table called sensors, and with pgAdmin 4 I can view the data.
To display this in Grafana I made a successful connection.

To display my temperature/humidity sensor I made a query (with help from Grafana):

SELECT
  $__timeGroupAlias("time",$__interval),
  state AS metric,
  max(id) AS "id"
FROM ltss
WHERE
  $__timeFilter("time") AND
  entity_id = 'sensor.thgn122_123_thgn132_thgr122_228_238_268_58_01_temperature'
GROUP BY 1,2
ORDER BY 1,2

But I do not get the result I want: I see the values, but plotted against the id counter instead of against time.

You don’t need to use any “id” in your Grafana query, but you have to cast your state values to a numeric data type!
By default they are stored as VARCHAR for compatibility reasons.
So your query should look more like this:

SELECT
  $__timeGroupAlias("time",$__interval,previous)
  , 'YourTemperatureSensor'
  , MAX(state::FLOAT) AS value
FROM ltss
WHERE
  $__timeFilter("time") 
  AND entity_id = 'sensor.thgn122_123_thgn132_thgr122_228_238_268_58_01_temperature'
GROUP BY 1,2
ORDER BY 1,2

I followed your recommendations for designing a query and was able to make graphs in Grafana. Thank you for your advice!
However, I am now confronted with the problem that no graph is made when there are unavailable data points. My problem can be made clear by the following example:
In a query made in pgAdmin, datapoint nr. 1513 has the state ‘unavailable’.

When I make a graph of the last 4 days in Grafana (Query options, Relative time: 4d), the following message is shown: pq: invalid input syntax for type double precision: “unavailable”. When I choose the last 3 days, the graph is made successfully.
How can I correct this?
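
A sketch of one possible fix: filter out the non-numeric states in the WHERE clause before the cast, adapting the query from above:

SELECT
  $__timeGroupAlias("time",$__interval,previous)
  , 'YourTemperatureSensor'
  , MAX(state::FLOAT) AS value
FROM ltss
WHERE
  $__timeFilter("time")
  AND entity_id = 'sensor.thgn122_123_thgn132_thgr122_228_238_268_58_01_temperature'
  -- Skip states that cannot be cast to FLOAT
  AND state NOT IN ('unavailable', 'unknown', '')
GROUP BY 1,2
ORDER BY 1,2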