Home Assistant Add-on: PostgreSQL + TimescaleDB

I’m running on an RPi 3 with an extra USB HDD. Everything works smoothly except pgAdmin4.

I have just released version 3.0.1. Can you try with that one please?

Please upgrade the addon to 3.0.1 and try that one. I have upgraded pgAdmin to the latest 8.2.

No more warnings, thank you! :partying_face:


Amazing add-ons! I had Postgres, pgAdmin, and LTSS running quite quickly, and I am now going through the documentation and this thread to figure out how to install PostGIS and how it works, make sure backups are working, connect Grafana, and draft a first dashboard…
Question - has anyone already documented a way to migrate some years of InfluxDB records to that Postgres DB?
I’d love not to have my long-term state storage split across 2 separate DBs.
Thank you!

Hi @cben0ist ,

Yeah, some people have successfully used Outflux to port the data over to timescale: GitHub - timescale/outflux: Export data from InfluxDB to TimescaleDB

I know of this documented case: Migrating from InfluxDB to TimescaleDB on a Raspberry Pi - Share your Projects! - Home Assistant Community (home-assistant.io)

Hi.

Has anyone actually tried to restore data dumped from this into e.g. a locally running Postgres server?

I am having huge problems with that. All I wanted was to run a local DB in Docker and restore the dumped backup, but I am unable to find a suitable Docker image with timescaledb+postgis, and without that the restore fails. I was hoping I could use the addon image, but without success - it fails on startup because it expects the HA API.

Thanks!

(author here) Perhaps I can assist.
Can I send you a Discord invite so we can discuss your use case?

Sure, thanks!

Btw, my goal is simple. I want to validate that I can recover from a DB failure using the DB dumps I take every day.

I found out that it’s harder than I thought due to requiring both timescaledb+postgis. From the lack of any mention in this thread I suspect not many people have actually done this.
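For anyone else validating a restore, a few sanity-check queries can help. This is a hedged sketch: it assumes the default `ltss` table name created by the LTSS integration, and TimescaleDB 2.x information views.

```sql
-- Quick sanity checks after restoring a dump into a fresh server:
SELECT count(*) FROM ltss;                    -- does the row count match the source?
SELECT max(time) FROM ltss;                   -- is the most recent sample present?

-- Were hypertables restored as hypertables (not plain tables)?
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables;

-- Are both required extensions installed at compatible versions?
SELECT extname, extversion FROM pg_extension;
```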

You can reach me on this discord server:

Just grab a channel.

Update: Expaso added a so-called standalone mode, making my manual script handling unnecessary. Thanks a lot!

Hi! With @expaso’s help, I managed to recover the data fine and documented it in this repository: GitHub - hnykda/backup-restore-ha-timescaledb: A repository with tools and examples to backup, validate and restore postgresql+timescaledb with LTSS for HA. Thanks @expaso!


Addon Version 4 has arrived, finally!! :partying_face:

This release contains the long requested and awaited timescaledb-toolkit!

If you use the timescaledb-extras scripts, please apply them again after upgrading, because the upgrade procedure in the addon has to remove some objects that would otherwise block the upgrade of TimescaleDB.

If you have any questions or difficulties with the upgrade, please let me know. I will be happy to assist.

Also: I take feature requests seriously, so make sure you create a ticket for them on GitHub: Issues · Expaso/hassos-addon-timescaledb (github.com).

Enjoy!

Kind regards, Hans


Fantastic work Expaso, truly appreciated! I’m giving it a whirl for the first time today.

Everything went smoothly setting it up, apart from the fact that I first attempted to create a new user (homeassistant) and gave it only regular (read/write) access in Postgres. This resulted in the recorder not being able to load properly, so I reverted to the postgres admin user. All good after that, but I’m not sure if it was a missing privilege in Postgres or some config in HA.

Anyway, proceeding with testing :slight_smile:

Few questions:

Today I run the SQLite database and only keep three days, but I also scrape with Prometheus because of the longer-term stats I want to retain while keeping the native HA DB down in size.

With this, could I leverage PSQL+TSDB to replace SQLite and decouple my Prometheus from this? I won’t be getting rid of Prometheus, as I use it for other monitoring efforts in the home ecosystem.

Is there any good documented procedure to “slurp” in all the existing SQLite records into this PSQL+TSDB model?

I read about Long Time State Storage from a HACS add-on - is this still required? Wouldn’t HA take care of long-term stats natively with the SQL stuff they already do?

Thanks!

Hi @pyrodex!

This addon, in combination with LTSS, replaces the SQLite database entirely.

The SQLite database is used by the Home Assistant recorder, which you can point at the TimescaleDB addon with a connection string.
The recorder records, say, 3 days of detailed data; what now goes into your SQLite then goes into Postgres (this addon) instead.

Next to that, you install LTSS (which is literally a beefed-up version of the recorder component) and point it ALSO at a secondary database of this addon.
Now, LTSS does not delete data. So all the detailed data that goes into the recorder also goes into LTSS.

But… within the TimescaleDB addon, you can process that data by means of compression and/or down-sampling (continuous aggregates) of this detailed data, so you can do fast statistics on it.
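To make that concrete, here is a hedged sketch of a continuous aggregate plus a compression policy. It assumes the default `ltss` table created by the LTSS integration; the view name and intervals are illustrative, not the addon’s own setup.

```sql
-- Down-sample LTSS sensor readings to hourly averages.
-- The WHERE clause skips states that are not plain numbers.
CREATE MATERIALIZED VIEW ltss_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       entity_id,
       avg(state::float) AS avg_state
FROM ltss
WHERE state ~ '^-?[0-9]+(\.[0-9]+)?$'
GROUP BY bucket, entity_id;

-- Enable native compression on the raw table,
-- compressing chunks older than two weeks:
ALTER TABLE ltss SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'entity_id'
);
SELECT add_compression_policy('ltss', INTERVAL '14 days');
```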

Personally, I have run this addon for almost 4 years now in my own home setup on a Raspberry Pi 4. The size of my database is around 53GB, and I have NEVER deleted any data, nor do I have any performance issues. My dashboards are lightning fast, all thanks to TimescaleDB and its compression + aggregate features.

Edit: to answer your question: yes, there is a small overlap with HA’s own Long Term Statistics, but those statistics are accumulated in a different way and are locked into HA. With this addon, you can take data from any source and combine it with SQL. This is needed if you want to build your own custom dashboards.

Note that since you are replacing SQLite, ALL of HA’s data is also stored in a PostgreSQL database within the addon. HA’s Long Term Statistics are just tables within the addon’s database.
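As an illustration, you can query those statistics tables directly. The table and column names below match recent HA recorder schemas but should be treated as assumptions, and the entity name is hypothetical:

```sql
-- HA's long-term statistics live in "statistics"; entity names
-- are looked up via "statistics_meta". start_ts is a Unix epoch.
SELECT sm.statistic_id,
       to_timestamp(s.start_ts) AS bucket_start,
       s.mean, s.min, s.max
FROM statistics s
JOIN statistics_meta sm ON sm.id = s.metadata_id
WHERE sm.statistic_id = 'sensor.outside_temperature'  -- hypothetical entity
ORDER BY s.start_ts DESC
LIMIT 24;
```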

And here is the documentation of it:

Long- and short-term statistics | Home Assistant Data Science Portal (home-assistant.io)

Oh yeah, use pgLoader to migrate existing data if you like: SQLite to Postgres — pgloader 3.6.9 documentation

All in all: it all works beautifully together, each part complementing the other. This is the real power of Home Assistant, right here :wink:

Just thinking: Could it be that your user did not have the rights to create or alter the schema? You could try to add those rights, and try again…
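For reference, here is a sketch of the grants that usually make a dedicated recorder user work. It assumes a role and database both named `homeassistant`; note that on PostgreSQL 15+ the CREATE right on the `public` schema is no longer granted to everyone by default, which is a common cause of this exact failure:

```sql
-- Run as the postgres superuser:
GRANT ALL PRIVILEGES ON DATABASE homeassistant TO homeassistant;

-- The recorder creates its own tables, so it needs CREATE on the schema:
GRANT CREATE, USAGE ON SCHEMA public TO homeassistant;

-- Access to tables that already exist, and to future ones:
GRANT ALL ON ALL TABLES IN SCHEMA public TO homeassistant;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO homeassistant;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT ALL ON TABLES TO homeassistant;
```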

@Expaso Is the normal HA data optimized with this addon or do you need to use LTSS to benefit from it?

I.e. without any extra configuration (other than installation) will this be more performant (moar speed and less size) than for example MariaDB?

Has anyone migrated from MySQL to Postgres? I’ve seen there is the pgloader app that can be installed, but maybe there is an easier solution.

Otherwise I would deploy an additional Postgres server with pgloader, then backup and restore into this add-on.

Hi Expaso

I have an HA install with TimescaleDB Addon v 2.1.1 .

My system tells me I can upgrade to 4.0.1 - great. But when I tried this I got a message saying my DB was incompatible.

This could have been made trickier by a lack of disk space. I eventually got it back to the same place.

I’ve now upgraded to a bigger disk, but I don’t want to risk pressing the upgrade button in HA in case it blows up again. Obviously, I could backup, try, restore - but even that feels risky.

Do you think I can upgrade from 2.1.1 to 4.0.1? If not, what is the procedure?

Thanks

Martin Green

A question… Which tables should I clear when excluding entities that don’t make sense to store?

I’m just at the beginning of this process, as I noticed my DB is getting too large with all the nonsense :slight_smile:

This is my current configuration:

recorder:
  db_url: !secret psql_string
  db_retry_wait: 15 # Wait 15 seconds before retrying
  exclude:
    domains:
      - automation
      - updater
    entity_globs:
      - sensor.weather_*
      - sensor.*rssi
      - sensor.*_voltage
    entities:
      - sun.sun # Don't record sun data
      - sensor.last_boot # Comes from 'systemmonitor' sensor platform
      - sensor.date
    event_types:
      - call_service # Don't record service calls

## LTSS
ltss:
  db_url: postgresql://xxxxxxxxxx:xxxxxxxxxxxxxxxxx@77b2833f-timescaledb/homeassistant
  include:
    domains:
    - sensor
  exclude:
    domains:
      - automation
      - updater
    entity_globs:
      - sensor.weather_*
      - sensor.*rssi
      - sensor.*_voltage

When deleting the existing entries, which tables do I need to clear?

Currently deleting from this table:

DELETE FROM public.ltss
WHERE entity_id like 'sensor.%_voltage';
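The recorder’s own data lives in separate tables. Below is a hedged sketch of clearing the same entities there, assuming the schema of recent HA versions where entity IDs live in `states_meta`; back up first, and note the recorder normally purges these itself after its retention period:

```sql
-- Delete the state rows for the excluded entities…
DELETE FROM states
WHERE metadata_id IN (
  SELECT metadata_id FROM states_meta
  WHERE entity_id LIKE 'sensor.%_voltage'
);
-- …then the now-orphaned metadata rows:
DELETE FROM states_meta
WHERE entity_id LIKE 'sensor.%_voltage';

-- Reclaim disk space afterwards:
VACUUM (ANALYZE) states;
```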

Then how about these? There is a nice chunk of data in them:

[screenshot of table sizes]