Home Assistant Add-on: PostgreSQL + TimescaleDB

Hi Martin!

Thank you so much!

Yes, you can delete the homeassistant database if you’re not using it.
Don’t forget to also remove it from the config when you do.

Another way is to move your Home Assistant database from SQLite to TimescaleDB :wink:
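
For reference, pointing the Home Assistant recorder at this addon instead of SQLite is mostly a matter of setting db_url in configuration.yaml. A minimal sketch, assuming the default postgres user; the hostname, password and database name below are placeholders derived from the addon slug seen later in this thread, so use the values from your own setup:

recorder:
  # host/user/password are placeholders, not official addon values
  db_url: postgresql://postgres:mysecretpassword@77b2833f-timescaledb/homeassistant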

This looks like a great alternative to the SQLite DB, thanks for creating it!

I just did a fresh install of both the TimescaleDB and pgAdmin4 addons on my HassOS VM, and I have a couple of questions about what I see in the logs:

[12:42:04] INFO: Enabling Timescale Extension for database: 'homeassistant'
NOTICE:  extension "timescaledb" already exists, skipping
CREATE EXTENSION
[12:42:04] INFO: Upgrading PostGIS for database: 'homeassistant'
2022-10-05 16:42:04.800 UTC [529] ERROR:  function postgis_extensions_upgrade() does not exist at character 8
2022-10-05 16:42:04.800 UTC [529] HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
2022-10-05 16:42:04.800 UTC [529] STATEMENT:  SELECT PostGIS_Extensions_Upgrade();
ERROR:  function postgis_extensions_upgrade() does not exist
LINE 1: SELECT PostGIS_Extensions_Upgrade();
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
[12:42:04] INFO: done
[12:42:04] NOTICE: TimescaleDb is running!
2022-10-05 16:43:00.133 UTC [551] LOG:  the "timescaledb" extension is not up-to-date
2022-10-05 16:43:00.133 UTC [551] HINT:  The most up-to-date version is 2.8.0, the installed version is 2.6.0.
2022-10-05 16:55:46.700 UTC [784] FATAL:  password authentication failed for user "postgres"
2022-10-05 16:55:46.700 UTC [784] DETAIL:  Connection matched pg_hba.conf line 99: "host    all             all             0.0.0.0/0               md5"

So, the questions I’m hoping you can help me with are:

password authentication failed for user "postgres"

This happened as soon as I changed the postgres password (by right-clicking the login in pgAdmin4).
Is there somewhere in a config I need to enter the new password?

The most up-to-date version is 2.8.0, the installed version is 2.6.0

Is the extension not up to date, or is this just a dependency which hasn’t been bumped yet?

function postgis_extensions_upgrade() does not exist

Are these error messages related to PostGIS significant?


Thanks for any help!

Hi Expaso, thanks for the feedback.

I have noticed that running TimescaleDB with only one empty database configured is triggering a lot of disk activity - it seems to be around 11 MB/s.

Any idea what is causing the high disk use?

(Running the SSD trim command with TimescaleDB either enabled or disabled reveals a lot of data to be trimmed only while TimescaleDB is running.)

Hi Martin,

I haven’t noticed this before… Can you spot which files in the container this might be coming from?
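
In case it helps narrow this down, a rough (and non-authoritative) way to check from the database side is to look at the standard PostgreSQL statistics views, which show whether steady writes come from checkpoint/background-writer activity or from queries spilling to temporary files:

-- write activity driven by checkpoints and the background writer
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint,
       buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

-- queries spilling to temporary files, per database
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
WHERE datname IS NOT NULL;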

Hi Expaso,
I don’t think my question relates directly to this fine addon, but maybe I can get a kick in the right direction. I recently moved to HAOS on an Odroid N2+, coming from a Synology NAS. I have a 750MB database.bak (from a Teslamate database) and I don’t know how to restore it into a Postgres database on my Odroid. The root of my problem is that I don’t know where the pgAdmin root folder is located, so I can put my .bak file somewhere pgAdmin can read it. Can you - or maybe anyone else - help me?
Kind regards
Anders

Lately I get these errors from the TimescaleDB addon, and the memory slowly rises:

2022-11-06 17:41:02.599 UTC [1300] ERROR:  out of shared memory
2022-11-06 17:41:02.599 UTC [1300] HINT:  You might need to increase max_locks_per_transaction.

Any idea how to fix it? Can I add max_locks_per_transaction: xx in the addon configuration? And if so, what value should I use?

Hi @askpet ,

The best way is to take a backup of the TimescaleDB addon using the normal Home Assistant tooling, and restore that backup on your new Odroid.

The backup you have taken from within the pgAdmin addon will probably be stored inside the pgAdmin container. That data will be gone after a restart of the addon, unless you store it in the /share folder.

This /share folder can be accessed from outside of the addon, for instance via the Samba addon (so you can reach it as a network share), or via SSH if you prefer.
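
As an illustration only (not the addon’s official procedure): once the Teslamate database.bak sits in /share, a restore could look roughly like this from a shell that can reach the database. The hostname, user and target database name are placeholders, and the .bak is assumed to be a pg_dump custom-format archive:

# <timescaledb-host> is a placeholder for the addon's hostname or IP
createdb -h <timescaledb-host> -U postgres teslamate
pg_restore -h <timescaledb-host> -U postgres -d teslamate --no-owner /share/database.bak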

[EDIT] Updated comment to recommend the share folder.

Hey @drimpart,

What hardware are you using? (And how much memory do you have?)

Hey @Expaso, thanks for getting back to me! My hardware is quite powerful, which is why it currently has no real influence on the rest of Home Assistant:

Intel NUC 11 - Core i5 - 16GB RAM
Generic x86-64 / HA OS

I was just wondering, because I found other posts here from people seeing memory leaks lately. I made template sensors for all my addons and saw that the only addon with rising memory is the TimescaleDB addon:

  - platform: command_line
    name: TimescaleDB CPU
    # query the Supervisor API for the addon's resource stats
    command: 'curl -sSL -H "Authorization: Bearer $SUPERVISOR_TOKEN" http://supervisor/addons/77b2833f_timescaledb/stats'
    unit_of_measurement: '%'
    # cpu_percent is divided by the number of cores (4 here) to normalise it
    value_template: "{{ (value_json.data.cpu_percent|float/4)|round(2) }}"
    scan_interval: 60
    json_attributes:
      - data
  - platform: template
    sensors:
      timescale_memory:
        friendly_name: "TimescaleDB RAM"
        # read memory_percent from the data attribute of the sensor above
        value_template: "{{ (state_attr('sensor.timescaledb_cpu', 'data')|to_json|from_json).memory_percent }}"
        unit_of_measurement: '%'

I got this NUC around two weeks ago and installed everything completely fresh, so I was a bit nervous when I saw the system memory rising. Right now it’s just 1 or 2 percent in 24 hours, so I can’t really say how it behaves in the long term; maybe that is just normal behaviour for a running TimescaleDB and it frees memory again after a few days?

I tried a couple of things, like setting a higher value for max_locks_per_transaction: in the addon options or turning off telemetry, but the memory is still rising. I should mention that I have max_connections: 80 because I have a couple of SQL sensors and Grafana dashboards, but I guess with my setup that should be no problem? My LTSS integration only includes 42 entities. I also have two other databases running in this addon that get updated from another machine once a day without a problem.
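
For anyone wanting to sanity-check the same things, these are standard PostgreSQL queries (nothing addon-specific) that show how many connections are actually open and what the relevant settings currently are:

-- open connections, grouped by user
SELECT usename, count(*) FROM pg_stat_activity GROUP BY usename;

-- current values of the settings discussed above
SHOW max_connections;
SHOW max_locks_per_transaction;
SHOW shared_buffers;
SHOW work_mem;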

[Screenshot, 2022-11-10 21:08]

Love this addon, it feels much better than using an SQLite DB.

However, today I upgraded to 2.1.0 and also did an HA Core 11.2 upgrade. Now I’m getting the following error:
2022-11-12 22:12:46.863 ERROR (Recorder) [homeassistant.components.recorder.util] Error executing query: (psycopg2.errors.UndefinedFile) could not load library “/usr/lib/postgresql14/llvmjit.so”: Error loading shared library libLLVM-12.so: No such file or directory (needed by /usr/lib/postgresql14/llvmjit.so)

I’ve checked the timescaledb container and the file is there. I’ve deactivated JIT for now, which solves the problem, but I’d rather have a real fix. Anyone else facing this with the newest version?
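
For reference, “deactivating JIT” can be done with a standard PostgreSQL setting; a minimal sketch, assuming superuser access via psql or pgAdmin:

-- turn off JIT compilation cluster-wide as a temporary workaround
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();
-- once the real fix lands: ALTER SYSTEM RESET jit; SELECT pg_reload_conf();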

I have the same issue with some queries in Grafana.

There is already a bug report filed for that: Error loading shared library libLLVM-12.so · Issue #27 · Expaso/hassos-addon-timescaledb · GitHub

Thanks for filing the bug. I have put my workaround there.

I am creating a fix for this issue now.

It seems it’s missing a crucial package. See the bug report for possible workarounds.

A bugfix is out!
v2.1.1 should fix this issue, and any workarounds can be reverted.

I have run the addon unattended for months and it seems to behave properly. Please note that Postgres uses shared buffers to cache data, but if that memory is really needed by HassOS, it will be released back to the OS.

Do you observe any issues when you simply let this run?

I have seen constant growth in database size over the 1.5 months I have been using it, even though the states table (which is the largest in my database) only contains 10 days of history. The normal Home Assistant purge and repack doesn’t seem to give back disk space. Does anybody use pg_repack for this, or do you have other alternatives?

Do you only use this database for the recorder? Or do you also have the LTSS integration?

I use both in the same database.

– Edit –
Ah. I think I see your underlying point :slight_smile: pg_size_pretty() only gives the size of a normal table, whereas hypertable_size() gives the size of the hypertable LTSS is using (see the query sketch below). The LTSS table is by far the largest and accounts for the increasing size.

– Edit #2 –
Maybe I should look into downsampling or subsampling my smart meter data. Recording and storing a value every 5 seconds is a bit too much.
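
For completeness, this is roughly how the two size checks differ (assuming the recorder’s states table and an LTSS hypertable called ltss; adjust the names to your schema):

-- plain relation size, e.g. for the recorder's states table
SELECT pg_size_pretty(pg_total_relation_size('states'));

-- TimescaleDB-aware size of the LTSS hypertable, including all chunks
SELECT pg_size_pretty(hypertable_size('ltss'));

-- total size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));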

Well, that accounts for the size! :slight_smile:

A 5-second interval is fine. I record some sensors every 500ms, BUT… since LTSS is NOT removing data (that’s its purpose), you should indeed think about downsampling or compression.

Please read Pinned: Additional Documentation for Timescaledb Integration · Issue #1 · Expaso/hassos-addons (github.com) for more details.
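
As an illustration of the compression route (not taken from the linked documentation, so treat the table and column names as assumptions - LTSS typically stores its data in a hypertable called ltss with an entity_id column):

-- enable native TimescaleDB compression on the LTSS hypertable
ALTER TABLE ltss SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'entity_id'
);

-- automatically compress chunks older than 30 days
SELECT add_compression_policy('ltss', INTERVAL '30 days');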

Can the normal HA backup be done on a running system, or should the database be shut down before running the backup?