Home Assistant Add-on: PostgreSQL + TimescaleDB

This should do the trick:

-- Rain per time bucket, computed as the difference between the highest and
-- lowest sensor reading within that bucket.
SELECT
    $__timeGroupAlias("time",$__interval)   -- Grafana macro: group by the panel interval
    , 'Hoeveelheid neerslag'                -- series label (Dutch for "amount of precipitation")
    , MAX(state::FLOAT) - MIN(state::FLOAT) AS value
FROM ltss
WHERE
    $__timeFilter("time")                   -- Grafana macro: restrict to the dashboard time range
    AND entity_id = 'sensor.regensensor_hoeveelheid_regen'
    AND state NOT IN ('', 'unavailable')    -- skip states that cannot be cast to FLOAT
GROUP BY 1,2
ORDER BY 1,2

And change your time interval as described here:
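A note in passing: the second argument of $__timeGroupAlias also accepts a fixed interval, so the bucket size can be pinned in the query itself instead of in the panel options. A sketch, assuming Grafana's PostgreSQL data source macros:

SELECT
    $__timeGroupAlias("time",'1h')   -- fixed 1-hour buckets instead of the $__interval macro
    -- (rest of the query as above)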

So… I searched and searched the Internet, and your query deviates only slightly from my original one!
However, there is still a complication. See the Grafana graph below:

All bars show a value of 0.5 mm, whereas in some hourly periods more rain has fallen (yesterday at 20:00 h, 1.0 mm; last night at 04:00 h and 05:00 h, 1.0 mm; in the last hour, 1.0 mm). What could be the reason that only 0.5 mm is shown in the graph?

Did you change the time interval in the query options to 1h as well?
What does the data look like when you make a query directly on your table?

I also wonder whether this is getting off-topic and still has anything to do with the actual HA add-on.
Maybe we should open a separate thread for Grafana with Timescale or something?


I made two Grafana graphs:
Graph 1: time interval in query options:

Graph 2: time interval directly in the query:

Both lead to the same result.
Your question (what does the data look like when you make a query directly on your table?) is answered by Graph 2, I suppose?

I agree with you that it might be better to open a separate thread. This afternoon, when I started this question, I chose this thread because I knew that you and also @Expaso are active here.

What I meant by “make a query directly on your table” is that I would be interested to see what the data looks like when you look at it directly in the table without Grafana, i.e. with any database tool.
It’s hard to guess what’s going on in the background if we don’t know the actual data.
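Something as simple as the following would already help; a sketch, assuming the default ltss schema:

SELECT "time", entity_id, state
FROM ltss
WHERE entity_id = 'sensor.regensensor_hoeveelheid_regen'
ORDER BY "time" DESC
LIMIT 50;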

Would you start a new thread?

I opened a new thread (see: Making graphs in Grafana from PostgreSQL/TimescaleDB databases).

Make a query directly on the table! How do I do that? On my Home Assistant system I have pgAdmin running (as an add-on), but I have the impression that pgAdmin is not suitable for carrying out the task you suggest. Are there other possibilities?
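For what it's worth, pgAdmin should actually be able to do this: its Query Tool runs plain SQL against the database. A sketch of an hourly aggregate to compare against the Grafana bars, assuming the default ltss schema (time_bucket is TimescaleDB's grouping function):

SELECT time_bucket('1 hour', "time") AS hour,
       MAX(state::FLOAT) - MIN(state::FLOAT) AS rain_mm
FROM ltss
WHERE entity_id = 'sensor.regensensor_hoeveelheid_regen'
  AND state NOT IN ('', 'unavailable')
GROUP BY 1
ORDER BY 1 DESC
LIMIT 24;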


Hi Expaso,

Just wanted to let you know that, after transferring my HA installation to a Pi 4B 4 GB, I still get the same error message: Error relocating /usr/lib/postgresql/postgis-3.so: __time64: symbol not found.

So unless I did something really stupid, it looks like there is a bug somewhere at install time with postgis-3.so. The weird thing is that I seem to be the only one getting this error, so maybe I did do something really stupid after all…

Hi @Motik68,

You need the 64-bit image for your Pi 4.
Only a 64-bit operating system can make full use of your 4 GB of memory.

Raspberry Pi - Home Assistant (home-assistant.io)

Then it should work fine 🙂


Hi,
I really appreciate your work. Is it possible to enhance the TimescaleDB within hass.io with, say, plpythonu? I would need this extension to bring intelligent features into the database using functions.

Toni
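For illustration of what this would enable, a minimal PL/Python function; purely a sketch, assuming the extension (plpython3u on recent PostgreSQL versions) were bundled with the add-on:

CREATE EXTENSION IF NOT EXISTS plpython3u;

-- classify a rainfall reading in Python, inside the database
CREATE OR REPLACE FUNCTION classify_rainfall(mm double precision)
RETURNS text
AS $$
    if mm is None or mm <= 0:
        return 'dry'
    return 'light' if mm < 2.5 else 'heavy'
$$ LANGUAGE plpython3u;

SELECT classify_rainfall(1.0);  -- returns 'light'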

Hmm, why not!
I have to do a major upgrade soon anyway, because I plan to move to PostgreSQL 13.

Out of curiosity, what intelligent features are you thinking of?

I had to restore a complete snapshot of Home Assistant today, and after that I get a lot of these errors from the ltss component:

2021-06-30 20:50:55 ERROR (LTSS) [custom_components.ltss] Error executing query: (psycopg2.errors.InternalError_) right sibling's left-link doesn't match: block 44350 links to 44878 instead of expected 1965 in index "_hyper_1_6_chunk_ltss_entityid_time_composite_idx"

[SQL: INSERT INTO ltss (time, entity_id, state, attributes, location) VALUES (%(time)s, %(entity_id)s, %(state)s, %(attributes)s, ST_GeomFromEWKT(%(location)s)) RETURNING ltss.id]
[parameters: {'time': datetime.datetime(2021, 6, 30, 18, 50, 55, 619863, tzinfo=datetime.timezone.utc), 'entity_id': 'sensor.nuki_connection', 'state': 'True', 'attributes': '{"rssi": -61, "friendly_name": "NUKI Connection"}', 'location': None}]
(Background on this error at: http://sqlalche.me/e/13/2j85)

After a bit of research, I found out it might be fixed via REINDEX TABLE ltss; but when I run this, I get this error:

ERROR:  deadlock detected
DETAIL:  Process 4691 waits for AccessExclusiveLock on relation 26541 of database 17639; blocked by process 2015.
Process 2015 waits for AccessShareLock on relation 18740 of database 17639; blocked by process 4691.
HINT:  See server log for query details.
SQL state: 40P01

Does anybody know how to get rid of these errors, or do I have to reinstall and lose all database content?

Hi @drimpart ,

It looks like your database index got corrupted. A reindex fixes this, but it could take some time depending on the size of your data.

The deadlock comes from the reindex job competing with Home Assistant trying to insert.
Please check whether the reindex job completed successfully; if it did, you can ignore this error.
If not, disable ltss first, run the reindex job, and after that enable ltss again.
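In other words, something along these lines; the chunk index name is taken from the error message above:

-- with ltss disabled, so nothing competes for locks:
REINDEX TABLE ltss;

-- or, possibly faster, rebuild only the index reported as corrupt:
REINDEX INDEX _hyper_1_6_chunk_ltss_entityid_time_composite_idx;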

Thanks for your reply @Expaso! My database is already 10 GB, so it could take a while 🙂 I tried VACUUM ltss before and it took around 16 minutes. How can I disable ltss? Do I just have to stop the TimescaleDB add-on, or should I delete the ltss entry in configuration.yaml?

Hi All,

I back up my TimescaleDB with Home Assistant Google Backup. Now I want to restore it on a different RPi with HA, but after it is restored and I want to start the add-on, I get this error:

Also, when I use a snapshot backup of HA itself, I get the error below after restoring.

2021-10-14 08:43:10.356 UTC [411] FATAL:  database files are incompatible with server
2021-10-14 08:43:10.356 UTC [411] DETAIL:  The database cluster was initialized without USE_FLOAT8_BYVAL but the server was compiled with USE_FLOAT8_BYVAL.
2021-10-14 08:43:10.356 UTC [411] HINT:  It looks like you need to recompile or initdb.
2021-10-14 08:43:10.357 UTC [411] LOG:  database system is shut down

What can I do about this?

I did a workaround for now:
dumped the database, moved it via FTP to the new RPi, and restored the dump into the add-on.

But it would be nice to back up the whole add-on without this workaround… If HA or the system crashes, I can’t do the workaround…

Postgres is, like, a REAL database, not just an on-disk SQLite file or anything else. I believe backing up and restoring this database will always have to be done separately from the snapshot procedure. I don’t think it’s reasonable to expect HA to implement a Postgres dump procedure into the backup process.

Hi @poudenes!

I suspect you moved your database from one CPU architecture to another, perhaps because you upgraded from an RPi 3 to an RPi 4?

This indeed gives you issues if you simply copy the files over. There is nothing I can do to prevent that.

I had a chat with Frenck a while ago to see if I could hook into the ‘snapshot’ process to do a pg_dump, but that was not easily feasible. So snapshots can only be used to restore on the same architecture.

To back up the database and restore it across architectures or versions, as @amirite suggested, use the Postgres pg_dump tool: PostgreSQL: Documentation: 9.1: SQL Dump

The docs describe the exact case about backup/restore between different architectures.

None of those suggestions is right:
Same hardware, from RPi 4 to RPi 4.
Both Debian Buster, 64-bit.
Only the HA Docker on the production RPi is one version earlier than on the test RPi 4.

That cannot be the issue.

Now I have created a bash script and added it to cron.
The output goes into /usr/share/hassio/share/.

Backups also include this folder, so I always have a backup with a dump file.
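For anyone wanting to do the same, a minimal sketch of such a script; the host, port, and database name are assumptions, so adjust them to your add-on's settings (and the PostgreSQL client tools must be installed where it runs):

#!/bin/bash
# nightly plain-SQL dump of the add-on database into the HA share folder;
# credentials are expected in ~/.pgpass
pg_dump -h localhost -p 5432 -U postgres homeassistant \
    > /usr/share/hassio/share/homeassistant_$(date +%F).sql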

Hmm… I did not update the add-on in between, so you should have received the exact same binaries…

In that case, it’s a mystery to me. There must be a difference in the binaries somehow.

Someone here told me that Postgres is a real database: you always have to back up via a dump and restore from that dump.

Now I'm testing this out… for now it seems to work.

Hahaaaa… yeah well, it’s a real database for sure.
Systems like SQL Server and Oracle all have their own backup methods; pg_dump / pg_restore are the tools for Postgres.

Please read the docs carefully, because pg_dump can be used with the -Fc switch to compress the backup file. This might be beneficial.
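For example (host and database name are placeholders):

# compressed, custom-format backup
pg_dump -h <host> -U postgres -Fc -f homeassistant.dump homeassistant

# a custom-format dump is restored with pg_restore
# (plain-SQL dumps go through psql instead)
pg_restore -h <host> -U postgres -d homeassistant homeassistant.dump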