Home Assistant Add-on: PostgreSQL + TimescaleDB

Any ideas what I'm doing wrong?

I followed this guide to migrate SQLite to PostgreSQL.

This is the pgloader command file, run from another Linux machine:

load database
  from sqlite:///tmp/home-assistant_v2.db
  into postgresql://postgres:[email protected]/homeassistant
with data only, drop indexes, reset sequences, truncate, batch rows = 1000
SET work_mem to '32 MB', maintenance_work_mem to '64 MB';

The password is correct for sure, as I can use the pgAdmin add-on with the same credentials.

This is what I get when running pgloader:

sudo pgloader migrate.sql
KABOOM!
FATAL error: Failed to connect to pgsql at "192.168.1.11" (port 5432) as user "postgres": Database error: Socket error in "connect": ECONNREFUSED (Connection refused)
An unhandled error condition has been signalled:
   Failed to connect to pgsql at "192.168.1.11" (port 5432) as user "postgres": Database error: Socket error in "connect": ECONNREFUSED (Connection refused)

What I am doing here?

Failed to connect to pgsql at "192.168.1.11" (port 5432) as user "postgres": Database error: Socket error in "connect": ECONNREFUSED (Connection refused)

I'm using the default settings of this add-on, as I have no clue where to find pg_hba.conf on Home Assistant OS.

Configuration:

telemetry: basic
maxmemory: 512MB
maxcpus: 4
max_connections: 20
databases: homeassistant
timescale_enabled: homeassistant

Hi @juhaa!!

Thank you for reaching out!

The PostgreSQL instance is by default not exposed to the outside world (because that is more secure), but you can easily enable this in the settings pane of the add-on and then restart the add-on:

In the screenshot (not included here), you can see that my PostgreSQL instance is exposed to the LAN on port 45678 (because 5432 was already used by something else in HA).

So, your postgresql URL would become something like:

postgresql://postgres:[email protected]:45678/homeassistant

(assuming your homeassistant is on 192.168.1.11 on your local network).

Can you please try this?
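
To sanity-check connectivity from the machine you run pgloader on (assuming the psql client is installed there, and using the same placeholder password as above), something like this should succeed before retrying the migration:

psql "postgresql://postgres:[email protected]:45678/homeassistant" -c "SELECT version();"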


@Expaso Thank you! It did the trick, and it was simple enough even for me :slight_smile:

Now I'm able to connect to it from the LAN, and I can try to proceed with the SQLite migration.

Great to hear!
Good luck with the migration :+1:t2: :slight_smile:

Thanks :slight_smile:
Maybe another day…

/tmp$ sudo pgloader migrate.sql
KABOOM!
FATAL error: Failed to connect to pgsql at "192.168.1.11" (port 45678) as user "postgres": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).
An unhandled error condition has been signalled:
   Failed to connect to pgsql at "192.168.1.11" (port 45678) as user "postgres": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).

What I am doing here?

Failed to connect to pgsql at "192.168.1.11" (port 45678) as user "postgres": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).

Waiting for the monitor thread to complete.

The pgloader issue might be related to this one: pgloader 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8). · Issue #1460 · dimitri/pgloader · GitHub

[FEATURE] Add support for postgres scram-sha-256 authentication · Issue #1207 · dimitri/pgloader · GitHub

edit: Really not sure what I'm doing :slight_smile:
Anyway, I took a backup of this add-on to get at its files, and it looks like it's using the default scram-sha-256, as this line is commented out in the postgresql.conf file:

#password_encryption = scram-sha-256	# scram-sha-256 or md5

Not sure if there's any way to edit that file directly in HA, but could this workaround work: I take a backup, edit postgresql.conf inside it, and then restore that backup back to HA? :smiley:

This is just in case there's no easier option to get it working.
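
For reference, a possibly simpler route than editing the backup (just a sketch, untested; it assumes superuser access, e.g. via the pgAdmin add-on, and that md5 authentication is acceptable for the duration of the migration) would be to switch the setting at runtime and re-hash the password:

-- older pgloader builds cannot speak scram-sha-256, so fall back to md5
ALTER SYSTEM SET password_encryption = 'md5';
-- apply the change without restarting the server
SELECT pg_reload_conf();
-- re-set the password so it is stored as an md5 hash ('mysecretpassword' is a placeholder)
ALTER USER postgres WITH PASSWORD 'mysecretpassword';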

Ok, I think I made it… I used DBeaver to import the SQLite data and then adjusted the Postgres sequences with the statements below.
I only lost 24 hours of data, since I didn't do this yesterday, and I didn't want to puzzle out how to renumber all the ID values to avoid constraint violations while keeping the data that had already landed in Postgres after the switch.

SELECT SETVAL('public.event_data_data_id_seq', COALESCE(MAX(data_id), 1)) FROM public."event_data";
SELECT SETVAL('public.event_types_event_type_id_seq', COALESCE(MAX(event_type_id), 1)) FROM public."event_types";
SELECT SETVAL('public.events_event_id_seq', COALESCE(MAX(event_id), 1)) FROM public."events";
SELECT SETVAL('public.recorder_runs_run_id_seq', COALESCE(MAX(run_id), 1)) FROM public."recorder_runs";
SELECT SETVAL('public.schema_changes_change_id_seq', COALESCE(MAX(change_id), 1)) FROM public."schema_changes";
SELECT SETVAL('public.state_attributes_attributes_id_seq', COALESCE(MAX(attributes_id), 1)) FROM public."state_attributes";
SELECT SETVAL('public.states_state_id_seq', COALESCE(MAX(state_id), 1)) FROM public."states";
SELECT SETVAL('public.states_meta_metadata_id_seq', COALESCE(MAX(metadata_id), 1)) FROM public."states_meta";
SELECT SETVAL('public.statistics_id_seq', COALESCE(MAX(id), 1)) FROM public."statistics";
SELECT SETVAL('public.statistics_meta_id_seq', COALESCE(MAX(id), 1)) FROM public."statistics_meta";
SELECT SETVAL('public.statistics_runs_run_id_seq', COALESCE(MAX(run_id), 1)) FROM public."statistics_runs";
SELECT SETVAL('public.statistics_short_term_id_seq', COALESCE(MAX(id), 1)) FROM public."statistics_short_term";
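
As a quick sanity check (same sequence and table names as above), each sequence's position can be compared against its table's maximum ID, e.g. for states:

SELECT last_value FROM public.states_state_id_seq;
SELECT MAX(state_id) FROM public."states";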

Right now I have another solution for backing up my database: I'm running another Postgres container on a Synology NAS and using this command in the task scheduler:

docker exec -i postgres2 /bin/bash -c "PGPASSWORD=xxx pg_dumpall -h 192.168.178.xxx -p 5959 --username xxx" | gzip > /volume1/Backups/PostgreSQL/homeassistant_all.tar.gz

It creates a backup every week. Luckily I have never been in the situation of having to restore the database, so: would this command make a complete backup that I could restore without problems in case I ever have to? I only get this warning: pg_dump: warning: there are circular foreign-key constraints on this table: without more information
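
For what it's worth: despite the .tar.gz name, that file is just gzip-compressed SQL (pg_dumpall emits plain SQL, including the CREATE DATABASE statements), so restoring would be a matter of piping it back through psql into a fresh cluster. A sketch using the same placeholder host, port, and credentials as above:

gunzip -c /volume1/Backups/PostgreSQL/homeassistant_all.tar.gz | docker exec -i postgres2 /bin/bash -c "PGPASSWORD=xxx psql -h 192.168.178.xxx -p 5959 --username xxx postgres"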

Hi @drimpart ,

I've just released a new version of the add-on, in which the pgAgent scripts are also fixed.

For backups, see: How To Schedule Automatic Backups for PostgreSQL with pgAgent in pgAdmin | DigitalOcean

An example command would be:

/usr/libexec/postgresql15/pg_dump --username=postgres --dbname=homeassistant --clean --file=/backup/homeassistant-`date +%Y-%m-%d-%H-%M-%S`.sql

This would place the backups in the Home Assistant backup folder, which can even be on network storage nowadays.

Note that this backup is NOT compressed. If you need compression on the backup, see: Using compression with PostgreSQL’s pg_dump – Dan Langille’s Other Diary
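
An example using pg_dump's custom format, which is compressed by default and restored with pg_restore rather than psql (the <timestamp> in the restore line is a placeholder for an actual file name):

/usr/libexec/postgresql15/pg_dump -Fc --username=postgres --dbname=homeassistant --file=/backup/homeassistant-`date +%Y-%m-%d-%H-%M-%S`.dump
/usr/libexec/postgresql15/pg_restore --username=postgres --dbname=homeassistant --clean /backup/homeassistant-<timestamp>.dump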

Please feel free to contact me if you need any help.


Works like a charm, thank you very much!
Backup is now faster, smaller and does not interrupt data acquisition anymore. :ok_hand:

Edit: I guess "pg_agent job using script gives permission denied error in TimescaleDb" (Issue #6 · Expaso/hassos-addon-pgadmin4 · GitHub) can be closed now.


Do you back up with shell_command? Like, for example:

backup_timescaledb: sudo docker exec 5a6841163e64 /usr/libexec/postgresql15/pg_dump --username=postgres --dbname=homeassistant --clean --file=/backup/homeassistant-`date +%Y-%m-%d-%H-%M-%S`.sql

No, I followed the pgAgent instructions @Expaso posted and used the command /usr/libexec/postgresql15/pg_dump -Fc --username=postgres --dbname=homeassistant --clean --file=/share/db_backup.sql as a step in my pgAgent job, i.e. I’m performing the backup within the container, not from the outside.

Hmm, pgAgent seems to be quite unreliable in terms of marking jobs as done.
Most of the scheduled backups so far were stuck in state ‘r’ (running) and never set to ‘s’ (success), even though the pg_dump process itself worked just fine.

This means that the backup runs exactly once and then never again (since it is still "running"). I have to reset the jobs manually (see the sketch below).
Anyone else experiencing this?
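
For reference, the manual reset looks something like this (a sketch against the standard pgagent schema; it marks log entries stuck in 'r' as 'd' (aborted) and lets pgAgent recompute the schedule):

-- mark stuck log entries as aborted
UPDATE pgagent.pga_joblog SET jlgstatus = 'd' WHERE jlgstatus = 'r';
-- detach jobs from their agent and force the next-run time to be recalculated
UPDATE pgagent.pga_job SET jobagentid = NULL, jobnextrun = NULL;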

Hmm will do some more testing myself…
Do you have any additional logging?

Unfortunately no. Not much going on.

Just tried a simple echo "Hello World" job which is also stuck on ‘running’.

I’ve been fighting this one today, and have had various results.
It basically boils down to this: pgAgent runs the job just fine, but when the script finishes, the connection used to write back the job result is gone, leaving the job marked as running and thus never starting again.

I really have no clue why the connection to postgres is gone after running the script, but I am tempted to modify this piece of pgAgent to make it a whole lot more resilient…

Oh yeah: And when running with all debug logging enabled, all is just fine :wink: … great.

" but I am tempted to modify this piece of pgAgent to make it a whole lot more resilient…"

I did just that. @tldp Can you try version 3.0.2, to see if this is fixed now?


Seems to work perfectly fine now! Old jobs were marked as 'd' (aborted), and my hello-world test job running every 5 minutes has worked reliably since the update.

Many thanks for your quick fix!


Is there a way to run this Postgres with SSL (sslmode=require) support?

How can I solve this "problem" with Timescale/dsmrreader?

Error in the Timescale add-on log:

127.0.0.1 - - [14/Sep/2023:16:27:11 +0200] "GET /about HTTP/1.1" 200 15212 "-" "curl/7.83.1" "-"
127.0.0.1 - - [14/Sep/2023:16:27:26 +0200] "GET /about HTTP/1.1" 200 15212 "-" "curl/7.83.1" "-"
pg_dump: error: server version: 15.3; pg_dump version: 14.5
pg_dump: error: aborting because of server version mismatch
2023-09-14 16:27:39,965 ERROR schedule execute_scheduled_processes 39 | (AttributeError) dsmr_backup.services.backup.run errored: 'NoneType' object has no attribute 'read'
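
One way to confirm the mismatch (host and user are placeholders; run this from wherever dsmrreader's pg_dump is executed) is to compare the client and server versions:

pg_dump --version
psql -h <timescale-host> -U postgres -c "SHOW server_version;"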

Thanks in advance

Is this add-on still actively supported? Asking because there has been no new commit on the repo for five months, and this thread is pretty quiet as well.