I followed this guide to migrate SQLite to Postgres.
This is the pgloader load file I ran from another Linux machine:
load database
from sqlite:///tmp/home-assistant_v2.db
into postgresql://postgres:<password>@192.168.1.11/homeassistant
with data only, drop indexes, reset sequences, truncate, batch rows = 1000
SET work_mem to '32 MB', maintenance_work_mem to '64 MB';
The password is definitely correct, as I can log in with the pgAdmin add-on using the same credentials.
This is what I get when running pgloader:
sudo pgloader migrate.sql
KABOOM!
FATAL error: Failed to connect to pgsql at "192.168.1.11" (port 5432) as user "postgres": Database error: Socket error in "connect": ECONNREFUSED (Connection refused)
An unhandled error condition has been signalled:
Failed to connect to pgsql at "192.168.1.11" (port 5432) as user "postgres": Database error: Socket error in "connect": ECONNREFUSED (Connection refused)
What I am doing here?
Failed to connect to pgsql at "192.168.1.11" (port 5432) as user "postgres": Database error: Socket error in "connect": ECONNREFUSED (Connection refused)
I’m using the default settings of this add-on, as I have no clue where to find pg_hba.conf on Home Assistant OS.
The PostgreSQL instance is not exposed to the outside world by default (because that is more secure), but you can enable this easily in the settings pane of the add-on and then restart the add-on.
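Once the port is exposed, a quick sanity check from the machine running pgloader saves a retry cycle. psql ships with the standard postgresql-client package; host and port here are the ones from the error above, so adjust them to whatever the add-on settings show:

# Verify that Postgres actually accepts connections before re-running pgloader
psql -h 192.168.1.11 -p 5432 -U postgres -d homeassistant -c 'SELECT version();'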
/tmp$ sudo pgloader migrate.sql
KABOOM!
FATAL error: Failed to connect to pgsql at "192.168.1.11" (port 45678) as user "postgres": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).
An unhandled error condition has been signalled:
Failed to connect to pgsql at "192.168.1.11" (port 45678) as user "postgres": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).
What I am doing here?
Failed to connect to pgsql at "192.168.1.11" (port 45678) as user "postgres": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).
Waiting for the monitor thread to complete.
edit: Really not sure what I’m doing.
Anyway, I took a backup of this add-on to get at its files, and it looks like it’s using the default scram-sha-256, since this line is commented out in the postgresql.conf file:
#password_encryption = scram-sha-256 # scram-sha-256 or md5
I’m not sure whether there’s any way to edit that file directly in HA, but could this workaround work: take a backup, edit postgresql.conf inside it, and then push that backup back to HA? This is just in case there’s no easier option to get it working.
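For what it’s worth, that “10 fell through ECASE expression” error is the usual symptom of a pgloader build without SCRAM support (authentication request code 10 is SASL, and the build only understands codes 0–8). If that’s the case here, the workaround needs no file editing at all and can be done over SQL, e.g. from the pgAdmin add-on. A sketch, untested on this add-on (the password is a placeholder, and depending on pg_hba.conf the auth method there may also need to change from scram-sha-256 to md5):

-- Make new password hashes use md5 so older pgloader builds can authenticate
ALTER SYSTEM SET password_encryption = 'md5';
SELECT pg_reload_conf();
-- Re-set the password so it is re-hashed with md5
ALTER USER postgres WITH PASSWORD 'your-password-here';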
OK, I think I made it… I used DBeaver to import the SQLite data and then adjusted the Postgres sequences with the statements below.
I only lost 24 h of data, as I didn’t do it yesterday and didn’t want to puzzle out how to set all the ID values so as to avoid constraint violations while keeping the data that was already in Postgres after the switch.
SELECT SETVAL('public.event_data_data_id_seq', COALESCE(MAX(data_id), 1)) FROM public."event_data";
SELECT SETVAL('public.event_types_event_type_id_seq', COALESCE(MAX(event_type_id), 1)) FROM public."event_types";
SELECT SETVAL('public.events_event_id_seq', COALESCE(MAX(event_id), 1)) FROM public."events";
SELECT SETVAL('public.recorder_runs_run_id_seq', COALESCE(MAX(run_id), 1)) FROM public."recorder_runs";
SELECT SETVAL('public.schema_changes_change_id_seq', COALESCE(MAX(change_id), 1)) FROM public."schema_changes";
SELECT SETVAL('public.state_attributes_attributes_id_seq', COALESCE(MAX(attributes_id), 1)) FROM public."state_attributes";
SELECT SETVAL('public.states_state_id_seq', COALESCE(MAX(state_id), 1)) FROM public."states";
SELECT SETVAL('public.states_meta_metadata_id_seq', COALESCE(MAX(metadata_id), 1)) FROM public."states_meta";
SELECT SETVAL('public.statistics_id_seq', COALESCE(MAX(id), 1)) FROM public."statistics";
SELECT SETVAL('public.statistics_meta_id_seq', COALESCE(MAX(id), 1)) FROM public."statistics_meta";
SELECT SETVAL('public.statistics_runs_run_id_seq', COALESCE(MAX(run_id), 1)) FROM public."statistics_runs";
SELECT SETVAL('public.statistics_short_term_id_seq', COALESCE(MAX(id), 1)) FROM public."statistics_short_term";
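In case anyone repeats this on a different schema version: instead of listing every table by hand, a query like this generates the same SETVAL statements automatically (a sketch, assuming everything lives in the default public schema and each sequence is owned by its column):

-- Emit one SETVAL statement per column whose default draws from a sequence
SELECT format(
         'SELECT SETVAL(%L, COALESCE(MAX(%I), 1)) FROM %I.%I;',
         pg_get_serial_sequence(format('%I.%I', table_schema, table_name), column_name),
         column_name, table_schema, table_name)
FROM information_schema.columns
WHERE table_schema = 'public'
  AND column_default LIKE 'nextval(%';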
Right now I have another solution for backing up my database: I’m running another Postgres container on a Synology NAS and using this command in the task scheduler:
It creates a backup every week. Thankfully I’ve never been in a situation where I had to restore the database, so: would this command make a complete backup that I could restore without problems if I ever need to? The only thing I get is this warning, with no further information: pg_dump: warning: there are circular foreign-key constraints on this table:
No, I followed the pgAgent instructions @Expaso posted and used the command /usr/libexec/postgresql15/pg_dump -Fc --username=postgres --dbname=homeassistant --clean --file=/share/db_backup.sql as a step in my pgAgent job, i.e. I’m performing the backup within the container, not from the outside.
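For completeness, the matching restore for that custom-format (-Fc) dump would look roughly like this (an untested sketch; the pg_restore path is assumed to sit next to pg_dump in the container):

# --clean drops objects before recreating them, --if-exists silences errors on
# a fresh database, -1 wraps the whole restore in a single transaction
/usr/libexec/postgresql15/pg_restore --username=postgres --dbname=homeassistant \
    --clean --if-exists -1 /share/db_backup.sql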
Hmm, pgAgent seems to be quite unreliable in terms of marking jobs as done.
Most of the scheduled backups so far were stuck in state ‘r’ (running) and never set to ‘s’ (success), even though the pg_dump process itself worked just fine.
This means that the backup runs exactly once and then never again (since it is still “running”). I have to reset the jobs manually.
Anyone else experiencing this?
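If you want to unstick them without clicking through pgAdmin, something like this should do it (an untested sketch; table and column names are from the stock pgagent schema):

-- Close the orphaned 'r' (running) log rows so the scheduler treats the job
-- as runnable again, and release the agent's claim on the job
UPDATE pgagent.pga_joblog SET jlgstatus = 'f' WHERE jlgstatus = 'r';
UPDATE pgagent.pga_job SET jobagentid = NULL WHERE jobagentid IS NOT NULL;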
I’ve been fighting this one today, and have had various results.
It basically boils down to pgAgent running the job just fine, but by the time the script finishes, the connection used to write back the job result is gone, leaving the job stuck on “running” and thus never starting again.
I really have no clue why the connection to postgres is gone after running the script, but I am tempted to modify this piece of pgAgent to make it a whole lot more resilient…
Oh yeah: and when running with all debug logging enabled, everything works just fine… great.