PostgreSQL Database Integration and Others

Thanks so much to everyone yesterday in the ‘Configuration’ section of the forum; it was a big help and a great solution to my problem of getting my GCE IPX controller communicating with HASS.

At the same time I also managed to get the internal SQLite db migrated over to an external PostgreSQL db, as the home-assistant_v2.db file had grown to over 12 GB and things had started to become really slow.

My initial thought was to use MySQL, but after lots of reading it turns out that HASS supports many different types of DB, and among them is my favorite, Postgres.

Here is the list of links in case anyone is interested:

https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL

https://docs.sqlalchemy.org/en/13/errors.html#error-gkpj

https://www.google.com/search?client=firefox-b-d&q=postgres+rename+table

So basically here is what I have done:

1. The first thing was to dump the SQLite db after shutting down HASS:
sqlite3 <database_file> .dump > /the/path/to/sqlite-dumpfile.sql
2. Next, convert the database over to pgsql using pgloader (tip: make sure it is pgloader v3).
2.1 First create the database in Postgres along with the hass user:
create database <db_name>;
create user <uname> with password '<pass>';
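Depending on how your Postgres is set up, the new user may also need privileges on the database; a minimal grant (still inside psql, using the placeholders above) would be:

grant all privileges on database <db_name> to <uname>;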

2.2 Then create a command file for pgloader (adjust the source file and the postgresql connection string to match the database and user created above):

load database  
    from 'db.sqlite3'  
    into postgresql:///testdb 

with include drop, create tables, create indexes, reset sequences  

set work_mem to '16MB', maintenance_work_mem to '512 MB';

2.3 Run pgloader, passing it the command file created in the previous step:

pgloader <command_file>
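If the load succeeds, pgloader prints a summary of the rows copied per table. As a quick sanity check you can also list the migrated tables from psql (connection details as above):

psql -U <uname> -d <db_name> -c '\dt'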

2.4 Add the db URL to the recorder section of the hass configuration.yaml:

recorder:
  db_url: postgresql://user:pass@<pg_ip>/hass_db

2.5 Start hass and enjoy :wink:

Note: I had some issues with hass complaining about NULL constraints within the table ‘recorder_runs’. I basically just backed the table up by renaming it and let hass create a new one:

alter table recorder_runs rename to recorder_runs_backup;
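Once hass has started cleanly and recreated recorder_runs, the backup table can be dropped if you no longer need it:

drop table recorder_runs_backup;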

Here are some screenshots of what I have been able to do with hass so far, including a long list of things to work on. The entrance lighting is a little redundant now, as I have tied the lights into GPS using ‘device_tracker’. I have 3 different views: one for phones, one for the on/in-wall SIBO Q8919 10" tablets, and one for desktop/laptop PCs:

This last picture, which shows the ‘Panel View’, is monitoring my 2-meter-high ABB DIN panel containing my electrical, distribution and automation systems:


I too converted over to postgresql for my HA database. I have been very happy with its reliability and performance. I currently have over 120 million state rows in one instance, with zero database problems.
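If you want to check your own numbers, a simple count on the recorder’s states table (the standard HA schema) gives the state row total:

select count(*) from states;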

You might find one of these tools useful for querying and editing your HA database in postgresql. I use the first in its free mode; it has been solid.

TablePlus
https://tableplus.com/

Beekeeper

Thanks for your guidance, very interesting.
I also use postgresql, or to be more precise TimescaleDB, but I have created two separate databases: one for the recorder, which contains only the last 3 days, so the logbook and other functions within HA are more performant imho.
For the long-term storage of states I use the LTSS custom component to analyze the data with Grafana or Python afterwards. Custom component: Long Time State Storage (LTSS) utilizing TimescaleDB
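For reference, keeping the recorder database down to a few days is done with the recorder’s purge_keep_days option; a sketch of the relevant configuration.yaml section (db_url as in the guide above):

recorder:
  db_url: postgresql://user:pass@<pg_ip>/hass_db
  purge_keep_days: 3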

The good thing with Postgres is that it is a rock-solid DB solution. MySQL is also a good option, but given the choice between the two, PgSQL is my preference.

Yes, I agree there are many good tools to make life easier when querying and editing databases. Though it is important to also learn the command-line tools available, as they will teach you the most.

This is also a wise option as sometimes you don’t want to let your db grow too large.

Postgres does have an autovacuum option, which in my case has been very useful for the many years I have been running pgsql.
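A quick sketch of checking it and, if needed, tuning it for a large table (the scale factor below is an illustrative value, not a recommendation):

-- check whether the autovacuum daemon is enabled
show autovacuum;
-- vacuum the large states table more aggressively (illustrative value)
alter table states set (autovacuum_vacuum_scale_factor = 0.05);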
Another option is to run it on a ZFS file system with compression enabled. FreeNAS, which is FreeBSD-based, should allow for this, but for my needs I run both MySQL and PostgreSQL in a FreeBSD jail. (Though ZFS has now also been ported to Linux, and of course the original still exists in the form of Solaris.)
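Creating such a compressed dataset for the database is a one-liner; the pool and dataset names here are just examples, and the 8k recordsize matches Postgres’s page size:

# create a compressed ZFS dataset for the database files
zfs create -o compression=lz4 -o recordsize=8k tank/pgdata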
The great thing about FreeBSD jails is that they act like an advanced chroot environment, so you can assign a separate IP address to the specific jail. They aren’t as advanced as Solaris Zones, but they are still really great for many things which can run on the same server if you have enough power and memory. Not to scare anyone here into thinking you need to run a 36-core Xeon with 512GB RAM :stuck_out_tongue: , an Atom or Celeron system with around 8GB RAM is a pretty powerful beast! <- just make sure not to run any X11 stuff…
Though you can do it with way less memory, e.g. 2GB :wink:

Very interesting. Thanks for the insights into your infrastructure. This shows once again that many roads can lead to the goal. :wink:

My database runs as a docker container on a macvlan network and therefore also has its own network address. I use Openmediavault as the host system, which is based on Debian. Handling it is a bit more comfortable for me than BSD jails. As the filesystem I use BTRFS, also with compression enabled; its hunger for RAM is a bit less than ZFS’s.
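For anyone curious, the macvlan setup is roughly like this; the subnet, parent interface, addresses and container name are examples for my network, adjust to yours:

# create a macvlan network bound to the host NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan
# run postgres with its own IP on that network
docker run -d --name hass-db --network lan --ip 192.168.1.50 \
  -e POSTGRES_PASSWORD=<pass> postgres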

I am in dire need of that, as I only have a crummy home server, which consists of a resuscitated old desktop PC and has only 8 GB RAM in total :smiley:

That’s really the beauty of having a choice and having many different projects/products/software to achieve a common goal.
Of course everyone’s style and requirements are different so it is wrong to dictate only a single method or route to achieve the result.

Really it is a big hats-off to the devs for making HASS integrate with so many different backends, even Oracle DB, which is usually only run in enterprises.

Though I must admit that old home desktops do make for excellent servers, until your system and needs outgrow them and you need something a little larger :laughing:
