MariaDB addon or external server

Hi

I’m currently running Home Assistant as a VM in Proxmox. I have little to no issues with the current SQLite database, but the history tab is starting to take some time to open, and I’ll be adding more stuff later on, so I want to make my setup future-proof. I think it would be better to store the data in MariaDB, but I’m not sure whether to use the addon or a separate instance (in Proxmox or on my NAS).

What happens if I update Home Assistant and then have to revert to a backup because of a serious bug? With an external database my data would be kept, but it might not be compatible with the older version of Home Assistant, so the database and HA backups should be kept in sync somehow.

If I stay with SQLite or use the addon, I guess any data recorded since the last backup will be lost if I need to revert.

What would be the smartest way to set this up? How are you guys managing data and backups?

Thanks for your advice!

I use an external server. If I used the addon, the database would be included in full backups, and since my MariaDB database is pretty massive as it is (around 4-5 GB), I’d rather not have that inflating the size of the backups. In the event of a catastrophic failure, I’m more than happy to restore Home Assistant from the Google Drive backup add-on, and if I lose my historical data, it’s not the end of the world.

There are ups and downs to both. I opted for having MariaDB on the same system as HA so I didn’t have to rely on two systems to run my home automation. My backups are larger, but disk space is cheap, so I’m OK with that (I have a 256 GB SSD for HA, and all backups are pushed via Samba to another system).

I strongly considered a second system for the DB but then wrestled with “what if the DB server is down?” At least if HA is down, so is the DB so there won’t be a deluge of problems.

In the end it’s probably 50/50; I can’t find any more faults with one system than with two.

I don’t use add-ons so that point is moot. But I do use MariaDB.

I run both HA and MariaDB in Docker on the same machine.

Kind of the best of both worlds.

I’m also running HA and MariaDB in Docker on one machine. The only issues I had were on restart of the machine: if the MariaDB container doesn’t start quickly enough, HA will fail, so I recommend adding a dependency (or a delay). My DB is just 2.5 GB, and tracking memory/CPU usage, it hardly ever goes beyond 5%.
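A minimal compose sketch of that dependency, assuming the official mariadb image and default bridge networking; image tags, passwords and volume paths are placeholders:

```yaml
services:
  mariadb:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: change-me
      MARIADB_DATABASE: homeassistant
      MARIADB_USER: homeassistant
      MARIADB_PASSWORD: change-me
    volumes:
      - ./mariadb:/var/lib/mysql
    healthcheck:
      # healthcheck.sh ships with recent official mariadb images;
      # a plain "mariadb-admin ping" is a possible fallback test
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 6
    restart: unless-stopped

  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config
    depends_on:
      mariadb:
        # wait for the healthcheck to pass, not just for the container to start
        condition: service_healthy
    restart: unless-stopped
```

The `condition: service_healthy` gate is what stops HA from racing ahead of the database after a reboot.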

Thanks for your insights. It’s still difficult to choose. If the database structure is stable, I think I’d go with an external server. Otherwise, the addon is probably the easier solution. I guess the addon will be running before HA is started?
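If I do go the external route, from the recorder docs it looks like it’s just a db_url line in configuration.yaml; a minimal sketch, with the host, credentials and database name as placeholders:

```yaml
# configuration.yaml: recorder pointed at an external MariaDB instance
recorder:
  # user, password, host and database name below are placeholders
  db_url: mysql://homeassistant:CHANGE_ME@192.168.1.20:3306/homeassistant?charset=utf8mb4
  purge_keep_days: 14  # optional: keeps history bounded so the DB doesn't grow forever
```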

Maybe I’m late to join the party, but I have some thoughts on this. I am running HA in Docker now, with all sorts of other Docker containers talking to it and vice versa. Fine if you want to be a system administrator at home, but I’m slowly getting fed up with it (backup is a nightmare to set up).
Now I’m looking at Proxmox with HAOS as a replacement, and a separate DB would be problematic if you want to test something in a temporary VM. Two instances talking to the same database is a no-no. With that in mind I am going to opt for the MariaDB addon, which keeps the instances fully separate.
For the same reason I would like to keep Zigbee2MQTT and Zwave2MQTT as non-addon instances.
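For completeness, the add-on side is plain YAML too. This is roughly what the MariaDB add-on options look like (user, password and database names are placeholders; check the add-on docs for the exact schema), with the recorder then pointed at the core-mariadb hostname instead of an IP:

```yaml
# MariaDB add-on configuration (Settings -> Add-ons -> MariaDB -> Configuration)
databases:
  - homeassistant
logins:
  - username: homeassistant
    password: CHANGE_ME
rights:
  - username: homeassistant
    database: homeassistant
```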

It shouldn’t be; I’m using Duplicati, which does incremental backups. I’m not sure how much HAOS would reduce the sysadmin work… a bit, I guess, as you have everything under one page, but add-ons are also containers.

Let me add some reasonable insight after using HA OS for 2 years.

I have been running everything as an add-on (MariaDB, zigbee2mqtt, node-red) and told myself all is OK because I have snapshots (stored remotely on GDrive).

That goes fine until it doesn’t.

Within the first year I had already come to the point of having to restore from a snapshot after doing updates.

During disaster recovery you quickly notice just how minimal and limited HAOS is as soon as you drop to the shell (which you will have to enable first).

Since I also use HACS, I stopped doing updates, because I couldn’t afford to spend any time on that again in the following months.

That was last December (2021).

Then just recently, out of the blue, my Supervisor somehow got upgraded to the latest version.

I don’t know if it’s related to that (or if it only happened after my emergency repair attempts), but my frontend became unusable (nothing in the logs) and my DB quickly grew to twice its size within a month.

All fine, I have plenty of snapshots, right?

Well … no.

Even with a clean install on a VM and a restore from those same snapshots, everything behaves the same: the frontend becomes unresponsive and unusable.

Now I could spend a very long time digging through the logs to figure out what the heck is happening and, if I’m lucky, find a way to fix it and get back to a running system.

Conclusion:

The pain is much smaller if you host the important bits and pieces like MariaDB, zigbee2mqtt, mosquitto and node-red in separate VMs/LXC containers/bare metal, where you can just drop into an emergency shell to do maintenance properly.

Yes, you will need a separate backup/snapshotting strategy, but this also lets you purge the HA VM more easily or restore snapshots without losing any data.

TL;DR: I would argue strongly against add-ons unless you enjoy having to set everything up again when things go south (and they will).

It’s likely that your Supervisor was updating itself in the background and you never noticed, since the Supervisor has always auto-updated. Only recently did that become optional, and you have to disable the auto-update yourself, as it’s enabled by default. It’s just that something went wrong this time and it caught your attention.

But in general I agree with you. That’s why I run the HA Container install type.

Same here now, and the experience couldn’t be more different on a RasPi 4 8 GB…

With a brand new, clean QEMU VM or bare-metal HAOS install, there’s always 25% load (on 4 cores), and if you leave it running for a few minutes, 2.5 GB of RAM consumed.

That is with literally nothing installed: no add-ons, no Node-RED, no zigbee2mqtt, no snapshot restored, nothing.

HA Core on Debian 11 in Docker has 0.02% load on average with 562 MB of RAM used, and that’s with HACS, a ton of customizations, MQTT entities and a custom theme already in place, and it having run for a few days.

I migrated zigbee2mqtt, mosquitto and node-red to their own LXC containers; they consume barely any load at all, and I can now debug them properly since they all use a Debian base.

Everything is much much smoother now, no matter if there’s no load or a shit ton of it.

Snapshots can be done through BTRFS and backups via NFS.

ZFS/ZoL was too unstable for me (as always; I had the same experience on x86 Ubuntu as soon as there was a little bit of load).

So Core is entirely fine, but Supervisor/HAOS seems to be broken for what it does (basically a Portainer-based install on steroids, which you can also replicate by installing Portainer yourself).

For a Raspberry Pi or a low-power x86 install…

I’d recommend that anyone install PiMox/Proxmox instead and go for an LXC-based install.

So much more efficient and a whole different experience.

It no longer feels sluggish, and I easily restored every bit of node-red and zigbee2mqtt integration functionality too.

You can use all of tteck’s LXC scripts on ARM too if you modify the create_lxc.sh script to use a hardcoded aarch64 Debian 11 cloud image instead (that’s what I did).

A MariaDB LXC is working very well when MariaDB and Home Assistant are on the same network.

But I’m trying to put the MariaDB LXC on another network (a different home) and connect through a cloudflared tunnel, with my hostname set to tcp://mariadb_ipaddress:3306.
It cannot connect with Home Assistant. Why?

Please advise on this case, thank you.
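My understanding so far is that Cloudflare Tunnel only carries raw TCP like MySQL if the tunnel’s ingress explicitly declares a tcp service, and the Home Assistant side then runs `cloudflared access tcp` as a local proxy that the recorder connects to, rather than pointing the recorder at a tcp:// hostname directly. A sketch of what the server-side config might look like, with the hostname, tunnel ID and MariaDB address as placeholders:

```yaml
# /etc/cloudflared/config.yml on the MariaDB side (all values are placeholders)
tunnel: YOUR_TUNNEL_ID
credentials-file: /etc/cloudflared/YOUR_TUNNEL_ID.json
ingress:
  - hostname: mariadb.example.com      # public hostname routed through the tunnel
    service: tcp://192.168.1.20:3306   # local MariaDB address and port
  - service: http_status:404           # catch-all rule cloudflared requires
```

On the Home Assistant host, something like `cloudflared access tcp --hostname mariadb.example.com --url 127.0.0.1:3306` would then expose the database locally, and the recorder’s db_url would point at 127.0.0.1:3306.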