Raspberry Pi 3B+ with Hassio
The NAS hosts the main HA installation, and the Pi carries only the Bluetooth and Zigbee antennas, which relay their information to the main instance via MQTT. I would have preferred to share them with ser2net, but it seems complicated to implement…
Tonight, I decided to update the Supervisor and the Core. Two hours later, the VM is finally up to date and the Raspberry Pi has frozen. This is the second time this week that the Pi has crashed (with two different, good-quality SD cards).
HA is certainly great, but now I am totally disheartened. I spend more time stabilizing it (and reinstalling it) than actually using and configuring it.
Am I doing it wrong? Is my config weird?
What do you recommend? Would adding RAM or vCPUs improve performance?
List of installed Add-ons: AdGuard, File Editor, Grafana, InfluxDB, Log Viewer, MariaDB, Mosquitto broker, Nginx SSL Proxy, Node-Red, Portainer, SSH & Web Terminal, Samba Share, Zwave JS to MQTT, PhpMyAdmin
First, with the number of containers you have running, you’re greatly overtaxing your NAS (unless it’s a high-end NAS with a LOT of RAM and a good CPU). You have 14 add-ons (not counting HA itself, which is a minimum of 2 containers). So that’s 16 containers running on a virtual system inside a low-end NAS device that isn’t meant for computing power. InfluxDB and MariaDB are both heavy CPU/RAM consumers and will tear through a low-to-medium-end NAS easily, and I’m willing to bet that’s exactly what’s happening. The NAS is trying to run all those containers in a single 2 vCPU / 2 GB shared-RAM VM. It’s no wonder it takes 2+ hours to update and nothing is stable.
As for the Pi, they are single board computers and they do die. I’ve got a small graveyard of them.
Thank you for this feedback. It’s interesting.
MariaDB stores the recorder history, and InfluxDB stores the long-term data I use in Grafana… that’s why I have both. Maybe I could do without InfluxDB and put everything in MariaDB? On the forum, I was advised to switch to MariaDB to avoid having too large a HomeAssistant.db file.
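For what it’s worth, pointing the recorder at the MariaDB add-on and trimming how much history it keeps is done in `configuration.yaml`. A minimal sketch, assuming the standard add-on hostname `core-mariadb`; the user name and password are placeholders:

```yaml
# configuration.yaml — hypothetical recorder setup for the MariaDB add-on
recorder:
  # core-mariadb is the add-on's internal hostname; credentials are placeholders
  db_url: mysql://homeassistant:CHANGE_ME@core-mariadb/homeassistant?charset=utf8mb4
  purge_keep_days: 7   # keep one week of history instead of the default 10 days
```

With a shorter `purge_keep_days`, the database stays small regardless of which engine is behind it.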
The NAS is a Synology DS718 with 6 GB of RAM. It’s certainly not a Ferrari, but we’re just talking about Home Assistant. I’m not asking it to run a video game!! The system is a bit greedy anyway!!
What do you run your Home Assistant on?
NB: I do not plan to buy another Raspberry Pi, given how unreliable the one I have has been (and it’s new!).
I wanted a solution that would let me connect all my USB dongles: Bluetooth, Zigbee, Zwave and RFX433. Do you think all of these dongles would work with ser2net? If so, why not…?
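One caveat worth noting: ser2net shares serial devices over TCP, so it should work for serial-over-USB sticks (Zigbee, Z-Wave, RFX433), but a USB Bluetooth adapter shows up as an HCI device rather than a serial port, so it can’t be shared this way. A minimal sketch of a ser2net 4.x config on the Pi; the device path and TCP port are assumptions for your setup:

```yaml
# /etc/ser2net.yaml — hypothetical example exposing a Zigbee stick over TCP
connection: &zigbee
  accepter: tcp,3333
  connector: serialdev,/dev/ttyUSB0,115200n81,local
```

On the HA side, an integration that supports serial-over-TCP would then point at `socket://<pi-address>:3333` instead of a local device path.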
Right, but you aren’t running just Home Assistant. You have 14 other containers running as well with 2 of those being database engines. You have to keep that in mind. Plus, I’m sure you have other things running on the NAS as well, so that’s also taking computing power away from other processes.
Personally, I would move InfluxDB and MariaDB to their own machine, even an old computer or an rPi4 (4GB+ version) with an SSD (SD cards are horrible to run database servers on). I would also move the MQTT broker (Mosquitto) off and put it with the databases.
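That split could look something like this docker-compose sketch on the separate machine; the volume paths and passwords are placeholders, and the image tags are just current stable ones:

```yaml
# docker-compose.yml — hypothetical database/broker host; names and passwords are placeholders
version: "3"
services:
  mariadb:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: CHANGE_ME
      MYSQL_DATABASE: homeassistant
    volumes:
      - ./mariadb:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: unless-stopped
  influxdb:
    image: influxdb:1.8
    volumes:
      - ./influxdb:/var/lib/influxdb
    ports:
      - "8086:8086"
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto:2
    volumes:
      - ./mosquitto:/mosquitto
    ports:
      - "1883:1883"
    restart: unless-stopped
```

Home Assistant’s recorder, InfluxDB and MQTT integrations would then point at that machine’s IP instead of the local add-on hostnames.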
Another thing to do would be to increase the RAM and vCPUs allocated to the VM. That will help at least a little bit.
I’ve run HA on tons of different devices. Currently, I have it running on a 16GB RAM/i7 CPU (8 core) Dell laptop. But, I have backup instances running on rPi4s (4GB versions) and also an old Dell Latitude laptop. However, I don’t run Supervised. I run Core only in a Docker container.
This could also present you with issues unless you have them plugged into USB extension cables. Zigbee and Bluetooth in particular both operate in the same 2.4 GHz wireless range and will interfere with each other. If you can put each of them on a 3-6 foot extension cable and keep them away from each other, you’ll have a much better experience.
It’s funny, but when I started with HA some 4 years ago I used a Pi 3B. I later moved to a Pi 4 and then a NUC, but each time I repurposed the old Pi as a test instance (I think I have about 15 Pis, but only these two have run HA). The point is, they still do. The 3B has run HA continuously for over 4 years. So don’t log EVERYTHING. Don’t hammer your database, minimise history to what’s needed, and keep “add-ons” to “functionally required”.
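The “don’t log everything” part is done with the recorder’s include/exclude filters. A hypothetical include-only setup, where the domains and entity name are just examples to adapt:

```yaml
# configuration.yaml — hypothetical recorder filter; domains/entities are examples
recorder:
  purge_keep_days: 7
  include:
    domains:
      - light
      - switch
      - climate
    entities:
      - sensor.living_room_temperature
```

With an include list, everything not listed simply never reaches the database, which keeps both MariaDB and the SD card happy.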
Oh yeah, I’ve still got some rPi2s running around my network (mostly as backup DNS and DHCP servers) that have been in place for at least 6 or 7 years now. But, I’ve noticed the 3B is WAY more reliable than the 4s have been. Out of the last 8 rPi4s I’ve bought, 2 of them died within the first year. The other 6 are still going, but 2 of them have dead HDMI ports on them.
Well understood!
I shut down all containers not essential to my immediate needs, changed my config.yaml to take it into account and just initiated a reboot.
To answer the questions: indeed, the NAS also handles video recording for 4 IP cameras, and that consumes resources!
I’ll keep in mind your recommendations about distributing roles across several machines. I wanted to avoid that, to keep a centralized system that is easy to maintain and on battery backup. I’m starting to wonder whether Home Assistant is really within my reach…
I’m not surprised with that boot time at all. You could always have your “centralized” system, but you need to make sure that you have the resources to support everything you need to run. Personally, I’m not a fan of running anything but storage type applications on my NAS and everything else goes on other machines.
One thing that has worked REALLY well for me is an old Mac Mini (Late 2012) with 16GB RAM and a decent SSD in it. They work as great servers and you can pick them up used on eBay for around $200 US. They have enough processing power to run everything that you need to plus room to grow. That’s the route I would go.
I’d invest in shares of the Raspberry Pi Foundation, if I could… I just counted, and it seems I lost track a little bit… right now there are seven Pis running here at home.
Do yourself a favor and separate things, at least a little. In your case, every failure takes down all of your systems. Just think about maintenance; that would be a nightmare for me.
My HA system is running on an older NUC i3 with 16 GB RAM.
I have three HA instances running (two Container and one Supervised along with all of its related containers) and around 20 other containers, including MariaDB, InfluxDB and Grafana.
I have two MQTT brokers running (one in a container and one on the host).
And I have a Kodi media server running continuously on the same machine.
My CPU right now is at 19% and my memory use is at 32%.
My restart time is around 30 seconds and my reboot time is about 1.5 - 2 minutes.
Have you loaded the system monitor to see what kind of performance and resource usage you are seeing with your system?
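On older installs, that system monitor can be loaded as a YAML sensor platform; a minimal sketch (the exact resources listed are just examples):

```yaml
# configuration.yaml — systemmonitor sensor platform (legacy YAML setup)
sensor:
  - platform: systemmonitor
    resources:
      - type: processor_use
      - type: memory_use_percent
      - type: disk_use_percent
        arg: /
```

That gives you CPU, RAM and disk usage as entities you can watch on a dashboard or graph over time.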
Interesting observation. Any more details on what your Pi4s died of? I was thinking of upgrading HA onto a Pi4 myself at some point.
I’ve been running HA on a Pi3 for over a year and a half now, Domoticz before that on the same Pi, with an SD card. Never had any issues. I’m running a bare metal venv installation of HA, no Docker or Supervisor. The same Pi is running Mosquitto, a VNC server, RFXCOM bridge daemon and some other stuff. Database is on a NAS though. CPU is at around 2-3% usually, unless I stream a camera. HA startup time (excluding Zwave network) is a little over a minute, but I don’t restart very often.
I guess the performance issue people usually have on low spec hardware is mostly due to the overhead of several virtualization layers. And if they run a ton of addons, each with their own Docker container and associated overhead, that all adds up. As far as I see it, if you run on an embedded SBC, run bare metal and remove all unnecessary overhead.