Another 2 hours of updates tonight

I wanted a solution that would let me connect all my USB sticks: Bluetooth, Zigbee, Z-Wave and RFX433. Would all of these sticks work with ser2net? And if not, why not…?
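
For what it's worth, ser2net 4.x can expose a serial stick over TCP with a short YAML config. A minimal sketch, assuming the stick shows up as `/dev/ttyUSB0` and you pick port 3333 (device path, port and baud rate are examples, not your actual setup):

```yaml
# /etc/ser2net.yaml (ser2net 4.x syntax)
connection: &zigbee
  accepter: tcp,3333
  connector: serialdev,/dev/ttyUSB0,115200n81,local
  options:
    kickolduser: true   # let a reconnecting client take over the port
```

On the Home Assistant side, integrations that accept a serial path can often be pointed at `socket://<server-ip>:3333` instead, but whether that works depends on the individual integration, which is probably why not all four sticks behave the same.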

Right, but you aren’t running just Home Assistant. You have 14 other containers running as well with 2 of those being database engines. You have to keep that in mind. Plus, I’m sure you have other things running on the NAS as well, so that’s also taking computing power away from other processes.

Personally, I would move InfluxDB and MariaDB to their own machine, even if it's just an old computer or an rPi4 (4GB+ version) with an SSD (SD cards are horrible for running database servers). I would also move the MQTT broker (Mosquitto) off and put it with the databases.
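
If you do split them off, a Compose file on the second machine keeps things easy to maintain. A rough sketch, with placeholder passwords, paths and image tags (adjust everything to your environment):

```yaml
# docker-compose.yml for the "data" box: databases + MQTT broker
version: "3.8"
services:
  mariadb:
    image: mariadb:10.6
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder
      MYSQL_DATABASE: homeassistant
      MYSQL_USER: hass
      MYSQL_PASSWORD: changeme        # placeholder
    volumes:
      - ./mariadb:/var/lib/mysql
    ports:
      - "3306:3306"

  influxdb:
    image: influxdb:1.8
    restart: unless-stopped
    volumes:
      - ./influxdb:/var/lib/influxdb
    ports:
      - "8086:8086"

  mosquitto:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    ports:
      - "1883:1883"
```

Home Assistant then connects over the network (recorder database URL, InfluxDB host, MQTT broker address) instead of to local containers on the NAS.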

Another thing to do would be to increase the RAM and CPU allocated to the VM. That will help at least a little bit.

I’ve run HA on tons of different devices. Currently, I have it running on a 16GB RAM/i7 CPU (8 core) Dell laptop. But, I have backup instances running on rPi4s (4GB versions) and also an old Dell Latitude laptop. However, I don’t run Supervised. I run Core only in a Docker container.

This could also present you with issues unless you have them plugged into USB extension cables. Zigbee and Bluetooth in particular both operate in the same 2.4 GHz wireless range and will interfere with each other. If you can put each of them on 3-6 foot extension cables and keep them away from each other, you’ll have a much better experience.

I can only confirm that Zigbee works over ser2net :wink:

It’s funny, but when I started with HA some 4 years ago I used a Pi 3B. I later moved to a Pi 4 and then a NUC, but each time I repurposed the old Pi as a test instance (I think I have about 15 Pis, but only those two have run HA). The point is, they still do. The 3B has run HA continuously for over 4 years. So don’t log EVERYTHING: don’t hammer your database, minimise history to what’s needed, and keep “add-ons” to what’s “functionally required”.
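
The “don’t log everything” advice maps directly onto the recorder settings in `configuration.yaml`. A sketch (the excluded domains and entities are just examples of typically chatty ones):

```yaml
recorder:
  purge_keep_days: 7      # keep a week of history instead of the default
  commit_interval: 30     # batch writes to go easier on SD cards and SSDs
  exclude:
    domains:
      - automation
      - media_player
    entity_globs:
      - sensor.weather_*  # example of a chatty sensor family
```

Trimming the recorder like this shrinks the database, which in turn speeds up startup and reduces wear on the storage.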

Oh yeah, I’ve still got some rPi2s running around my network (mostly as backup DNS and DHCP servers) that have been in place for at least 6 or 7 years now. But, I’ve noticed the 3B is WAY more reliable than the 4s have been. Out of the last 8 rPi4s I’ve bought, 2 of them died within the first year. The other 6 are still going, but 2 of them have dead HDMI ports on them.

Well understood!
I shut down all the containers not essential to my immediate needs, changed my config.yaml to take that into account, and just initiated a reboot.

To answer the questions: indeed, the NAS also handles video recording for 4 IP cameras, and that consumes resources!

I’ll keep in mind your recommendations about distributing roles across several machines. I wanted to avoid that in order to have a centralized system that is easy to maintain and has backup power. I’m starting to wonder whether Home Assistant is really within my reach…

That’s a bugger, as almost all of my Pis run headless, but during setup (with some OSes) you NEED a monitor.

8 minutes for a reboot of HA!

I’m not surprised with that boot time at all. You could always have your “centralized” system, but you need to make sure that you have the resources to support everything you need to run. Personally, I’m not a fan of running anything but storage type applications on my NAS and everything else goes on other machines.

One thing that has worked REALLY well for me is an old Mac Mini (Late 2012) with 16GB RAM and a decent SSD in it. They work as great servers and you can pick them up used on eBay for around $200 US. They have enough processing power to run everything that you need to plus room to grow. That’s the route I would go.

I have a Dell Latitude E6530 - 16 GB RAM with 512 GB SSD, I think I will turn it back on and it will resume service …! At least to test. Thank you :slight_smile:

I’d invest in shares of the Raspberry Pi Foundation if I could… I just counted and, it seems, I lost track a little bit… Right now there are seven Pis running here at home :astonished: :flushed:

Do yourself a favor and separate things, at least a little. In your case, a single failure takes down all of your systems. :frowning: Just think about maintenance; that would be a nightmare for me. :smiley:

My HA system is running on an older NUC i3 with 16 GB RAM.

I have three HA instances running (two Container and one Supervised along with all of its related containers) and around 20 other containers, including MariaDB, InfluxDB and Grafana.

I have two MQTT brokers running (one in a container and one on the host)

And I have a Kodi media server running continuously on the same machine.

My CPU right now is at 19% and my memory use is at 32%.

My restart time is around 30 seconds and my reboot time is about 1.5 - 2 minutes.

Have you loaded the system monitor to see what kind of performance and resource usage you are seeing with your system?
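
For reference, on older HA versions the System Monitor integration could be set up in YAML to expose exactly those numbers (newer releases configure it from the UI instead). A sketch:

```yaml
sensor:
  - platform: systemmonitor
    resources:
      - type: processor_use
      - type: memory_use_percent
      - type: disk_use_percent
        arg: /
```

That gives you CPU, RAM and disk usage as entities you can graph, so you can see whether the VM is actually starved before throwing hardware at it.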

Here I’m running on this:

Shuttle XS36V4 with 4 GB RAM

These machines are built to an industrial standard for 24/7 operation. Very reliable.

I have no issues with 20 integrations, almost 100 devices and 60 automations, and I’m still building new ones.

Restart time is under 1 minute, reboot 3-5 minutes.

Interesting observation. Any more details on what your Pi4s died of? I was thinking of upgrading HA onto a Pi4 myself at some point.

I’ve been running HA on a Pi3 for over a year and a half now, Domoticz before that on the same Pi, with an SD card. Never had any issues. I’m running a bare metal venv installation of HA, no Docker or Supervisor. The same Pi is running Mosquitto, a VNC server, RFXCOM bridge daemon and some other stuff. Database is on a NAS though. CPU is at around 2-3% usually, unless I stream a camera. HA startup time (excluding Zwave network) is a little over a minute, but I don’t restart very often.

I guess the performance issue people usually have on low spec hardware is mostly due to the overhead of several virtualization layers. And if they run a ton of addons, each with their own Docker container and associated overhead, that all adds up. As far as I see it, if you run on an embedded SBC, run bare metal and remove all unnecessary overhead.

Honestly? No. I think I just get really unlucky when it comes to buying batches of rPi4s.

THIS. People tend to forget that ‘add-ons’ are really just docker containers and each container requires resources above and beyond what running the add-on natively needs due to docker basically being a virtualization platform. It’s the same when people run VMs and forget that the host not only has to provide resources for the VM, but also the hypervisor that has to manage said host(s).

Continuation of the adventure: I rebuilt a brand-new HA on the Latitude E6530 / SSD / 16 GB of RAM that I had in stock.
It’s night and day! Nothing like the Synology in terms of runtime.
I didn’t make the mistake of putting all the containers back on this new server, but I was still happy to see the difference. It is incomparable. A restart takes less than 2 minutes, or even less than 1 minute!
I will therefore return my NAS to its primary function: storage!
Thank you for all the good advice that allowed me to take a step forward with Home Assistant.

HA used to need very little power and memory and worked great on an RPi 3. But recently I found that even an RPi 4 won’t sustain HA with my growing number of devices and needs. microSD is definitely not recommended, since you won’t be able to keep it running for long. Even an RPi 4 with a SATA M.2 SSD is not very good when you have so many things connected.

NAS + VM is not great either; I tested it on my NAS 918+ with 16 GB of memory and it wasn’t good. So recently I added a new ASRock 300X running a Ryzen 4570G, and that is something else altogether. hahahahaha (I know it’s overkill)

Today I split my setup across 3 servers: the Synology running several systems like Plex and MQTT, and my Ryzen running all the containers needed for HA. My old RPi 4 runs a simple HA install plus all the required BT sensors / room assistant, and also acts as my proxy server.

However, if you’d like to run an RPi 4, then go with an M.2 SSD instead of microSD, and ditch the RPi 3 (it won’t last) hahahahahahaha

OTOH, running Hassio in an actual virtual machine, with limited RAM, and running a fair amount of addons inside Docker inside that virtual machine, as OP was doing, doesn’t sound like a very performant setup in any case.

I’m running Hass Core and about 13 other Docker containers (including 3 databases) on a relatively slow NAS and it’s not even breaking out a sweat.

That’s the way I am doing it too, and also the preferred way IMHO. If your HA VM gets borked, your other services won’t suffer downtime while you restore it.