Sure, but is it running on the same (relatively) low-performance machine as HA? Or does it run on dedicated servers/clusters, or even a reasonable NAS with 10Gb (or faster) connectivity and fast all-flash storage? That is what I mean.
To some extent, this is a project to improve my networking and automation skillset. We are planning on moving to farmland soon, and I’m trying to figure out how we will automate our greenhouses before we finalize our greenhouse drawings. I’ve found that planning ahead is much simpler than retrofitting.
I’d love to set up a cluster of the old computers and run a dedicated server. We have a dedicated server (remotely hosted) for our e-commerce site, and I’d love to understand more about how to create clusters, distribute loads, etc.
I’ve been avoiding learning docker. I want to learn it, but it’s been low on the list while I learn the other technologies. It sounds like it’s time to start running through some tutorials.
I have yet to discover a good excuse for Docker, virtual machines, Proxmox, or other multi-purpose tools.
I suspect that most use these tools because they are too cheap to dedicate one computer to Home Assistant. OK, maybe that’s harsh. They only run a few automations and have a few devices, so they want to use the same PC for other applications.
You are proposing an industrial-strength installation. You won’t be playing Minecraft or browsing PornHub. It is a serious tool. You don’t need clusters since Home Assistant distributes the operating code across the devices. My advice is to install the Home Assistant x86 binary on an Intel NUC. An i5 or i7 should be sufficient. Buy two for a backup. Make daily offline backups. (I recently had a NUC go crazy. Rather than fix it, I simply installed the latest Home Assistant binary on it and performed a system restore from last night’s backup. My total downtime was about one hour. The problem NUC, by the way, had a bad RAM module. Fixed that, ran a stress test on it, and it’s ready for the next time I need a replacement.)
As far as planning, run a 2-inch PVC electrical conduit between all buildings. If you need Ethernet or Fiber between them for any reason, it’s a simple pull. I would run low-voltage wiring, 14 to 16 gauge, everywhere that you might want sensors or other devices in the future.
I think this is a perfect use for proxmox. Clustering is great for high availability.
That’s an interesting opinion to say the least.
@stevemann Float switches are super cheap and pretty catastrophic if they fail. We will definitely have multiple of them and will set up the necessary logic.
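For anyone curious what that looks like on the device side, here is a minimal ESPHome sketch, assuming the floats are wired straight to GPIOs on an ESP board; the pin numbers, names and debounce times below are made up for illustration:

```yaml
# Minimal sketch (hypothetical pins/names): two redundant float switches
# exposed as separate binary sensors so Home Assistant can compare them.
binary_sensor:
  - platform: gpio
    name: "Tank Float Switch A"
    pin:
      number: GPIO16
      mode: INPUT_PULLUP
      inverted: true
    device_class: moisture
    filters:
      - delayed_on_off: 500ms   # debounce splashing
  - platform: gpio
    name: "Tank Float Switch B"
    pin:
      number: GPIO17
      mode: INPUT_PULLUP
      inverted: true
    device_class: moisture
    filters:
      - delayed_on_off: 500ms
```

On the Home Assistant side you can then alert whenever the two sensors disagree for more than a few minutes, which usually means one of them has stuck.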
Thank you for the ESPHome setup details on the YAML part and MAC addresses.
I’m putting an order in for 20 of the Ethernet devices; pretty cheap at around 3.50 each.
Why run Docker or any other virtualization package just to run Home Assistant when there is a perfectly good x86 binary?
Because the maintenance is much, much easier. If your machine has enough power, Docker is way better than a bare metal install.
That might not fit all projects available on Docker, but for many it is valid.
Maintenance, availability, ease of use, backup, using your hardware for more than one piece of software, the list goes on. It has nothing to do with being “cheap”, that’s just silly.
My point exactly.
Buy a second PC for all the other software you want to run. I see no advantage in adding another layer of complexity with Docker, etc., that makes maintenance or backup any easier.
I recently had to replace my Home Assistant NUC; it took less than an hour of downtime. Backups couldn’t be easier, as I use Samba Backup to make a full backup at 3AM every day.
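Samba Backup does the scheduling itself, but if you wanted to roll your own schedule it would look roughly like this (the 3AM trigger and backup name are just examples, and the service shown is the generic Supervisor one):

```yaml
# Rough sketch: trigger a nightly full backup from an automation.
# The Samba Backup add-on can schedule this on its own instead.
automation:
  - alias: "Nightly full backup"
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: hassio.backup_full
        data:
          name: "Nightly {{ now().strftime('%Y-%m-%d') }}"
```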
It would be a tough sell to convince me that I need Docker or Proxmox.
WTF does that have to do with clustering to achieve the level of performance needed for this setup?
Agreed- I am going down the wrong rabbit hole and not addressing the OP’s main question.
But does 15K entities exceed the capabilities of “bare metal” Home Assistant on capable PC hardware? My modest system has over 1500 entities and I am not even using half of my available 8 GB of memory. Since I am adding devices and entities almost daily, what indications would I see that my NUC is “hitting the wall”?
It’s a bit funny that you list file-level backup, hardware replacement and «less than an hour» as positive points. In a virtualized setup you would have a few seconds of downtime, at most.
Not everyone is using NUCs. You have people running it off all kinds of hardware, like proper servers, NAS and so on. To dismiss one of the biggest gains in the industry the last 10-15 years as «being cheap» is plain ignorant.
A NUC with 15k entities would probably be totally fine. If you have no issue being offline in the event of a hardware failure, it will be just fine. If, on the other hand, you want high availability and a low RTO, you should design your solution to be hardware-independent (virtualized).
Anyway, back on topic.
Even if HA is capable (it probably is, on the right setup), I wonder if it’s not better to go a different way entirely.
Example:
- Log items in influxDB
- graph items in grafana
- automate things with rundeck / airflow /other CI/CD tool
I am sure you can wrestle HA into this setup to make API calls, automations, etc., but this way you would move much of the workload to applications that are built for a company and are scalable by default.
HA is built for homes, not for enterprises, so you should keep the dependence on HA low and make it super easy to replace if it breaks (and keep your company running without HA).
The reason for this is to minimize the failover scenarios: no need for a Proxmox cluster, a few simple Docker hosts will do, and the software will fail over for you (at a minimum a swarm would be recommended, with a proxy like NGINX or similar in front).
Anyway, just my extra 2 cents.
(TL;DR: If you have a company, treat it as a company and use the right software; you will be happier in the future.)
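For what it’s worth, pointing HA at InfluxDB is a small config block; a rough sketch, assuming the InfluxDB v2 integration (the host, org, bucket and domain filter below are just placeholders):

```yaml
# Rough sketch (hostnames, org and bucket are placeholders): push entity
# history from Home Assistant into InfluxDB v2 for Grafana to graph.
influxdb:
  api_version: 2
  host: influxdb.local
  port: 8086
  token: !secret influxdb_token
  organization: farm
  bucket: homeassistant
  include:
    domains:
      - sensor
      - binary_sensor
```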
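As a sketch of the kind of stack I mean (image tags, ports and volume names are just examples, adjust for your environment):

```yaml
# Rough Compose/Swarm sketch: InfluxDB for logging, Grafana for graphing.
version: "3.8"
services:
  influxdb:
    image: influxdb:2.7
    ports:
      - "8086:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb2
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - influxdb
volumes:
  influxdb-data:
  grafana-data:
```

Bring it up with Docker Compose on a single host, or as a swarm stack with NGINX or similar in front once you want the failover.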
Whether I had a virtualized installation or bare metal wouldn’t have made any difference over a hardware failure.
If you set it up correctly, it would. Instead of your suggestion of running one NUC with HA and one with «the rest», you could «cheap out» and run, for example, a hypervisor on both. In your case, when your NUC failed, you would just move your workloads to the other one and be back up and running in no time. Even in scenarios where you run only one hypervisor it would be a lot easier to do restores, since you can back up entire workloads, not just the config.
I think what you’ll find is that for most “I just want a smart home” scenarios, all that fancy multi-container/VM stuff is overkill. But did you know that Home Assistant OS uses Docker internally?
These high level tools are not just about sharing physical hardware. They are primarily about reproducible environments that can be abstracted.
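As an illustration of what reproducible means here, a minimal Home Assistant Container sketch (paths and the image tag are just examples): the whole deployment is a few lines you can re-create on any box, and only the config folder needs a backup.

```yaml
# Sketch of a reproducible Home Assistant Container deployment: the config
# lives in one bind-mounted folder, everything else is re-creatable.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config
      - /etc/localtime:/etc/localtime:ro
    network_mode: host
    restart: unless-stopped
    privileged: true   # only needed if you pass through USB radios etc.
```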
Yes, but I don’t have to manage it, see it or know anything about Docker.