Which way to go - NUC - Home Assistant

I recommend the NUC8i5BEH, you will have room for another drive just to store camera footage

Hello - I am also looking to set up a more powerful option than an RPi4.

Is this still a good option for a HA install on Proxmox? I will also run Node-RED, InfluxDB, Grafana, maybe Pi-hole, and use it as a basic file share.

If NUC8i5BEH (or another option?), what RAM and SSD are recommended?

Thanks in advance for any working tips / ideas.

I use a HP Gen8 Microserver with TrueNAS. I run HA, MQTT, Grafana, TasmoAdmin, Node-RED and a couple of others as plugins. TrueNAS is open source, as are all of the others, so the only cost is hardware. Yes, it is more expensive, but you get the security of commercial-grade software which uses ZFS software RAID spread over a maximum of four disks. In my case, four terabytes of storage.
All that software does need 16GB of RAM and a Xeon CPU, but both are quite cheap now. The HP is server-grade quality hardware.

The 8i5 is still an excellent choice. If you are running multiple DBs and value good DB performance, get a Sabrent Rocket 4 1TB. If you want reliability at the expense of performance, the Samsung 970 Pro 1TB.

For SSD reliability with database use, never fill more than half of the drive's capacity. If you need more space for file storage, use the SATA slot for an additional drive; the Samsung 860 Pro 2TB is the best choice there. For write-once-read-many you can go up to 90% full; for constant writes (like video surveillance), 80%. The 870 EVO 4TB gives you more space at a similar price, but with a lower life-cycle and reliability rating.

If you need more than 4TB for file storage, you should not be using a NUC; you should use something with a RAID controller or a computer that can do mirrored drives.

For RAM I suggest the CT2K8G4SFRA266 (a Crucial 2x8GB DDR4-2666 SO-DIMM kit).

Thank you for the tip. I have ordered an 8i5 and will get the 2x8 RAM.

I’m still getting to grips with recent specs (it has been a few years since my last build). Just looking at the SSD: the NUC8i5's M.2 slot is PCIe Gen 3. The Rocket 4 looks great, but won't the PCIe bus limit the speed? So, would the lower-spec SSD make sense? Or is the Rocket 4.0 just a solid performer regardless? There is a c.40% increase in cost for the 4.0 versions.

Or am I misunderstanding the specs? Any thoughts?

Thank you again

To compare:

1TB Sabrent Rocket NVMe PCIe Gen3 x4 (3,400 MB/s) at £120
1TB Sabrent Rocket NVMe PCIe Gen4 x4 (5,000 MB/s) at £160

Or, 1TB WD_BLACK SN750 Gen3 x 4 at £110

Does the Sabrent PCIe 4.0 have any benefit in the NUC8i5 over lesser drives? I understand PCIe 4.0 is backwards compatible so I know the SSD will work. But, is it worth the extra cost given the NUC is limited to Gen3 speeds?

I ask as I am interested and struggling to find any other advice on this / other forums.

The speed will indeed be limited in maximum sequential performance by the host bus interface. But the Gen 4 has better performance regardless of the host connection; it is worth the cost difference.

When dealing with databases, you will never hit anything close to host bus speed; what you want is low latency and responsiveness, with random write latency as low as possible. The 4.0 has better internal algorithms and better performance converting the SLC cache to final NAND writes. The 20% faster controller-to-NAND interface in the 4.0 also leads to superior garbage collection performance.

For reliability, the 4.0 also has better error detection and correction algorithms, and generally uses higher-binned NAND chips.

The controllers in the 3 and the 4 are actually almost identical, both 28nm dual-core ARM chips, but there are so many little differences in the firmware and all the "better" parts that it adds up… substantially. Here are some real-world performance examples:

WDC SN750 1TB:
Firefox Compile Write Speed: 165.2 MB/s
Drive-to-drive copy: 1224 MB/s

Sabrent Rocket 1TB:
Firefox Compile Write Speed: 125.4 MB/s
Drive-to-drive copy: 872 MB/s

Sabrent Rocket 4 1TB:
Firefox Compile Write Speed: 217.2 MB/s
Drive-to-drive copy: 1410 MB/s
SQLite 8-thread: 42.4s

As you can see there is indeed a substantial difference in performance, and those numbers get nowhere near the host bus speed limit. I do not have an SQLite benchmark for the non-4, but for comparison a Samsung 970 Pro does it in 173 seconds, and Intel Optane does it in a ludicrous 4.5 seconds. On average I would expect the 4.0 to be about 20% faster when connected to PCIe 3, but as you can see some workloads are over 60% faster with the 4.0.

Looking at the SN750 comparisons in there, you might say it looks quite competitive. In some random read workloads it is much slower, up to 40%; in some workloads it matches or even beats the Rocket 4, but in others, such as PCMark, it lags far behind the Rocket 3. But there is one major problem: the SN750 has 1/3 the NAND endurance rating of the Rocket series, which may be fine for browsing Facebook, but not for running a database server and automation controller.

+1 for Proxmox on NUC

Wow. Thank you. I can’t argue with those stats. Have ordered the Sabrent, 2x8 RAM and 8i5. Hope to get started setting up over the weekend.

My thought is to run Proxmox and have a VM with HA.

Do I then use add-ons to extend the functionality? For the home automation I currently use Portainer, Mosquitto, Node-RED, InfluxDB, and Grafana in Docker on an RPi4. Can all of these be added to HA via the add-ons? Or should any of these be installed in a separate Docker host / VM?

I also note that several people suggest MariaDB. Should the DB be installed in the HA VM and if so through add-ons or alongside in a docker (or some other way)?

And, I would like to add a 2.5" HDD (maybe not an SSD) as a backup drive, like a small NAS. I guess this should be in a separate VM, maybe FreeNAS? Or I could install Samba on the Proxmox host and use it as a share?

Similarly, I may set-up docker on another VM for media management (Sonarr etc). I assume I just install Linux (Ubuntu) and Docker then run each in a container? Ditto for Plex.

When I write all of this, it becomes clear this may be a long weekend with long nights as I am not a Linux expert by any measure.

Any tips on setups that work / are recommended would be appreciated. I am not tied to Proxmox, or VMs, or Docker etc.; they just get lots of good feedback.

There are literally thousands of ways to combine VMs and containers. Proper resource management is critical to getting the setup to perform to expectations, and having additional resources available (memory, CPU cores, etc.) makes it a lot easier. You want the smallest number of VMs needed to get the job done, since each one requires a finite resource allocation. A VM also gives you a security barrier beyond what a container provides, but once again you want as few as possible.

This is fine. From an OS manageability standpoint I recommend Ubuntu for the virtual machines and thus HA in Docker. If you want HassOS you need to use Debian for one of the VMs, since Ubuntu is based on Debian there will be very little difference in shell commands, update intervals, and available software.

Within each VM I recommend Webmin for browser-based management beyond the shell, and having the same SSH public key inside each VM so you only need one private key for managing your network infrastructure. I have my key on a YubiKey; when I log in to a system I just tap it and I'm in.
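As a sketch of that one-key setup (the key path and VM hostnames below are placeholders, not from this thread):

```shell
# Generate a single ed25519 keypair to use across all VMs.
rm -f /tmp/demo_vm_key /tmp/demo_vm_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_vm_key -N "" -q

# Then push the same public key to each VM once
# (uncomment and adjust the hostnames and user):
# for host in net-services home-auto media; do
#     ssh-copy-id -i /tmp/demo_vm_key.pub admin@"$host"
# done
```

After that, a single `ssh -i /tmp/demo_vm_key admin@<vm>` (or an `IdentityFile` entry in `~/.ssh/config`) gets you into any of them.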

I would separate the VMs by function: network services/management, home automation, other services, and so on. The purpose is better resource management at the hypervisor level, better system management at the OS level, better network management at the Docker level, and several other reasons which usually only become clear after using the system for a long time. Each VM gets its own network IP address (routed config).

For network services: this includes Mosquitto, NTP, DNS, and Samba/NFS servers. This VM would not use Docker; it needs minimal RAM and CPU resources, but high priority and availability. You could also run these directly on the hypervisor, but putting things that need regular updates there can force reboots that take the whole system and all VMs down.
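If the services VM is the one exporting the file share, a minimal `smb.conf` section could look like this (the share name, path, and user are made up for illustration):

```
[backup]
    path = /srv/backup
    valid users = ha
    read only = no
    browseable = yes
```

Anything else on the network (including the other VMs) can then mount `\\<services-vm-ip>\backup` as the common backup location.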

For home automation: this includes HA, SQL, Node-RED, InfluxDB, and Grafana. Most items would run in their own Docker instances within a single network, all talking to each other inside Docker. This needs a lot of resources: databases like to eat RAM, and the multitude of services needs CPU.
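A `docker-compose.yml` sketch of that home-automation VM might look like the following; the images are the common public ones and the ports are their usual defaults, but treat the volume paths and tags as assumptions to verify:

```yaml
version: "3"
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    ports:
      - "8123:8123"
    volumes:
      - ./ha-config:/config
    restart: unless-stopped
  nodered:
    image: nodered/node-red
    ports:
      - "1880:1880"
    restart: unless-stopped
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
    volumes:
      - ./influxdb:/var/lib/influxdb
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
```

Compose puts all four on one Docker network, so the containers can reach each other by service name (e.g. Grafana's data source URL would be `http://influxdb:8086`).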

For other services, this includes Plex, video surveillance applications, Rhasspy, Duplicati, etc. This may be a mix of native programs and Docker, and will use more resources as services are added.

MariaDB is a fork of MySQL which has since diverged. I use MySQL, since that is what I prefer. Performance-tuning an SQL server can be a challenge, but it is easier to manage and back up if you have the tools and know what you are doing.
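For reference, pointing HA's recorder at an external MySQL/MariaDB server is a one-line change in `configuration.yaml`; the IP, credentials, and database name here are placeholders:

```yaml
recorder:
  db_url: mysql://hauser:password@192.168.1.10/homeassistant?charset=utf8mb4
```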

Learn from the mistakes of others, including myself, make backup a central part of this plan. Proxmox can replicate/backup the VMs to another system. Do this on a regular schedule, and test the backup process just as often. So many people find out their backup plan is crap only when their system takes a crap, or that it was not properly configured to backup the right data.

Backup the service configuration files of Proxmox and each VM to 2 locations on a daily basis.
One copy should stay on the system; a Samba or NFS share from the services VM is fine, as long as everything is in one easy-to-manage location. This can be a delta backup, so it only uses more space when something changes, and since it is config files only it is quite small, usually only a few MB. Use mysqldump to export the HA database and any other critical databases at the same time. The other copy can just be cloned to another system.
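A minimal daily backup script along those lines (the destination, the config paths, and the database name are all examples to adapt):

```shell
#!/bin/sh
# Daily config backup sketch.
BACKUP_DIR="${BACKUP_DIR:-/tmp/config-backups}"
STAMP=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"

# Archive a few config files; list the ones your services actually use.
tar -czf "$BACKUP_DIR/configs-$STAMP.tar.gz" -C / etc/hostname etc/hosts

# Export the HA database too ('homeassistant' is an assumed DB name):
# mysqldump homeassistant | gzip > "$BACKUP_DIR/ha-db-$STAMP.sql.gz"
```

Drop it in `/etc/cron.daily/` (or a systemd timer) on each VM and point `BACKUP_DIR` at the shared backup location.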

Replicate the entire VMs to a physically different local system every week. This can be considered a hot spare that can be replicated back to the NUC if something goes horribly wrong with Proxmox, or the drive fails, or if the NUC needs to be replaced due to damage.
Backup that system to an external drive once a month and keep that in a fire safe.
Try to restore from the external drive to a Proxmox VM on a laptop or other system to make sure it works before locking it up.
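On the Proxmox side, the weekly replication can be driven by its built-in `vzdump` tool; a crontab fragment along these lines would do it (VM ID 100 and the storage name `nas-backup` are assumptions, and Proxmox's own backup scheduler in the web UI is an alternative):

```
# /etc/cron.d/vm-backup - full VM backup every Sunday at 03:00
0 3 * * 0 root vzdump 100 --storage nas-backup --mode snapshot --compress zstd
```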

When you do weekly replication, upload the current daily backup to an offsite cloud backup provider such as Backblaze. Duplicati does this quite well and will encrypt it beforehand. Once the backup process is configured, it is pretty much set-and-forget unless you add more services or VMs.

Make sure the NUC is connected to a large battery backup, 1500VA or larger. I would suggest having all your network infrastructure together and on the same battery unit; however, if you have substantial network hardware you need a substantial UPS. Mine weighs as much as I do and can run the entire network plus several PoE cameras and the house server for hours.

Keep a written log of the entire setup process if you can. It makes it way easier to find mistakes, and to replicate your work far quicker if needed. I had to do my server setup three times from scratch, but by the third time it went from days to minutes, since I could just copy/paste shell commands from the log, and looking it over was a great learning tool.

Thank you. That is a lot to think about.

I have sketched out how I will set things up initially:

Just to follow up:

  • HassOS - I will install via the whiskerz007 script. I think this installs the OS, Docker, HA, etc.?
  • Database for HA - Does HA need a separate database? If so (or if recommended), in which VM in the image above should it be installed?
  • Ubuntu - any recommendations re 18.04 vs 20.04? And 32- vs 64-bit?
  • MQTT - if Mosquitto is on another VM, I assume communication between the VMs and into Docker is via open ports?
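On the MQTT point: letting other VMs and containers connect over the network is a listener setting in `mosquitto.conf`. This fragment is the minimal (unauthenticated) form, so add a password file or TLS for anything beyond a quick test:

```
# mosquitto.conf - accept connections from other hosts on the standard port
listener 1883
allow_anonymous true
```

With that in place, clients on other VMs just point at the broker VM's IP on port 1883.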

Tips on keeping a log noted. I will post any snags / sticking points / mitigations to this forum as I find them. Hopefully to help others on the same journey.

Nice initial thoughts.
A few comments :

  • HA database is perfectly fine on HassOS.
  • I would not recommend InfluxDB and Grafana on HassOS, however. You have a Proxmox server and some VMs, so plenty of space for any container. If you use InfluxDB/Grafana on HassOS you will hit some limitations. For instance, you won't be able to store the system statistics automatically generated by Proxmox in that database (I don't remember exactly why; some UDP problem...).
  • Mosquitto out of HassOS? Why not, but why? Will you use it for something other than HA?

Did you consider your network architecture as well? If you use several VLANs, try to put the VMs/services that talk to each other often in the same VLAN.

Why run multiple VMs? On a NUC I would rather run Debian and have everything else dockerized; the main OS will then distribute the memory and processing power better. Anyway, for HA just run the supervised version, which gives you the Supervisor, while everything else can be managed outside it in Docker via Portainer.

In my case, running an RPi 4 + Argon ONE M.2 + 128GB M.2 drive + Debian, the result so far has been amazing. The RPi 4 performs 100% better while staying below 50°C, without the Argon ONE's active fan ever kicking in.

How would you sync the two devices? Through MQTT?

I agree. It makes more sense to install Grafana / InfluxDB outside of HassOS.

Also agree that MQTT should be inside. I have installed via the HA supervisor along with Node-RED.


I now want to install Webmin on HassOS, but outside of Docker (so on the HassOS/Debian OS as above). But how does one do this inside Proxmox? How do you SSH into the OS, outside of Docker, to do the install? And I assume I need to add sudo / apt / other tools first, as the OS is very minimal. Is that correct?

I run everything in LXC containers; they use very few resources.