Which way to go - NUC - Home Assistant

Most of my links are in French… but this English one is very comprehensive for a Proxmox + HA installation:
Installing Home Assistant using Proxmox - Community Guides - Home Assistant Community (home-assistant.io)

You should know that the Intel NUC's built-in Bluetooth usually does not work in Proxmox. I managed, after a fashion, to compile the Linux driver for it, but it did not work well inside the HA VM and the fix was lost with later kernel updates. So I recommend paying a few bucks for a Bluetooth USB dongle.

On my Intel NUC, I also had to work on the SD card reader driver, but it has been fine since then (useful for my strictly local backups).

A reverse proxy may be beyond what you need, and it is pointless if you use Nabu Casa. In my case it is useful because I plan to run several services, each one accessible from outside through a different subdomain address (and, behind that, each on a different local IP). I don't know of a comprehensive tutorial for this. You might look at these steps:

  • [coming later]

I've been reading and found that people had problems with the USB Z-Stick and ConBee when moving from an RPi to a NUC.
So my question: will the Z-Stick Gen 5 and ConBee II work in a VM or on Debian? Is it just plug in and maybe change the hardware device path, or is it much more complicated?

Sorry, not really an answer to your question, but here is a workaround:

I don't have that problem with my Zigbee devices, because I use a ZiGate dongle that comes with a WiFi module, so the dongle is seen on its own local IP, much like a Philips Hue bridge. That works great with my Home Assistant virtualized on my Intel NUC.

I recommend the NUC8i5BEH; you will have room for another drive just to store camera footage

Thanks for the idea.

I do have a Synology NAS (not one that can run Docker), which I plan to use for storing camera footage.

So Z-stick/Conbee: VM or Debian?

THE PRINCIPLE

Your home has only one public (IP) address.
The domain provider of your choice will forward every connection to any of your subdomains exclusively to your proxy machine.
Your proxy machine will distribute the connections to each service installed at your home, according to its subdomain address (of your choice) and the local IP of the machine running it (make these local IPs static, since you will have to write them into the reverse proxy configuration files).
Since the reverse proxy machine handles every incoming connection, a wildcard SSL certificate authenticates all the subdomains of your domain at once, so every connection from outside to any of your subdomains/service machines is recognized as a proper HTTPS connection.

THE QUITE GENERIC REVERSE PROXY PART

I can publish my config files if there is a demand.
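
In the meantime, here is a minimal sketch of the kind of server block the proxy ends up with, one per subdomain. The subdomain, backend local IP, and certificate paths below are placeholders, not my actual config:

```bash
# Minimal sketch only - hypothetical subdomain, local IP and paths; adapt to your setup.
# One such server block per subdomain, all sharing the same wildcard certificate.
cat > /etc/nginx/sites-available/ha.example.com <<'EOF'
server {
    listen 443 ssl;
    server_name ha.example.com;                      # subdomain of your choice

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;   # wildcard cert
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.20:8123;         # static local IP of the service machine
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;      # Home Assistant needs websockets
        proxy_set_header Connection "upgrade";
    }
}
EOF
ln -s /etc/nginx/sites-available/ha.example.com /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```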

SSL CERTIFICATE PART: MANAGING ONE WILDCARD CERTIFICATE FOR MULTIPLE OVH SUBDOMAINS => SERVICES AT ONCE, INSIDE THE PROXY SERVICE

  • I own a domain from OVH for 2€/yr. I can't say what to do with other providers… but the principle should be the same, and Google is full of dedicated tutorials.

  • I chose a “DNS challenge” for the certificate, which goes through the provider's DNS with the help of the provider's API, so that no port has to be left open for the periodic challenge check (see the sketch after this list).

  • Here is a quite comprehensive article on installing the SSL certificate on the Nginx host:
    Get a Let’s Encrypt Wildcard Certificate (florianjensen.com)

  • There are some steps on the OVH side to handle a potentially dynamic IP from your internet provider, but I only have a French link for this:
    Paramétrer un DNS dynamique pour son nom de domaine | Documentation OVH
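
To make the DNS-challenge point above concrete, here is a hedged sketch using certbot with its OVH DNS plugin; the domain, credential values, and propagation delay are assumptions, and other ACME clients can do the same job:

```bash
# Sketch, assuming the certbot-dns-ovh plugin and OVH API credentials.
# example.com and all credential values are placeholders.
apt install certbot python3-certbot-dns-ovh

cat > /root/.ovh.ini <<'EOF'
dns_ovh_endpoint = ovh-eu
dns_ovh_application_key = <your-application-key>
dns_ovh_application_secret = <your-application-secret>
dns_ovh_consumer_key = <your-consumer-key>
EOF
chmod 600 /root/.ovh.ini

# Wildcard + apex in one certificate; no inbound port has to stay open for renewals.
certbot certonly --dns-ovh --dns-ovh-credentials /root/.ovh.ini \
  --dns-ovh-propagation-seconds 60 \
  -d "example.com" -d "*.example.com"
```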

Comments on why I did not go for a simpler configuration:

  1. The way I do it, the SSL certificate (managed in the reverse proxy container) works for every service/subdomain/machine I use, without ever needing to configure anything on those service machines.
  2. One may notice that Hass.io has an Nginx add-on. Nevertheless, I wanted a more independent and flexible installation of the proxy service, with full control of the OS that hosts it, and one managing several services at once rather than only Home Assistant access.
  3. One may notice that Hass.io has a Let's Encrypt add-on. Nevertheless, I wanted my SSL certificate to cover all my services under as many subdomains as I want, so again I opted for a more independent and versatile option.
  4. Nabu Casa handles every aspect of external access to Home Assistant very well (domain, SSL, Google Assistant), but again I wanted to build a multi-subdomain solution for multiple services.
  5. One may think that Nginx could simply be installed on the Proxmox host OS. That's not wrong, but:
    i. firstly, the Proxmox host OS is quite sensitive to additional installs (I crashed it when attempting other compilations in it), so it is recommended to leave it alone… and there is no real gain in installing Nginx on the host rather than in a dedicated container (a CT shares its resources with the host anyway);
    ii. secondly, I would not be able to easily separate the Proxmox service and the Nginx service when accessing them from outside if both were installed on the same machine/IP.
  6. As soon as one personal subdomain + SSL certificate is operational, Nabu Casa is not even necessary for the Google Assistant integration, as explained in the Home Assistant documentation.

Depending on your use, those simpler solutions might suit you better than mine. Anyway, you have the choice now :grinning:

I would repurpose the RPi as a Z-Wave/Zigbee-to-network gateway and place it where it would get the best wireless coverage and the least WiFi interference, then put the NUC with the network infrastructure.

Did you give any thought to running it as a separate VM rather than as a CT? I've read that some have security concerns about CTs and recommend a separate VM as more secure, but is that needless overprotection?

I read about this and considered it. Indeed, the installation would be exactly the same with a VM. But I only have an Intel NUC with a Pentium CPU and 8 GB of RAM, so I allocate its resources carefully. Moreover, it seems to me more a general security remark than a serious weakness; I did not implement the strongest protection (medium, I would say) on my overall installation anyway; and there are other, more useful security measures I would take before this one, should I want to raise my security level…

I have my Home Assistant running in Docker on Lubuntu.

Regarding SSL: I run Caddy on my (Windows) file/media server as a reverse proxy for HA and several other services. I then simply run HA over plain HTTP inside the local network.

Okay, thanks for all the input, guys.

I set up VirtualBox on my laptop, tried it out for a bit, and got the Z-Stick to work.
I will also try out Proxmox later when I get my NUC and see which solution I will go for.

Any more input on the Z-Stick and ConBee? Do they work without problems on VirtualBox and Proxmox?

I recommend the NUC8i5BEH; you will have room for another drive just to store camera footage

Hello - I am also looking to set up a more powerful option than an RPi4.

Is this still a good option for an HA install on Proxmox? I will also run Node-RED, InfluxDB, Grafana, maybe Pi-hole, and use it as a basic file share.

If NUC8i5BEH (or another option?), what RAM and SSD are recommended?

Thanks in advance for any working tips / ideas.

I use an HP Gen8 MicroServer with TrueNAS. I run HA, MQTT, Grafana, TasmoAdmin, Node-RED and a couple of others as plugins. TrueNAS is open source, as are all of the others, so the only cost is hardware. Yes, it is more expensive, but you get the security of commercial-grade software which uses ZFS software RAID spread over a maximum of four disks. In my case, four terabytes of storage.
All that software does need 16 GB of RAM and a Xeon CPU, but both are quite cheap now. The HP is server-grade quality hardware.

The 8i5 is still an excellent choice. If you are running multiple DBs and value good DB performance, get a Sabrent Rocket 4 1TB. If you want reliability at the expense of performance, a Samsung 970 Pro 1TB.

For SSD reliability with database use, never use more than half of its capacity. If you need more space for file storage, make use of the SATA slot for an additional drive; the Samsung 860 Pro 2TB is the best choice there. For write-once-read-many data you can go up to 90% usage; for constant writes (like video surveillance), 80%. The 870 Evo 4TB gives you more space at a similar price, but with a lower endurance and reliability rating.

If you need more than 4TB for file storage, you should not be using a NUC; you should use something with a RAID controller or a computer that can do mirrored drives.

For RAM I suggest the CT2K8G4SFRA266 (a Crucial 2×8 GB DDR4-2666 SODIMM kit).

Thank you for the tip. I have ordered an 8i5 and will get the 2x8 RAM.

I'm still getting to grips with recent specs (it's been a few years since my last build). Just looking at the SSD: the NUC8i5's M.2 slot is PCIe Gen 3. The Rocket 4 looks great, but won't the PCIe 3 link limit its speed? So, would the lower-spec SSD make sense? Or is the Rocket 4.0 just a solid performer regardless? There is a c.40% increase in cost for the 4.0 versions.

Or am I misunderstanding the specs? Any thoughts?

Thank you again

To compare:

1TB Sabrent Rocket NVMe PCIe Gen3 x4 (3,400 MB/s) at £120
1TB Sabrent Rocket NVMe PCIe Gen4 x4 (5,000 MB/s) at £160

Or, 1TB WD_BLACK SN750 Gen3 x 4 at £110

Does the Sabrent PCIe 4.0 have any benefit in the NUC8i5 over lesser drives? I understand PCIe 4.0 is backwards compatible so I know the SSD will work. But, is it worth the extra cost given the NUC is limited to Gen3 speeds?

I ask because I am interested and am struggling to find any advice on this or other forums.

The speed will indeed be limited in max sequential performance by the host bus interface. But. The Gen 4 has better performance regardless of the host connection; it is worth the cost difference.

When dealing with databases, you will never hit anything close to host bus speed; what you want is low latency and responsiveness. You want random write latency to be as low as possible. The 4.0 has better internal algorithms and performance when converting the SLC cache to final NAND writes. The 20% faster controller-to-NAND interface in the 4.0 also leads to superior garbage collection performance.

For reliability, the 4.0 also has a better error correction and detection algorithm, and generally has higher binned NAND chips.

The controllers in the 3 and the 4 are actually almost identical, both 28 nm dual-core ARM chips, but there are so many little differences in the firmware and all the “better” parts that it adds up… substantially. Here are some real-world performance examples:

WDC SN750 1TB:
Firefox Compile Write Speed: 165.2 MB/s
Drive-to-drive copy: 1224 MB/s

Sabrent Rocket 1TB:
Firefox Compile Write Speed: 125.4 MB/s
Drive-to-drive copy: 872 MB/s

Sabrent Rocket 4 1TB:
Firefox Compile Write Speed: 217.2 MB/s
Drive-to-drive copy: 1410 MB/s
SQLite 8-thread: 42.4s

As you can see there is indeed a substantial difference in performance, and those numbers get nowhere near the host bus speed limit. I do not have an SQLite benchmark for the non-4, but for comparison a Samsung 970 Pro does it in 173 seconds, and Intel Optane does it in a ludicrous 4.5 seconds. I would expect the 4.0 to be about 20% faster on average when connected to PCIe 3, but as you can see some workloads are over 60% faster with the 4.0.

Looking at the SN750 comparisons in there, you might say it looks quite competitive. In some random read workloads it is much slower, up to 40%; in some workloads it matches or even beats the Rocket 4, but in others, such as PCMark, it lags far behind the Rocket 3. But there is one major problem: the SN750 has one third the NAND endurance rating of the Rocket series, which may be fine for browsing Facebook, but not for running a database server and automation controller.

+1 for Proxmox on NUC

Wow. Thank you. I can’t argue with those stats. Have ordered the Sabrent, 2x8 RAM and 8i5. Hope to get started setting up over the weekend.

My thought is to run Proxmox and have a VM with HA.

Do I then use add-ons to extend the functionality? For the home automation I currently use Portainer, Mosquitto, Node-RED, InfluxDB, and Grafana in Docker on an RPi4. Can all of these be added to HA via add-ons? Or should any of these be installed in a separate Docker instance / VM?

I also note that several people suggest MariaDB. Should the DB be installed in the HA VM, and if so, through an add-on or alongside it in Docker (or some other way)?

And I would like to add a 2.5" HDD (maybe not an SSD) as a backup drive, like a small NAS. I guess this should be in a separate VM, maybe FreeNAS? Or I could install Samba on the Proxmox host and use it as a share?

Similarly, I may set up Docker on another VM for media management (Sonarr etc.). I assume I just install Linux (Ubuntu) and Docker, then run each in a container? Ditto for Plex.

When I write all of this, it becomes clear this may be a long weekend with long nights as I am not a Linux expert by any measure.

Any tips on setups that work / are recommended would be appreciated. I am not tied to Proxmox, or VMs, or Docker etc.; they just get lots of good feedback.

There are literally thousands of combinations of ways to set up the VMs and containers. Proper resource management is critical to getting it to perform to expectations. Having additional resources available (memory, CPU cores, etc.) makes it a lot easier. You want the smallest number of VMs needed to get the job done, since each one requires a finite resource allocation. There is also a security barrier that a VM gives you beyond a container, but once again you want the smallest number.

This is fine. From an OS manageability standpoint I recommend Ubuntu for the virtual machines, and thus HA in Docker. If you want HassOS you need to use Debian for one of the VMs; since Ubuntu is based on Debian, there will be very little difference in shell commands, update intervals, and available software.

Within each VM I recommend Webmin for browser-based management beyond the shell, and having the same SSH public key inside each VM so you only need one private key for managing your network infrastructure. I have my key on a YubiKey; when I log in to a system I just need to tap it and I'm in.
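
For the shared-key part, a quick sketch (the key name and VM IPs are made up):

```bash
# Generate one key pair on the admin machine, then push the public key to every VM
# so a single private key opens all of them.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "homelab-admin"
for vm in 192.168.1.21 192.168.1.22 192.168.1.23; do   # hypothetical VM IPs
    ssh-copy-id -i ~/.ssh/id_ed25519.pub user@"$vm"
done
```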

I would separate the VMs by “function”; by that I mean network services/management, home automation, other services, and so on. The purpose is better resource management at the hypervisor level, better system management at the OS level, better network management at the Docker level, and several other benefits which usually only become clear after using the system for a long time. Each VM gets its own network IP address (routed config).
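
As a rough illustration of that split on Proxmox (not a prescription; the VM IDs, names, and resource sizes below are invented, and you could just as well create these from the web UI):

```bash
# Hypothetical example: one VM per "function", each with its own NIC on the
# default bridge so it gets its own IP on the network.
qm create 101 --name net-services  --memory 1024 --cores 1 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:16 --ostype l26
qm create 102 --name home-auto     --memory 6144 --cores 3 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:64 --ostype l26
qm create 103 --name misc-services --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:64 --ostype l26
```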

For network services, this includes Mosquitto, NTP, DNS, and Samba/NFS servers. This VM would not use Docker; it needs minimal RAM and CPU resources, but high priority and availability. You could also run these on the hypervisor, but putting things that need regular updates there can force reboots that take the whole system and all VMs down.

For home automation, this includes HA, SQL, Node-RED, Influx, and Grafana. This would have most items in their own Docker containers within a single network, all talking to each other inside Docker. This needs a lot of resources: databases like to eat RAM, and the multitude of services needs CPU.
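
A minimal sketch of that shared Docker network with a few of the services named above (volume paths and ports are placeholders; Home Assistant is often run with host networking instead, for device discovery):

```bash
# Hypothetical example: one user-defined network so containers reach each other
# by name (e.g. Grafana or HA can talk to influxdb:8086).
docker network create home-auto

docker run -d --name influxdb --network home-auto \
  -v /opt/influxdb:/var/lib/influxdb influxdb:1.8

docker run -d --name grafana --network home-auto -p 3000:3000 \
  -v /opt/grafana:/var/lib/grafana grafana/grafana

docker run -d --name homeassistant --network home-auto -p 8123:8123 \
  -v /opt/homeassistant:/config ghcr.io/home-assistant/home-assistant:stable
```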

For other services, this includes Plex, video surveillance applications, Rhasspy, Duplicati, etc. This may be a mix of native programs and Docker, and will use more resources as services are added.

MariaDB is a fork of MySQL which has since diverged. I use MySQL, since that is what I prefer. Performance-tuning an SQL server can be a challenge, but it is easier to manage and back up if you have the tools and know what you are doing.
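
If you go the MySQL route for the recorder, a minimal sketch of preparing the database (database name, user, password, and IP are placeholders):

```bash
# Hypothetical example: dedicated database and user for Home Assistant's recorder.
apt install mysql-server
mysql -u root <<'EOF'
CREATE DATABASE homeassistant CHARACTER SET utf8mb4;
CREATE USER 'hass'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON homeassistant.* TO 'hass'@'%';
FLUSH PRIVILEGES;
EOF

# Then point Home Assistant's recorder at it in configuration.yaml:
# recorder:
#   db_url: mysql://hass:change-me@192.168.1.22/homeassistant?charset=utf8mb4
```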

Learn from the mistakes of others, including mine: make backup a central part of this plan. Proxmox can replicate/back up the VMs to another system. Do this on a regular schedule, and test the backup process just as often. So many people find out only when their system takes a crap that their backup plan is crap, or that it was not properly configured to back up the right data.

Back up the service configuration files of Proxmox and each VM to two locations on a daily basis.
One should be on the system; a Samba or NFS share from the services VM is fine, as long as everything is in one easy-to-manage location. This can be a delta backup, so it only uses more space when something changes, and since it is config files only it is quite small, usually only a few MB. Use mysqldump to export the HA database and any other critical databases at the same time. The other copy can just be cloned to another system.
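
A rough sketch of such a daily job, assuming a share mounted at /mnt/backup and the MySQL setup sketched earlier (all paths and names are placeholders):

```bash
#!/bin/bash
# Hypothetical daily backup: sync config files and dump the HA database to a share.
set -euo pipefail
DEST=/mnt/backup/$(hostname)
mkdir -p "$DEST"

rsync -a --delete /etc/ "$DEST/etc/"                         # service config files (delta copy)
mysqldump --single-transaction homeassistant \
  | gzip > "$DEST/homeassistant-$(date +%F).sql.gz"          # HA database export

# Run it from cron, e.g.:
# 0 3 * * * /usr/local/bin/daily-backup.sh
```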

Replicate the full VMs to a physically different local system every week. This can be considered a hot spare that can be replicated back to the NUC if something goes horribly wrong with Proxmox, if the drive fails, or if the NUC needs to be replaced due to damage.
Back up that system to an external drive once a month and keep that drive in a fire safe.
Try restoring from the external drive to a Proxmox VM on a laptop or other system to make sure it works before locking it up.
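
For the weekly copy, one way to do it from the Proxmox host is vzdump to a storage that lives on the second machine (the storage name and VM IDs are placeholders; Proxmox's built-in replication or Proxmox Backup Server are alternatives):

```bash
# Hypothetical weekly job: snapshot-mode backups of the VMs to a storage
# ("backup-nas") that points at the spare machine.
vzdump 101 102 103 --mode snapshot --compress zstd --storage backup-nas

# To test a restore on a laptop or the spare, use qmrestore on the archive:
# qmrestore /path/to/vzdump-qemu-102-<timestamp>.vma.zst 102
```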

When you do the weekly replication, upload the current daily backup to an offsite cloud backup provider such as Backblaze. Duplicati does this quite well and will encrypt it beforehand. Once the backup process is configured, it is pretty much set-and-forget unless you add more services or VMs.

Make sure the NUC is connected to a large battery backup, 1500 VA or larger. I would suggest having all your network infrastructure together and on the same battery unit; however, if you have substantial network hardware you need a substantial UPS. Mine weighs as much as I do and can run the entire network plus several PoE cameras and the house server for hours.

Keep a written log of the entire setup process if you can. It makes it way easier to find mistakes, and to replicate your work far more quickly if needed. I had to do my server setup three times, each time starting from scratch, but by the third time it went from days to minutes, since I could just copy/paste shell commands from the log, and looking it over was a great learning tool.