I thought I’d document my Raspberry Pi to NUC journey in case it provides any help to others. I’ll provide some links and resources along the way, but this isn’t a how-to guide, as I was able to get it all working with some well-placed Google searches and, frankly, why would I want to ruin the joy of discovery for anyone else?!!
Like probably most of us, I started out running Home Assistant on a Pi3b+. A while back, my solution to SD Card death was to get a second Pi3b+ running Ubuntu 18.04, booting directly from a SanDisk Extreme PRO USB stick and hosting a MySQL database server. It worked fine, but the main device for automations and everything else was still a Pi3b+, and as I added more and more to it, well… its days were numbered.
Enter the NUC. I wanted something small enough to fit in the tech “cans” in the laundry room where all of the other home-tech is located. Airflow isn’t awesome, but the space is big enough that as long as the device can move its own heat, it should dissipate well enough. I finally settled on a NUC8i5BEH (Tall) with 16GB of RAM and a 250GB Samsung EVO Plus SSD. The i5 processor seemed like a good balance between performance and heat, given the small-ish space it would operate in. I chose the Tall version for whatever boost it might offer in thermal management. As for the disk, I’ve always had good luck with Samsung SSDs, so that seemed the natural choice.
With the hardware selection out of the way, it’s on to the OS…
I really wanted something I could have a little more control over, but I also really like the ease of managing the Supervised HassOS. So, again, I went with what I know – Ubuntu Linux. In this case, the native “bare metal” OS is Ubuntu Server 20.04 LTS with KVM installed for the hypervisor. I felt this choice would give the most flexibility compared to running HassOS directly on the hardware, while still leaving enough additional capability to adapt the server to future needs. A few key benefits as I see them to running HassOS inside a virtual machine:
- Ease of maintenance since “Everything just works” in HassOS – no hunting for drivers or Python modules. Everything I’ve ever tried to do in HA so far has “just worked.”
- Flexible hardware resource management – HassOS needs more memory? No problem. More disk? Easy peasy. More CPUs? Done.
- Easy direct access to the HassOS console – for those troubleshooting scenarios where you need to get access to the Home Assistant Core Docker container. Sure there’s the HassOS SSH debug interface, but with `virt-viewer` I’m able to connect to the “native” HassOS console even if the HassOS guest network is down.
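For that last scenario, the connection is a one-liner from my desktop. A minimal sketch – the user, host, and domain names are placeholders for my setup:

```bash
# Open the HassOS guest's console over SSH to the KVM host –
# works even when the guest has no working network
virt-viewer --connect qemu+ssh://me@nuc/system hassos
```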
Resources:
- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/index – this was very helpful to get some of the KVM concepts and a feel for working with `virsh` to manage KVM.
- https://help.ubuntu.com/community/KVM/Networking#Bridged_Networking – some Ubuntu specifics, especially for getting a bridged interface up and running so HassOS is on the same VLAN as everything else it’s controlling.
- https://www.tecmint.com/install-and-configure-kvm-in-linux/ - Another “how to get started with KVM”. I really didn’t want to be reliant solely on GUI-based KVM administration, so I paid special attention to the fourth part – using the CLI. Combined with the Redhat guide above, I started to get the hang of it.
- https://linuxconfig.org/how-to-create-and-manage-kvm-virtual-machines-from-cli – one more resource on managing KVM from the CLI.
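One note from my own setup on the bridged networking piece: on Ubuntu Server 20.04 it comes down to a small netplan file. A sketch, assuming the NIC is `eno1` (adjust the interface and file names to whatever your box uses):

```bash
# Put the physical NIC into a bridge (br0) that the HassOS guest attaches to
sudo tee /etc/netplan/01-br0.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: true
EOF
sudo netplan apply
```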
HassOS
After getting Ubuntu and KVM ready, getting HassOS installed as a guest OS was as easy as following the instructions on the https://www.home-assistant.io/hassio/installation/ page. Again, another choice to make, but my guiding principle is future flexibility. Let me set this up first: the downloaded HassOS QCOW2 disk file gives about 6GB of drive space to the HassOS guest. That’s not enough. How should I give it more?
- Just “grow” the HassOS disk file?
- Make a second disk file?
I went with “make a second disk file” attached to a second drive controller. This gave me “sda” and “sdb” storage devices in HassOS, and it puts all of my data on a disk file that is distinct from the HassOS one in case I need to blow the OS away for some reason. I then used the HassOS `datactl` tool as described here: https://github.com/home-assistant/operating-system/blob/1c991c229db3ae8935ede028361bc6274159eb4c/Documentation/partition.md
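The host-side steps looked roughly like this – the image path, size, and the domain name `hassos` are just what I used, so adjust to taste:

```bash
# Create a second qcow2 image to hold the Home Assistant data partition
qemu-img create -f qcow2 /var/lib/libvirt/images/hassos-data.qcow2 20G

# Attach it to the guest as a second SCSI disk and persist it in the domain config
virsh attach-disk hassos /var/lib/libvirt/images/hassos-data.qcow2 sdb \
  --driver qemu --subdriver qcow2 --targetbus scsi --persistent

# From the HassOS console, datactl then moves the data partition onto the new disk
# (the partition.md doc linked above walks through that part)
```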
The view from the KVM host:
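Something along these lines, assuming the guest domain is named `hassos`:

```bash
# Both disk images should show up as sda and sdb targets on the guest
virsh domblklist hassos
```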
And from inside HassOS:
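From the 22222 SSH session (or `virt-viewer`), a quick sanity check along the lines of:

```bash
# Both sda and sdb should be visible to the guest
cat /proc/partitions
```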
Here’s the summary on a few other points:
- I purposely undersized my HassOS – memory, CPU and disk. Then I could test my newfound skills by “upgrading” it in place. I started with a 20GB data partition and followed one of several good guides out there on how to “grow” the disk file used by the guest OS (there’s a sketch of the commands after this list). Same with CPU and RAM.
- I now have 4 vCPUs, 6GB of RAM and a 50GB data partition assigned to HassOS
- I have flexibility in connection options to manage HassOS
- SSH terminal in the HA frontend (courtesy of the “Terminal and SSH” add-on)
- SSH on 2222 to the “Terminal and SSH” container
- SSH on 22222 to the native HassOS (not in some container).
- `virt-viewer` for when things get really bad.
- Home Assistant’s data partition is isolated from the OS
- I worked on the following KVM skills:
- Dynamically changing hardware resources such as RAM, vCPU and storage
- Starting and shutting down guest instances from the CLI
- Temporarily attaching and detaching USB devices from the CLI
- Permanently attaching USB devices (especially the ZWave controller stick) to the HassOS guest config (sketched after this list)
- Building configuration snippets and “known-good” configuration templates
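For the resource changes, the commands ended up being roughly the following – a sketch against my domain name and image path, with the guest shut down to keep things simple:

```bash
# Grow the data disk image; HassOS should expand the data partition to fill it
qemu-img resize /var/lib/libvirt/images/hassos-data.qcow2 50G

# Bump memory: raise the maximum first, then the current allocation
virsh setmaxmem hassos 6G --config
virsh setmem hassos 6G --config

# Bump vCPUs: maximum first, then the active count
virsh setvcpus hassos 4 --config --maximum
virsh setvcpus hassos 4 --config

virsh start hassos
```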
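And for permanently handing the ZWave stick to the guest, something like this – the vendor/product IDs below are only an example, so substitute whatever `lsusb` reports for your stick:

```bash
# Describe the stick by its USB vendor/product ID
cat > zstick.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0658'/>
    <product id='0x0200'/>
  </source>
</hostdev>
EOF

# Attach it to the running guest, and persist it in the domain config
virsh attach-device hassos zstick.xml --live
virsh attach-device hassos zstick.xml --config

# virsh detach-device works the same way when the stick needs to move
```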
Where’s my data?
As I mentioned, I had a dedicated Pi just for MySQL to avoid SD Card death. In my new setup, I decided to run MariaDB natively on Ubuntu Server 20.04. Sure I could install the MariaDB Docker container under HassOS, but I felt like there might be some other DB needs in my future – maybe I’ll do some ZoneMinder when my current camera NVR dies, maybe I’ll do some other things. Anyway, I wanted the DB server as close to bare metal and as accessible as possible, not buried in a Docker container inside of HassOS.
Once I got the database server up and running, moving all of my data over was as easy as a Google search to turn me onto `mysqldump`: https://stackoverflow.com/questions/6283301/move-mysql-database-to-a-new-server
I told my wife, “OK, Home Assistant is gonna be down for a while.” She didn’t much care. I shut down HA Core but left the Supervisor and the Host running. Then I moved the database, and this was the easiest part so far. And in case anyone’s wondering: MariaDB is a drop-in replacement for MySQL, so the dump from MySQL imported flawlessly into MariaDB.
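The dump-and-restore itself is a one-liner on each side. A sketch with placeholder hostnames and credentials, assuming the `homeassistant` database and user already exist on the MariaDB side:

```bash
# On the old database Pi: dump the Home Assistant recorder database and ship it over
mysqldump -u homeassistant -p homeassistant > hass_db.sql
scp hass_db.sql me@nuc:/tmp/

# On the NUC: load the dump into MariaDB
mysql -u homeassistant -p homeassistant < /tmp/hass_db.sql
```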
Time to move HA
I don’t know if it was necessary or if the restore would have taken care of it, but before I did the move, I set up my additional HA Add-ons, consisting of:
- Check Home Assistant configuration
- File Editor
- Node-RED
- Terminal & SSH
I made and downloaded a full snapshot backup from the old instance. See https://blog.plee.me/2020/05/uploading-a-snapshot-to-a-fresh-home-assistant-os-instance/ for some tips, but this method proved to be too much like work.
“All ya gotta do is”…
- Download the file from the old instance
- SCP it to the new one in the correct folder
- Issue the `ha snapshot reload` command through the command line
- Restore from Snapshot
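In concrete terms it was roughly this – the 2222 port and `/backup` path match the “Terminal & SSH” add-on setup above, and the snapshot filename and hostname are just illustrative:

```bash
# Copy the snapshot from wherever it was downloaded into the new instance's backup folder
scp -P 2222 ab12cd34.tar root@new-ha:/backup/

# From an SSH session on the new instance, make the Supervisor notice the file,
# then restore it from the frontend
ha snapshot reload
```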
Most everything worked! I was surprised; I even told my wife “OK… this is the first attempt. We’ll see how it goes!” Hue lights, cloud-based integrations, Wemo switches. Everything!
I did have a small bit of trouble on the following points:
- The old database server was still referenced in the `recorder` configuration. This caused problems with `recorder`, `logbook`, `history` and `default_config` when booting up.
- MariaDB’s default configuration is to listen only on `localhost`, which took me about 5 minutes to figure out and then reconfigure (see the snippet after this list).
- I was still on a “new” IP address, so a few IP-based integrations such as Roomba didn’t work.
- I still hadn’t connected the ZWave stick
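Both database fixes were quick. A sketch – the config path is where Ubuntu’s MariaDB package puts it, and the `db_url` values are placeholders:

```bash
# Let MariaDB listen on the LAN instead of just localhost, then restart it
sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
sudo systemctl restart mariadb

# On the Home Assistant side, recorder's db_url just needs to point at the NUC, e.g.
#   db_url: mysql://hass:PASSWORD@<nuc-ip>/homeassistant?charset=utf8mb4
```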
When I switched over the IP address to the one used by the old HA server, Roomba started working right away. When I added the ZWave stick to the HassOS guest VM and rebooted HA, all of my ZWave stuff started working. All of my Node-RED stuff is still in place. In fact, I have so much free time today that I could even write this up for … like … the 3 people who might find this interesting!
What am I gonna do with a couple of Pi3b+’s? Probably this: https://opensource.com/article/20/6/kubernetes-raspberry-pi. Why? Why not?