I wanted to do the above because I worry about SD cards failing, and part of my reasoning for moving away from a VM on a NAS to a Pi 4 was to get a bit more stability (ironically).
I just cannot get it to work. I can get the Pi 4 to boot from hard disk using a standard Pi OS image; that works fine. My challenge comes when I try to boot from a Home Assistant image. It transfers to the disk with balenaEtcher fine, but when the Pi boots it goes round in a loop saying “Unable to read partition as Fat”.
I have tried a smaller SSD and I have tried a different SSD-to-USB converter (though I suspect the fact I can do this with Pi OS makes that a moot point). I am using the 64-bit image, as I noticed the recommended 32-bit one did not seem to support boot from USB, and the Pi is on the firmware released 3rd September 2020.
This may not answer your question, but I'm just sharing what I'm going to be working on. I've tested it and it works. I'm installing Raspbian headless on the Pi 4 and then configuring the EEPROM to allow SSD boot. Once that's done, I'm going to run Docker Swarm to create a Docker cluster. I plan to run the Docker images of HA and Node-RED. These containers have worked very, very well for me on my Linux desktop. It's time to move them off my desktop and onto a Pi 4 cluster.
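For reference, the EEPROM change I mean is roughly this. It's a sketch based on the official bootloader docs, not a full guide, and it assumes the September 2020 (or later) EEPROM release is installed:

```shell
# Check the currently installed bootloader/EEPROM version.
sudo rpi-eeprom-update

# Edit the EEPROM config. Set BOOT_ORDER=0xf41 to try the SD card first,
# then USB mass storage, then loop (0xf14 prefers USB over SD).
# The digits are read right to left: 1 = SD, 4 = USB MSD, f = restart.
sudo -E rpi-eeprom-config --edit
```

After saving, reboot the Pi for the new boot order to take effect.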
The reason I'm using Docker is that it's very, very easy to port to any system and to back up. I mention this because at least you can boot from SSD this way, and you can have your HA Docker container starting and working automatically after the Pi 4 boots. I used Hass.io before but love the portability of Docker. My plan is to have failover, and that's a little easier to achieve (for me) with containers.
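In case it's useful, the "starts automatically after boot" part is just the container's restart policy. Something like this, per the official container install docs (the config path and timezone here are placeholders for your own values):

```shell
# Run Home Assistant as a container that restarts automatically
# whenever Docker (and therefore the Pi) comes back up.
# /opt/homeassistant is an ASSUMED config path -- put it on the SSD.
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  --privileged \
  -e TZ=Europe/London \
  -v /opt/homeassistant:/config \
  --network=host \
  homeassistant/home-assistant:stable
```

Node-RED can be started the same way from the `nodered/node-red` image with its own volume and `--restart=unless-stopped`.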
I suspect the HA image doesn't boot from a USB SSD yet because support isn't baked into the kernel. I believe this is, or was, the issue with the Ubuntu 20.04 ARM images. I wonder if they have fixed that; if they have, I'll be using Ubuntu instead of Raspbian.
That does make sense. I've come to the conclusion that I will just stick with SD until there is an official HA version with SSD boot.
I can get the Pi to boot stably from SSD (I found I had to use a double USB cable for the drive I wanted) and it runs well. The goal of moving from the NAS to the Pi was to run a more stable/supported setup. Presumably lots of people out there are running on SD and it runs fine for a long time. The key will be to keep taking snapshots off the box so that if things do go bad I can recover quickly. Not sure if that is automated or not yet, though.
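Automating that off-box copy can be as simple as a small cron script. A rough sketch, where the snapshot directory and the destination are assumptions you'd adjust for your own setup:

```shell
#!/bin/sh
# Copy the newest Home Assistant snapshot off the Pi.
# Both paths below are ASSUMPTIONS -- change them for your install.
BACKUP_DIR=/usr/share/hassio/backup      # where snapshots land (assumed)
DEST=user@nas:/volume1/ha-backups        # off-box destination (assumed)

# Pick the most recently modified snapshot tarball.
latest=$(ls -t "$BACKUP_DIR"/*.tar 2>/dev/null | head -n 1)

# Copy it off the host only if one exists.
[ -n "$latest" ] && scp "$latest" "$DEST"
```

Run it nightly from cron (e.g. `0 3 * * * /home/pi/copy-snapshot.sh`) so a dead SD card only costs you a day of history.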
When there is a supported Pi SSD-boot version I will swap at that point. Unless my wife is looking the other way and an Intel NUC falls into my basket by accident one day…
As long as you have a solid backup plan, you can stick with SD. Once you go SSD, you'll enjoy the new performance.
@krash presented a great option using Google Drive. Check that out if you have Google Drive. You definitely want something that moves your data off the host.
The term hass.io has been deprecated, this method is now called Home Assistant OS. Here’s a good overview of the different installation methods.
Just for your information, Home Assistant OS and Home Assistant Supervised also run on Docker. All the add-ons are just Docker containers modified to work nicely with the HA ecosystem (stuff like ingress, etc.).
The only method that doesn't use Docker is the installation in a venv.
Now I wonder if I should have used Home Assistant OS on docker so that I could take advantage of the snapshot feature. I may try this…thanks
I noticed you’re using Proxmox (saw it in a backup post when looking for backup solutions) and curious why not use their container feature instead of a VM?
I assume you’ll run into issues with this install method when you try to create a docker swarm, but I’m not so familiar with docker swarm, I just know that HA OS is pretty limited.
Ah, I thought you were talking about HA's container feature.
I never tried it to be honest, and I think I remember people here on the forum having issues with the LXC containers.
Maybe I need to read a bit about LXC and get more familiar with it. However, my NUC is idling at 2-3% CPU use with 3 HA VMs. And with LXC, wouldn't I need a container for each Docker container that I'm currently running? Again, I'm not familiar with LXC; I am pretty familiar with docker-compose, where I can spin up a new image I want to test in a few minutes.
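To illustrate what I mean by "a few minutes", this is roughly the whole process of trying out a new image with docker-compose (Node-RED here purely as an example; image, port, and volume are just illustrative values):

```shell
# Spin up a throwaway Node-RED instance to test it.
mkdir -p ~/nodered-test && cd ~/nodered-test

# Write a minimal compose file (values are examples, not a recommendation).
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  node-red:
    image: nodered/node-red:latest
    restart: unless-stopped
    ports:
      - "1880:1880"
    volumes:
      - ./data:/data
EOF

# Start it in the background ("docker-compose up -d" on older installs).
docker compose up -d
```

Tearing it down again is just `docker compose down`, which is why testing images this way feels so cheap.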
LOL - well, LXC would be the equivalent of Docker itself, and you'd have your container config just like a docker-compose file. The overhead of LXC and Docker isn't different, just the terminology. I haven't used it, but I have read about it and know a few people who run HA using LXC without issues.
I had the same “Unable to read partition as Fat” issue. I reinitialized the SSD in the Windows 10 Disk Management partition tool and formatted the drive as exFAT, then reflashed the Home Assistant image with balenaEtcher and it worked.
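For anyone who prefers the command line, the same reinitialization can be done from an elevated Command Prompt with `diskpart`. The disk number below is only an example; triple-check it against the `list disk` output first, because `clean` wipes the selected drive:

```shell
diskpart
list disk                 :: identify the SSD by its size
select disk 2             :: EXAMPLE number -- use YOUR SSD's number
clean                     :: wipes the partition table on the selected disk
create partition primary
format fs=exfat quick
exit
```

After that, reflashing the Home Assistant image with balenaEtcher overwrites the drive anyway, so the key step is the `clean` that clears out the stale partition table.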