Learning is not a bad thing!
Not if it intimidates users new to Home Assistant.
Now you’re just making things up
Not really, some of this intimidates me. I came to ask a very similar question.
Have been running HA for quite a while, but it’s rapidly becoming more key with quite a bit of new smart hardware added. We’re a bit bigger than domestic.
Various Shelly stuff, plus smart dishwashers, washing machines, iRobot, UniFi network (14 APs, 18 switches), Sonos and on and on, but it’s now partly controlling the climate on the pool via Shelly. Plus various alerts, schedules and triggers.
It’s currently run on a Raspberry Pi (I think v4) with all data on the SD card. But I’m starting to realise I need something more robust. Hence I have the same ‘what hardware’ question.
Budget isn’t critical. We have a Synology 920+ NAS that serves music and runs Monocle to serve Dahua CCTV and UniFi cameras to an Alexa Show. I did start to spin up Docker and install HA on the NAS but quickly realised that hardware redundancy would be expensive.
Whereas a second RaspPi in the drawer preconfigured would be cheap as chips. Falling over and needing a reboot isn’t a worry (this isn’t NASA) but days of downtime sorting new hardware would be problematic.
So do I deploy:
- two RaspPi 5s with SSDs and a backup to the NAS
- two NUCs, in which case which ones?
- a hypervisor? (More to learn.)
I’ve just never been a hardware guy, so most of what has been discussed hasn’t really helped narrow an answer down.
I can happily throw £400 at it, more if needed, but want to be able to get back up and running quickly if something breaks.
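For the two-Pi option, a minimal sketch of what the nightly backup-to-NAS could look like, assuming the standard HAOS `ha` CLI, SSH access to the HAOS host, and a NAS share already mounted at `/mnt/nas-backups` on whatever box runs the cron job (hostname, share path and backup name are all assumptions):

```shell
#!/bin/sh
# Hypothetical nightly backup job for a bare-metal HAOS Pi.
# Assumes SSH access to the HAOS host and a NAS share mounted
# at /mnt/nas-backups on the machine running this script.

# Create a full backup via the Home Assistant CLI; HAOS stores
# the resulting archive under /backup on the device.
ssh root@homeassistant.local "ha backups new --name nightly-$(date +%F)"

# Copy the backup archives off the SD card onto the NAS share.
scp root@homeassistant.local:/backup/*.tar /mnt/nas-backups/

# Recovery on the spare Pi is then: flash HAOS, copy one .tar back
# into /backup, and run `ha backups restore <slug>` (or use the UI).
```

With that in place, the “second Pi in the drawer” plan is really just flash-and-restore, which is why it can compete with fancier setups on recovery time.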
Dave
Get one or more decent NUCs or similar. Deploy Proxmox. Set up snapshot backups to your NAS via, for example, Veeam Community Edition. Done. You can now restore in a matter of seconds/minutes if you have any problems. You can probably set up high availability too (I don’t use Proxmox myself, I use VMware).
Regarding Proxmox, there are tons of guides available; just do a search.
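The snapshot-backup part doesn’t even need third-party tooling; Proxmox ships `vzdump`/`qmrestore` for this. A hedged sketch, assuming a VM with ID 100 and an NFS storage named `nas-backups` already added to Proxmox (the VM ID, storage name and archive filename below are assumptions):

```shell
# Back up VM 100 while it is running, using Proxmox's built-in vzdump.
# --mode snapshot needs no downtime; zstd keeps the archive small.
vzdump 100 --storage nas-backups --mode snapshot --compress zstd

# Restore onto any Proxmox host that can reach the same NAS storage
# (archive path is illustrative; vzdump prints the real one):
qmrestore /mnt/pve/nas-backups/dump/vzdump-qemu-100-2024_01_01-03_00_00.vma.zst 100
```

Scheduling this nightly via Datacenter → Backup in the Proxmox UI gets you the “restore in minutes” workflow described above without any extra software.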
F. Sorry man I’m with everyone else
Sometimes people just don’t *want* to deal with virtualization, and most people aren’t IT pukes like us. In fact I PREFER running my HA box on bare iron unless I have a reason not to.
I’m going to Proxmox myself in a few months, but that’s ONLY because I want to run local Ollama on a NUC 14 AI, and that’s going to need it. It’s an advanced workload that requires it.
For the stated reasons, simplicity, I don’t want to deal with YET another system, etc. But I will for the local AI. If I weren’t doing that I wouldn’t even consider it. I’ve got restore to the NUC 10 down to about 30 minutes from bare metal to fully up and restored (I test DR quarterly). I simply don’t need anything else. Until AI comes.
HAOS right on the NUC is a perfectly valid install option. Sure, you may leave performance on the table, but sometimes sanity is worth it.
I never said it’s not a valid option to install HAOS on bare metal, I pointed out that the claim that it is the «most stable» is incorrect. It is perfectly fine to do bare metal HAOS, but if you want something more advanced, which gives you a lot of benefits (and can give you better uptime and much easier/quicker recovery) it is better to go with a hypervisor based setup.
Bare metal will always be more stable just because it has one less layer of software that can contain bugs.
Hypervisors will make administration easier though, because they make the VM installation hardware-independent. HAOS is pretty good at being hardware-independent on its own, though…
I don’t agree; hypervisors today are just as stable as bare metal, even with a minimal layer in between. A minuscule performance drop compared to bare metal, sure, but no loss of stability.
An extra layer will always give extra possibilities for bugs and that is also the case here.
Just look at the number of updates for Proxmox, and that is just the software side of it.
You also have an administrator who has to handle an extra software suite.
It is a scientifically proven fact that there is no perfect world.
My statistics teacher always said that you can invent the perfect system that not even an idiot can destroy, and God will invent a better idiot.
I disagree, in a home lab environment you don’t need an extra resource to manage a hypervisor. Sure, you would need to learn something new but I only see that as a positive thing. The benefits you gain are massive, and as I said you will not notice anything stability-wise.
You can disagree all you want and reject reality.
And you might see it as a good thing to learn something new, but it is still something extra to remember.
You cannot get 101% stability is my final comment. It’s not “rejecting reality”. If something is 100% stable, you cannot get any better stability.
You can’t even get 100% is my final comment.
I’ve been at 100% stability on my hypervisors for 15 years. How’s your stability on bare metal? (You’re not allowed to say “better”.)
It is still not 100% for hypervisors.
100% means 100% always, in the past, now and in the future.
You might be lucky and have not experienced instability yet.
Your argument is like saying the earth will never get hit by a meteor that will wipe out humans, because it has not happened during all those years humans have existed.
And the same cannot happen to a community maintained OS, right? There’s nothing magical about having hardware access directly.
Of course the same can happen to HAOS, but it is not like you stop using HAOS when you use a hypervisor.
The hypervisor is an addition, not a replacement, which is why it will always be less stable.
How much less stable can be discussed.
My point is that stability is not a good argument against virtualization. It is just as good as running bare metal stability-wise, and with a big number of added benefits. If you are concerned about stability and uptime, virtualization is the better option compared to bare metal, because it has features built in to address exactly that.
Virtualization improves uptime, but not stability.
The improved uptime is on planned maintenance, due to the ability to move running VMs to other hypervisor hosts, and on ease of setup, due to the setup being abstracted from the actual hardware.
Instability is unplanned downtime, caused by bugs and misconfigurations.