Moving HA to better hardware

So, I’m tempted to get HA running on my Mac, a small Raspberry Pi, or an HA Green, but I wanted to know what I should be mindful of so that I could easily migrate my installation to better hardware in the near future. For example, I plan to use Frigate and will likely want much more powerful hardware for HA at that point. So, might you have a tip or two?

Thanks!

R

If using HA OS or HA Supervised, you can just take a backup and restore it on the new hardware. If using HA Container you can do this too, but you have to restore manually (untar the backup into the correct directory).
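For the HA Container case, the manual restore is roughly the following. This is only a sketch: the backup name `demo-backup.tar` and the config path `./ha-config-restored` are stand-ins for your actual backup file and the directory you bind-mount into the container.

```shell
# Sketch of the HA Container manual-restore flow. A stand-in backup
# archive is created first so the commands run anywhere; with a real
# backup, skip that step and point tar at your own .tar file.
CONFIG_DIR=./ha-config-restored

# Stand-in for an existing backup (a real one comes from HA's backup page):
mkdir -p demo-src && echo "name: Home" > demo-src/configuration.yaml
tar -cf demo-backup.tar -C demo-src configuration.yaml

# The actual restore: unpack the archive into the config directory,
# then point the container's volume mount at that directory.
mkdir -p "$CONFIG_DIR"
tar -xf demo-backup.tar -C "$CONFIG_DIR"
ls "$CONFIG_DIR"    # configuration.yaml should now be present
```

Note that a real full backup made from the HA UI is a nested archive (the outer `.tar` contains inner archives such as `homeassistant.tar.gz`), so you may need a second extraction step for the inner archive.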

1 Like

Keep it simple.
I am running my Home Assistant server bare-metal on an Intel NUC i3 that I bought on eBay for about $100 a few years ago. (That’s less than a new Raspberry Pi plus case and PSU.) The procedure is usually simple: do a full backup, save the .tar backup file, install Home Assistant on your new hardware, then restore from the backup. (I can’t advise on containers or VMs because I don’t use them for Home Assistant.)

1 Like

Agreed, buy a simple NUC. I bought a refurbished i5, upgraded it to 16 GB of memory and a 500 GB disk. Cost me about $200, well worth the spend. It has now been on constantly for a couple of years and has never caused me any issues.

1 Like

First decide whether you want everything on one server or split across two or more, e.g. moving Frigate and HA onto separate hardware.
If you choose multiple servers, a Raspberry Pi might be fine for the HA part, while Frigate should probably get something more powerful.
If it is all on one server, then Frigate will pull the requirements up.
Remember that RAM is actually a lot more important than CPU power, so go for 16 GB or more.

Now, you mention a Mac in your post. If it is your desktop machine, do not use it.
If it is a spare machine not being used at the moment, then look into installing a hypervisor or Linux on it and use that.

1 Like

If you want to run Frigate, I’d steer clear of those.

Like others have suggested, go with a NUC or your Mac (if it’s not being used for anything else and can run 24/7). I moved from a Pi to a refurbished laptop with Proxmox and haven’t looked back since.

1 Like

Mine, too. Somehow, one frequent visitor here seems to think that uptime measured in years indicates poor maintenance. ???

I think it is a question of how you define uptime.

  • ISPs often define it as a percentage of the possible uptime.
  • Data centers might define it as unscheduled downtime.
  • Device manufacturers define it as time of uninterrupted service.

When we work with sensors, as in home automation systems, it is often the last definition that is used, because the others can be difficult to measure or would require extra external systems.
Under that definition, a device that has been running 24/7 for years without a restart is a device that has received no kernel updates, and often no other kinds of updates either.
I would consider such a system poorly maintained too, because even though the system is unchanged and still running, the rest of the world has evolved, and that almost certainly warranted an update.

2 Likes

Feel free to tag me when you talk about me, Stevemann.

$ uptime
 10:12:21 up 231 days, 21:25,  3 users,  load average: 0.04, 0.01, 0.00

This is HAOS on bare metal. The OS, Core, Supervisor and all integrations are updated as available. None require a reboot. Are there kernel updates to HAOS that aren’t covered by the core and OS updates?

My other Ubuntu servers and two Raspi’s do get updates often requiring a reboot.

An OS like HAOS bundles the kernel updates, and updating those requires a restart of the system.

The kernel is the most basic part of the OS, which everything else builds on top of. For it to be updated, a new OS release has to go out, which means everything else gets released along with it.

That means if you are running HAOS 13.2, which is the newest release from October 15, then you would have restarted around a month ago.
Either you are not on the newest version, or your sensor is not measuring uptime since the last restart.
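One way to sanity-check what an uptime sensor reports is to ask the kernel directly. These are standard Linux tools; exact command availability may vary on HAOS:

```shell
# Cross-check a claimed uptime against the kernel's own counters.
uptime -s          # timestamp of the last boot
cat /proc/uptime   # first field: seconds since boot
uname -v           # kernel build string, including its build date
```

If `/proc/uptime` reports hundreds of days but a newer kernel has shipped since, the machine has simply not picked up that kernel yet.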

2 Likes

This is the part I don’t understand…

If it’s patched, the binaries have to unload and reload. HA can’t hot-patch like Win Svr 25 (unless it learned a trick I’m not aware of), so a restart has to happen. If you update a HACS integration and have to restart HA to load it, the time it takes to reload HA is deducted from uptime.

Five 9’s (99.999% uptime, and usually the highest level anyone wants to actually pay for unless they’re DOD…) allows only about 5 minutes of downtime per YEAR.
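The arithmetic behind that figure can be checked in a couple of lines of plain shell and awk; the 365-day year is the only assumption:

```shell
# 99.999% availability leaves 0.001% allowable downtime.
# A 365-day year has 365 * 24 * 60 = 525600 minutes in total.
minutes_per_year=$((365 * 24 * 60))
awk -v m="$minutes_per_year" 'BEGIN { printf "%.2f minutes/year\n", m * 0.00001 }'
# -> 5.26 minutes/year
```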

So while the number is impressive I’m not sure we’re speaking the same language… unless you know something about hotpatching the os or HA core I don’t?

None of the HAOS, HACS or core updates require a host reboot. If there are some critical updates that I am missing, tell me what or where they are?

Ahhhhh ok i see the problem.

Host vs. OS vs. Core.

You’re saying it’s up because the host never restarted.

Not correct by most understandings of the term. Your host can stay up all it wants, but the minute the OS bounces, your uptime clock stops. Uptime only counts while the OS is up.


Steve, it’s like someone calling a ground a neutral. I KNOW you know they’re not the same.

Well, it’s been constantly on, but of course the OS and the core have been restarted. Mostly running the latest version of HAOS.

If the OS crashes, then the power is still on, but the system is definitely not up anymore.

(Not an argument against NathanCu’s comment here, but an example supporting it.)

1 Like

Back to the OP’s question.
A micro PC running HAOS on bare metal is still the most stable setup.

It does not have to be micro, but agreed: a PC is just a more integrated set of components than a Raspi with a third-party case/cooler, a third-party SSD, a power supply rated just at the possible maximum, and so on.

Sometimes a PC is a bit overkill for HA alone, though, and then a hypervisor like Proxmox is a good alternative to make better use of those resources.

1 Like

Just as stable as running a hypervisor, and the hypervisor gives you lots of extra functionality and benefits. Hypervisors would not be one of the biggest changes in computing of the last few decades if they brought instability.

In everything you do in your daily life, be it online banking, streaming, or getting groceries, you name it, you interact with something running on a virtualized platform somewhere.

Yet for some reason you are convinced it is a bad thing, even though you’ve stated multiple times that you’ve never tried it. 🙂

Yet another thing to learn.