Best hardware config for ~$1000 for HA

Hello. Can you suggest the optimal PC for running Home Assistant? I am using these add-ons:

  1. ESPHome.
    I have ~20 sensors with updates every 100–500 ms. There will be more to come.
  2. InfluxDB.
  3. Node-RED
  4. Daily full backup
  5. HACS Tapo integration for a switch
  6. Local Tuya
  7. ESP32-CAM
  8. Other add-ons will be used as needed.
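For scale, here is a rough back-of-envelope of the InfluxDB ingest these update intervals imply (a sketch, assuming one data point per sensor update; sensor count and intervals are the figures above):

```python
def ingest_rate(sensors: int, interval_ms: int) -> float:
    """Data points per second written to InfluxDB, one point per update."""
    return sensors * 1000.0 / interval_ms

# 20 sensors at 500 ms -> ~40 pts/s; at 100 ms -> ~200 pts/s
for interval_ms in (500, 100):
    rate = ingest_rate(20, interval_ms)
    print(f"{interval_ms} ms interval: {rate:.0f} pts/s, "
          f"{rate * 86_400:,.0f} pts/day")
```

Even at 100 ms this is a few hundred points per second, which is modest for InfluxDB itself; the pain usually comes from querying and retaining months of it.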
Today I use:

user@debian:~$ screenfetch
OS: Debian 12 bookworm
Kernel: x86_64 Linux 6.1.0-20-amd64
Uptime: 23h 19m
Packages: 872
Shell: bash
Disk: 102G / 207G (52%)
CPU: Intel Pentium 2020M @ 2x 2.4GHz [57.0°C]
GPU: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
RAM: 1144MiB / 1785MiB

I already use an SSD.
A full backup (about 10 GB) takes about 30 minutes.
Right now, viewing 30 days of stats in InfluxDB freezes my PC, and loading History in HA also takes a long time. Given the planned growth in the number of sensors and the general increase in load, the current PC configuration will only struggle more.
Any information is of interest. Which is better for Home Assistant, AMD or Intel? And any other info. Do I need to buy a video card, something like an RTX 4090, or not? Do I need a VERY fast SSD (IOPS, write speed, or read speed)? What else?
The configuration budget is $1000.

What can I say but… Wow!

Buy yourself a used Intel NUC i7, probably about $300 after you max the RAM and SSD storage. (I have five NUCs here, and only one was purchased new. Everything else was from eBay).

My Home Assistant is running on an Intel NUC i3.
5 cameras
60 or so ESPHome devices
Node Red
Tuya sucks, I don’t use it.
I don’t know what a tapo integration is.
15 Add-ons
42 Integrations
Too many devices to count.

Samba Backup add-on is used for nightly full backups.
Samba Share add-on for general file access (the server is in my basement).
My last full backup is a bit over 6 GB, and it sometimes takes two minutes to finish.
A Node-RED deploy takes maybe 5 seconds.

I run native HAOS on bare metal. That is, I don’t use containers or virtualization to complicate the install or add potential points of failure. The only time the server host has rebooted was when the UPS ran out of battery during a power failure sometime last year.

It won’t use the GPU, except possibly if you want to run a bunch of cameras or an LLM for AI. And any PC with a UEFI BIOS will run HAOS natively. I think I have $200 in my HA machine, including the 1 TB SSD. Anything bigger than a 10-year-old desktop PC is just wasting compute cycles.

Get a used cast-off Dell from some business via a computer reseller, without an HDD, add your own SSD, and you are good. Give the other $800 to charity.


Thanks for the info, but my sensors update every 500 ms (I need to set this to 100 ms, which will increase the volume of information 5x), and there will be more of them. I think that after a while this will be a huge amount of data, and I will need to review it quickly and zoom in on graphs in InfluxDB.
I got a 10 GB backup in just 3 months.
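A rough projection of where that trend leads, assuming the database (and hence the backup) grows roughly linearly with the number of points written:

```python
# 10 GB accumulated over ~90 days today; the update interval drops
# from 500 ms to 100 ms, i.e. about 5x more points per day.
current_gb, days = 10, 90
gb_per_day = current_gb / days      # ~0.11 GB/day at the current rate
projected = gb_per_day * 5          # after the 5x rate increase
print(f"~{projected:.2f} GB/day, ~{projected * 365:.0f} GB/year")
```

So on the order of 200 GB/year before adding any new sensors, which argues more for a retention/downsampling strategy than for raw hardware speed.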

We are supposed to set budgets? That’s not anywhere in the manuals. :money_with_wings: :money_with_wings: :money_with_wings:

I’d recommend a NUC too, simply because you seem to be component oriented. Upgrades will be easy, and integration into larger storage won’t be difficult. Backups can be server based if you are literally storing 10 GB in 3 months on sensors alone.

My system records to a 36 TB RAID array, and it’s overkill.


With a 1000$ budget I would look into buying hardware with a decent CPU, some storage and enough RAM (64GB+) and run it as a hypervisor, installing HA as a virtual machine. That way you don’t “waste” compute resources on HA. Added benefits are snapshot capabilities and the possibility to back up the whole machine.

With a budget like that I would actually buy two machines, e.g. 2x Intel NUC 13 i3 or i5:
one for HA on a hypervisor, so you can run other VMs too, and one for InfluxDB with a large disk.
InfluxDB can grow really big and cause havoc with backups and space usage on the HA installation, so moving it to its own hardware will make it easier to handle. Besides, a DB should ideally not run in a VM at all, and especially not in a container inside a VM, which is what the add-on in a HAOS installation running on a hypervisor amounts to.
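Whichever box InfluxDB ends up on, a short raw-resolution retention window plus a downsampled long-term tier is what actually caps the growth. A sketch of the arithmetic, with purely illustrative numbers (the tier lengths and intervals are assumptions, not recommendations):

```python
def retained_points(sensors: int, raw_interval_ms: int, raw_days: int,
                    agg_interval_s: int, agg_days: int) -> int:
    """Total points kept: a short raw tier plus a coarser aggregated tier."""
    raw = sensors * raw_days * 86_400 * 1000 // raw_interval_ms
    agg = sensors * agg_days * 86_400 // agg_interval_s
    return raw + agg

# 20 sensors: keep 100 ms raw data for 7 days, then 60 s averages for a year
print(f"{retained_points(20, 100, 7, 60, 365):,} points retained")
```

Note that even with a year of downsampled history, the 7-day raw tier dominates the total; that is the knob to tune if space or query speed becomes a problem.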

The question lacks clarity, and it seems you are heading in a misguided direction.

Take the first point, the 100 ms response requirement, as an example. How long does a single request from ESPHome to the server take in your current system? I bet it is already comparable to 100 ms.
And that is just the tip of the tech stack and the resources needed. You cannot go faster than the rate at which your ESPHome devices push requests through your network. The PC here just has to be better than a potato (~$100 at most); it will show results close to those of a $1000 PC.

That is why a $1000 PC isn’t your first concern if the requirement is real. You need to measure your ESPHome devices and your network performance before you start thinking about the server’s speed and price. The other requirements have similar problems; I would guess there are other performance issues, and your server hardware doesn’t look like the bottleneck for these tasks.


What scenario would require 100-500ms sensor update interval?

This also adds the benefit of being able to migrate workloads from one box to another. With an Ethernet-based Zigbee coordinator you’re not locked to a single box either.

I measure rapidly changing data such as pressure and temperature.

If this is for some sort of industrial control, I would strongly recommend against using ESPHome and HA.

What home automation process needs sub-second granularity on temperature and pressure readings?!


Not industrial, purely scientific interest. Although the process itself is quite complicated (frosting).

I run an aeroponics setup. I have spray patterns that loop in sub-second cycles. The system only runs for a few seconds every 10 to 15 minutes. I use pressure sensors to check that the pumps are operating, and I need to catch these small bursts, so they update every half second.

If a 100 ms response time is not just an average but more like a guaranteed 95th-percentile reaction time, then the price of a PC should be the least of your worries. In my opinion, achieving such speed and reliability with HASS + ESPHome may prove challenging, since they do not have that level of performance themselves to pass on to a system built on top of them. Hence, opting for a wired, simplified system operated by a microcontroller could be a more dependable choice.

The ESP32 can also work over wired Ethernet.

To rule out the many details that could undermine the idea, it is important to clarify how long one single request takes in exactly your system, specifically with ESPHome running on an ESP32 and sending data via wired Ethernet to Home Assistant (HASS). So, how many milliseconds does it take at the 90th/95th percentile?
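Once such timings are collected (for example, by timestamping sensor updates as they arrive in Home Assistant), summarizing them at the 90th/95th percentile is straightforward. A sketch with made-up sample values, not real measurements:

```python
import statistics

# Hypothetical round-trip timings in milliseconds (placeholder numbers)
samples_ms = [42, 47, 48, 49, 52, 55, 58, 61, 90, 130]

# 19 cut points at 5% steps; index 17 = 90th pct, index 18 = 95th pct
q = statistics.quantiles(samples_ms, n=20, method="inclusive")
p90, p95 = q[17], q[18]
print(f"p90={p90:.1f} ms  p95={p95:.1f} ms")
```

The tail is the point: a mean of ~60 ms can still hide a p95 well above 100 ms, which is exactly what a "guaranteed" 100 ms requirement would fail on.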

TCP/IP is not the right protocol for something like this.
A single collision can cause a packet drop, which would normally take up to 3000 ms to be detected.
A single WiFi connection setup can easily take 70–80 ms.
Some ESP chips have only one core, and a request for an update on the API might, due to time slicing on the CPU, push the runtime past 100 ms.

100 ms indicates a requirement for dedicated lines from sensor to CPU and a CPU streamlined for this task, which might mean one-way communication to HA, with no control from HA.

I don’t know how the data is sent to HA from the ESP. Does it use TCP or UDP? Does it have a buffer for the data? I hope the HA developers can help here.

Erm… you asked for a “PC to use Home Assistant” in a HA hardware topic and said “I am using addons: espHOME. I have ~20 sensors with updates every 500-100 msec.” How do you “dont know how to send the data in HA from ESP”? I’m lost here.