I am also looking to go the mini pc route.
I have been using the Pi for over a year now, and every time I restart the system I need to do it twice. The first time my cameras don’t load; the second time they do.
I have a few cameras running on Surveillance Station on the Synology NAS, but I would like to record more without the extra license costs.
I am torn between a really cheap one like this
The first one is rated at 15 watts, but the second one is rated at 24 watts.
The last one is 360 euros, but if I went with an i5 or a 7th gen, it would only be a little bit cheaper.
The sad thing is that I need to buy from BG, because they can ship without me paying 21% tax.
The budget is there for the i7, but there is a huge price difference between these two.
It is horses for courses. If you are just running home assistant then you don’t actually need 8GB ram, you don’t need 256GB drive space and you certainly don’t need an i7. However if you want to load it up with other services, and multiple ffmpeg cameras, and image processing, and a plex server, and proxmox with numerous docker containers and so on, then yeah spend the extra money.
Another consideration is what kind of I/O performance you can get. These days, an NVMe SSD on an M.2 connector doesn’t carry much of a premium over a SATA SSD of the same capacity, but it has a much better transfer rate. If you’re going to use InfluxDB, the history database, etc., the I/O performance could help.
The way this factors into your decision is whether the system you’re looking at buying supports NVMe flash modules or not. Essentially, you’d be using a much higher-bandwidth PCIe connection to the storage, and also bypassing all the SATA/ATA emulation layers.
Yet another thing to consider: it’d be a shame to have a higher-performance CPU that’s just sitting around waiting for blocks to come off some slow storage, if that’s going to be the bottleneck. In my case, I have a dozen-odd Docker containers, some large (Plex, camera NVR, UniFi controller) and some small (just a Python script reading from sensors and publishing to MQTT). So I wanted enough I/O performance to have a more balanced system.
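If you want a rough feel for whether storage would be the bottleneck, a quick sequential-write test with dd is a crude but serviceable sketch (the file path and size here are arbitrary; a tool like fio gives far more realistic numbers):

```shell
# Crude sequential write test: ~256 MB, with fdatasync so the data is
# actually flushed to disk before dd reports its throughput on stderr.
dd if=/dev/zero of=/tmp/io_test.bin bs=1M count=256 conv=fdatasync
rm /tmp/io_test.bin
```

Roughly speaking, a SATA SSD tops out around 500 MB/s on a test like this, NVMe drives go well beyond that, and spinning disks are an order of magnitude slower.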
Depends on how many sensors you have and how often they update. Even with a normal usage pattern, the logbook and history aren’t real fast… some of that is likely due to how the SQLite3 DB is being used, storing blobs of JSON that need to be parsed. The other aspect is just reading that off the storage medium fast enough.
In my case, I have a 32 channel power monitoring system that updates every two seconds. While this gets piped into Home Assistant, all those sensors are excluded from the recorder/history database, and just get sent along to influxdb.
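For reference, the split described above — keeping high-frequency sensors out of the recorder but still shipping them to InfluxDB — looks roughly like this in `configuration.yaml` (the entity glob and the InfluxDB host address are made-up placeholders):

```yaml
recorder:
  exclude:
    entity_globs:
      - sensor.power_channel_*   # hypothetical per-channel power sensors

influxdb:
  host: 192.168.1.10             # assumed address of the InfluxDB box
  include:
    entity_globs:
      - sensor.power_channel_*
```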
I think that many Home Assistant users will end up with InfluxDB and Grafana or something similar to do longer-term analysis of weird things. Like how I used the outside temperature and packet-loss measurements to demonstrate to my ISP that there’s some temperature-related failure in their network, which is painfully obvious when you look back over a couple of weeks of data.
Nick, That is what I initially thought also.
But what Louis says is also in my mind.
At this moment the i3 would be more than enough, but as we go along there will be more sensors, more cameras, and more and more add-ons. I also want to run the UniFi controller.
The main reason I want to swap to a NUC is that restarts take long and history takes long. And most of the time I need to restart twice, because the first time the cameras are missing.
As I am learning every day, the database will grow and the list of features will grow.
My motto always has been, get the best performing unit available for the money.
That way it takes longer before you run out of performance.
Especially if power consumption does not make much difference.
So I think maybe the I7 is my best option.
Hi there,
I’m chasing the same answer.
I’m not sure if I shall go for an SBC (like the ROCKPro64) or a NUC. I read through the discussion and may summarise as follows:
CPU power mainly improves boot/start-up → it is no small amount that gets loaded
day-to-day operation is mainly improved by SSD or flash storage (which is obvious)
That leads me to the question: is HA a foreground or background process (in the case of CCTV it probably is)? The reason is that if HA is installed on an external HDD, it will prevent the HDD from going idle, so it will keep spinning for a long time. This, of course, depends on how HA is configured: whether it runs on demand, e.g. acting as a remote control, or runs cron jobs or records data (video or sensors) continuously.
Asking differently: what is kept in memory, and what has to be pulled from the HDD?
In my case I didn’t go NUC, but I am still on an x86 solution, and I recommend it over an RPi-like SBC (in my case that was an RPi 3B+).
I took a Beelink N41 (N4100 Celeron CPU + 4 GB RAM), which is rated at a 6 W TDP, hence consuming less power than the 15/25 W NUC options available.
If I had to do it again I would go the same route, but maybe looking for something with 6–8 GB of RAM to have a bit more headroom.
I have a NUC with a dual core Celeron J4005. It runs great.
I installed Ubuntu Mate on it - the 18.04 LTS
And then followed the normal procedure installing first docker-ce
And then Hass.io
In addition I run Mosquitto. Not in Docker; I see no reason to. Mosquitto can be installed with a simple apt install mosquitto, and the configuration, adding a username and password, is dead simple. I see no reason why I should make myself dependent on Home Assistant by running it as an add-on.
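The apt route described above comes down to a handful of commands; something like this sketch (the username `hauser` and the conf snippet filename are placeholders):

```shell
sudo apt install mosquitto mosquitto-clients

# Create a password file with one user (mosquitto_passwd prompts for the password).
sudo mosquitto_passwd -c /etc/mosquitto/passwd hauser

# Require authentication via a drop-in config snippet.
sudo tee /etc/mosquitto/conf.d/auth.conf >/dev/null <<'EOF'
allow_anonymous false
password_file /etc/mosquitto/passwd
EOF

sudo systemctl restart mosquitto
```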
Inside Home Assistant I do run a number of addons. Most important one is deconz for my zigbee stuff.
I installed the desktop version of Ubuntu Mate because setting fixed IP and other stuff is so convenient from a GUI. And if you are not logged into the GUI its CPU overhead is nothing to talk about.
I also run a Ubiquiti UniFi controller, also installed directly on the Ubuntu using apt. Again, why should my UniFi controller not be working if I have a situation where HA will not start because of a broken release?
Having my MQTT and Unifi running outside Docker means they run along uninterrupted even if I have to stop HA. That is how I prefer things
What else… I have ssh daemon running in the Ubuntu host. I have a samba share daemon running so I can edit the HA config even when docker is dead.
You can run everything in docker and it is dead easy to install things as addons in hass.io
But the simple things like Mosquitto, Samba and ssh are actually so simple to set up that I prefer the good old way. That is my preference.
One thing. As the previous maintainer of Motion and still a user with 7 cameras I say this!!
If you want to run Motion, with or without MotionEye… then you need much more computer power than a Celeron.
I run Motion on a different computer with a fast 6-core i5. If you want to run many cameras at high speed and high resolution, then you need CPU power to plow through every video frame, looking at changed pixels after it has done a lot of filtering and has identified, labelled and voted on the areas of motion. I know how much it takes, especially when something moves in several of the cameras. For MotionEye running as an add-on in HA with many cameras, go for an i5 with 4 or more cores.
But for HA alone, a Celeron at 2 GHz is more than enough. Even with 100+ devices and 1000s of events per day. A Celeron is still much, much faster than a Raspberry Pi 4.
A NUC with a Celeron or small i3 is great without Motion(eye)
A NUC running HA and Motion(eye) go for an i5 or i7
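To give a feel for why this is CPU-hungry: at its core it is a pixel-by-pixel comparison of every frame against the previous one. A toy sketch of that idea (not Motion’s actual implementation, which layers filtering, labelling and masking on top):

```python
def changed_pixels(prev_frame, curr_frame, threshold=25):
    """Count pixels whose grayscale value changed by more than threshold.

    Frames are flat sequences of 0-255 grayscale values. At 1080p and
    30 fps, this comparison alone is over 60 million pixel operations
    per second, per camera.
    """
    return sum(
        1 for a, b in zip(prev_frame, curr_frame) if abs(a - b) > threshold
    )


def motion_detected(prev_frame, curr_frame, min_changed=100):
    """Flag motion when enough pixels changed between consecutive frames."""
    return changed_pixels(prev_frame, curr_frame) > min_changed
```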
Why not just skip hass.io entirely, and just run the Home Assistant container on native docker on Ubuntu or something? That’s what I do with great success. I run mosquitto in a container (and a number of other fairly trivial containers, some home-built) just to make dependency management easier to deal with.
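For anyone wanting to try that route, the container itself is a single docker run; a sketch along these lines (the config path and timezone are assumptions, adjust for your host):

```shell
# Home Assistant container on plain Docker; host networking lets
# discovery (mDNS etc.) work without explicit port mappings.
docker run -d \
  --name homeassistant \
  --restart unless-stopped \
  --network host \
  -e TZ=Europe/Amsterdam \
  -v /opt/homeassistant/config:/config \
  ghcr.io/home-assistant/home-assistant:stable
```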
In particular, it’s really great to have all the Ubiquiti stuff isolated in a container, since it’s got a bunch of stuff going on in there. Likewise with Shinobi and Influx and Grafana – I just want to avoid dependency-management hell.
As I’ve mentioned before, I have a fanless, SSD-based NUC-like clone that runs all this stuff. No moving parts, more reliable. Stuff that moves is stuff that breaks.
Woody4165 I have my HASSIO on Proxmox as well, but I am new to the inner workings of the command line. How did you create the file.py? Is there some place it needs to go? And how do you do the cron jobs in the tab? Just like you show it? I really need to get this temp and CPU information up and running, since it will not let me pull it with the command cat /sys/class/thermal/thermal_zone_1/temp (or 0, for that matter). Thanks
It’s all written in my post.
I created the py file in /root with nano, and with the chmod command I made it executable.
You can also create it in /home or another folder; the important thing is that you then reference it in crontab.
Then you edit crontab with the -e option, add the two lines, and restart cron with sudo service cron restart
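The sort of script being discussed — reading the CPU temperature from sysfs so cron can push it somewhere — can be sketched like this. Note the sysfs nodes are typically named thermal_zone0, thermal_zone1 and so on (no underscore before the number), and the right zone number differs per machine; list /sys/class/thermal/ to find yours:

```python
#!/usr/bin/env python3
"""Print the CPU temperature in degrees Celsius.

A sketch assuming a standard Linux sysfs thermal layout; the zone
number varies per machine.
"""
from pathlib import Path


def millideg_to_celsius(raw: str) -> float:
    """sysfs thermal zones report millidegrees Celsius as plain text."""
    return int(raw.strip()) / 1000.0


def read_cpu_temp(zone: int = 0) -> float:
    path = Path(f"/sys/class/thermal/thermal_zone{zone}/temp")
    return millideg_to_celsius(path.read_text())


if __name__ == "__main__":
    print(f"{read_cpu_temp():.1f}")
```

A crontab line such as `* * * * * /usr/bin/python3 /root/file.py` would then run it every minute (path as in the post above).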