Hi all,
Just wanted some advice.
After using Domoticz for several years I started using Home Assistant and was stunned by the possibilities and the way it looked. I remember it wasn’t very stable though, and crashed now and then.
I switched to Openhab and noticed its Z-Wave had a longer range, as previously unreachable devices were not dead anymore.
The only thing with Openhab is… it’s so damn difficult and hard to learn. I’m not an IT pro; I just want things to work and do some very basic scripting, etc.
So my question is: has Home Assistant changed a lot in the last 1.5 years? Is its Z-Wave implementation just as good as the one Openhab uses?
Has anyone else here switched from one system to the other like I did?
AFAIK this has nothing at all to do with HA or Openhab; this is completely down to the Z-Wave stick you use. The range it has is up to the radio on the stick itself, not the computer/device reading data from it. So the range should be the same if the stick is in exactly the same location.
Absolutely. It’s like a different product in many ways. However, you said your initial problem was crashing, and that’s a problem I’ve never had. I’ve been using HA for 2 years now (since Sept of ’18) and have never had a single crash of the core system.
In the early days I used a RasPi and didn’t do any recorder filtering, so I had 2, maybe 3, SD cards die and the whole system go down. But since moving to a proper machine to host HA, it’s been perfectly stable for over 1.5 years now (stable in terms of HA core; there are still breaking changes, custom components that can break, etc.).
The release cycle is still every 3(ish) weeks though, so expect to read the changelog, make any needed changes per the breaking changes, and update the system once a month or so. If you fall behind on updates it can be hard to catch up, since multiple breaking changes stack up that you’ll need to account for.
Well, I thought Openhab was using Open-ZWave and Home Assistant a different implementation.
I read back some posts I made in June 2019 and remember I had problems with Z-Wave devices from Fibaro. I think that was the thing that made me switch.
So if I understand you correctly, you are no longer running HA on a Raspi? If so, what hardware are you running? Are you also using Fibaro products?
HA uses Open-ZWave as well. The current built-in version is OZW 1.4, but the move to OZW 1.6 is in progress (it can already be used as a beta). The OZW 1.6 integration (from what I understand) will use a separate addon that talks back to HA over MQTT. This is so restarts of HA don’t bring down the Z-Wave network, and it should mean less downtime overall.
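If you want to try the 1.6 beta now, the daemon side is basically just a container you point at your MQTT broker and your stick. Roughly something like this; the image name and environment variable names are from memory, so double-check them against the qt-openzwave docs before relying on it:

```yaml
# docker-compose sketch for the standalone OZW 1.6 daemon
# (image name and env var names are assumptions from memory - verify against qt-openzwave)
version: "3"
services:
  ozwdaemon:
    image: openzwave/ozwdaemon:latest   # assumed image name
    devices:
      - /dev/ttyACM0:/dev/ttyACM0       # your Z-Wave stick
    environment:
      - MQTT_SERVER=192.168.1.10        # IP of your MQTT broker
      - USB_PATH=/dev/ttyACM0
    restart: unless-stopped
```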
However, none of that affects the range. It will affect product compatibility, etc., but the range is still 100% down to the radio/amplifier/firmware on the Z-Wave stick.
Correct. I currently run HA as a VM on a super overpowered server (R510 w/ 2x E5-2680v2), but only because I’ve been doing some hardware reorganization.
I usually have it running on one of my Intel NUCs. I run it as a VM in ESXi to keep things simple and easily portable.
You can pick up a used NUC on eBay for pretty cheap if you shop around. I just purchased a used Optiplex 9010 on eBay for someone else to use for HA. Pretty much any cheap used computer or server will do fine, and small-form-factor computers are nice for obvious space reasons.
You can also run it on Proxmox, FreeNAS, or unRAID if you have a NAS available.
You can also still run it on a Raspi, but make sure your database/recorder settings are optimized so you don’t get excessive wear on the SD card leading to an early death. The Raspi is OK if you don’t want to do things such as run image-processing software like face/person/object detection or other computationally heavy tasks that may bog it down. I hear some people do this just fine on the Pi 4, but I just like the idea of a real computer doing those things.
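By database/recorder settings I mean the usual stuff in configuration.yaml; a minimal sketch along these lines (the excluded entities are just examples, adjust to whatever is noisy in your setup):

```yaml
# configuration.yaml - trim what the recorder writes so the SD card isn't hammered
recorder:
  purge_keep_days: 7      # keep a week of history
  commit_interval: 30     # batch writes every 30s instead of on every event
  exclude:
    domains:
      - automation
      - updater
    entities:
      - sensor.last_boot  # example noisy entities, adjust to your setup
      - sensor.date
```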
I don’t have any Fibaro devices so I can’t comment on any current or past issues with them. My Z-Wave network is pretty light, with only a few Inovelli wall switches. I have a lot of Xiaomi devices (mostly their PIR motion sensors) on Zigbee. Most of my devices are WiFi though, flashed with ESPHome (wall switches, outlet sockets, sensors, power monitoring, etc.).
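If you haven’t looked at ESPHome, a device config is just a small YAML file. A rough sketch for a basic plug looks something like this (the name, board and GPIO pin are placeholders, not one of my real devices):

```yaml
# ESPHome sketch for a simple WiFi plug - board and pin numbers are placeholders
esphome:
  name: living-room-plug
  platform: ESP8266
  board: esp01_1m

wifi:
  ssid: "MyWiFi"
  password: "supersecret"

api:    # native connection to Home Assistant
ota:    # over-the-air updates

switch:
  - platform: gpio
    name: "Living Room Plug"
    pin: GPIO12

sensor:
  - platform: wifi_signal
    name: "Living Room Plug WiFi Signal"
    update_interval: 60s
```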
Running HA on an rpi4 (Docker) with an external SSD works perfectly. I used to run it on an rpi3 but I ran into memory limits.
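For anyone curious, the container setup is nothing special, just the official image with the config volume pointed at the SSD. Roughly like this (the mount path and timezone are just what I happen to use):

```yaml
# docker-compose sketch for HA Container on a Pi, config stored on the external SSD
version: "3"
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:stable
    network_mode: host                    # needed for discovery and some integrations
    environment:
      - TZ=Europe/Amsterdam               # your timezone
    volumes:
      - /mnt/ssd/homeassistant:/config    # example path on the external SSD
    restart: unless-stopped
```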
To be honest I don’t see the need to upgrade at the moment; could you explain what the benefits of upgrading to a NUC would be, other than having internal memory?
Speed, for one. I don’t know how large your HA install is, but the more entities you get, the slower your Pi will become at running HA. I have over 1200 entities and I really prefer it to run fast.
There is another reason why I would not recommend a Pi: it isn’t really flexible in terms of scaling up. (What if you want to run Plex and a ton of other plugins?)
I run Unraid on an i5 4690 with 32GB of RAM and have around 40 containers running. Processor usage is around 11%, so it leaves me plenty for additional VMs but also allows me to transcode 1080p streams without hiccups. (4K is different unfortunately and will still require an even beefier processor or a GPU that can do the transcoding for you.)
Most people will not need this, but having a system ready for expansion is never a bad idea imho. Plus you can have the SSD internally instead of a USB external one (which saves you from accidents like pulling the cable). And then of course performance: my HA restarts in under 5 seconds, with all the components and such running within 15 seconds. I am betting that you won’t get this kind of performance with an rpi.
Honestly, people should do whatever they are fine with; in my view, though, an rpi is a hobby product whereas a NUC is more of a permanent (more professional) solution. If you have not tried hardware other than a Pi to run HA, then don’t: if you don’t know what you are missing, it is best not to find out until you actually have the hardware. Why? Because it is hard to go back.
See it as follows: HA is the heart and brains of your home automation (like the engine is for a Ferrari). Now imagine having a Ferrari but running only a 1.1 Toyota engine (or a Ryzen 3950X with a GTX 1050). To me the heart and brains are the most important piece of the setup (without it, it simply doesn’t work). I see people on this forum spending thousands of euros on smart stuff, yet they use the cheapest system available to run it all.
The reason people choose a NUC is the low power usage (like an rpi) combined with performance close to a laptop. Building your own server is another option if you don’t care about power usage that much (which is what I did).
It’s far more polished and performant than it was 18 months ago. I can’t remember the last time I had a crash of the system, a failed update, or something not working other than due to user error / wrong config. My system has 99.9% uptime.
They are making solid improvements to the backend with each new release, especially the past 2-3 releases, which have focused heavily on reducing boot and load times, cutting down background tasks and listeners using resources, etc. HA is becoming very snappy.
I would recommend, as @Silicon_Avatar has suggested, getting an old workstation machine like a Dell Optiplex. I use one myself and it runs Proxmox with 3 VMs including HA and a Plex server, plus a number of other things. I have about $200 in the machine total, so it’s very pocket friendly. If you have a larger budget, an i5 NUC is a great option.
I don’t use Z-Wave in my house, but from reading the forums I believe a new way of running Z-Wave has recently been implemented and many seem to be having success with it. Again, I can’t comment on its reliability, but range would be determined by your Z-Wave network, not HA.
I’m going to just chime in to agree with using something other than the rpi if you plan on using it long term. I’m currently running on a NUC as well with the HassOS image in a Proxmox VM.
Besides the stability you get without having to move the database elsewhere, the speed improvements are very nice as well.
Hijacking this thread, but - in my opinion - with a relevant question:
when sharing the available resources of the machine with other services (like Plex etc.), isn’t there a realistic chance of these services affecting the performance of HA?
Let’s say Plex is transcoding two 4K streams. That’ll put a lot of load on the system, load that won’t be available to HA. So there’s a chance that HA’s performance will suffer from this. Or if it’s a backup machine and the weekly backup from another machine is being transferred, that would put some load on the hard drive and probably max out what the network interface is capable of, which I believe also has a chance of introducing latency into HA’s operation.
I’m not that deep into Proxmox, ESXi, etc., so I don’t know about the possibilities to set hard limits / priorities on which resources an instance of a service can have access to. I assume it’s possible. So when building such a system, I guess such scenarios should be thought through.
For this reason I still have my HA running on a separate Pi (which, in my not-too-big setup, is performant enough), and everything else (Plex, Zoneminder, but directly on Ubuntu, no ESXi or Proxmox) on the more powerful NUC. That way it is guaranteed that my automations won’t be affected by Plex eating up the CPU while Zoneminder is watching 6 cameras and some other machine is doing its backup.
Essentially it boils down to HA being the component that should operate as snappily as possible (in relation to its hardware capabilities). For me it’s OK if lights always turn on with a 500 ms delay. But it would be weird if they sometimes take 100 ms and other times 1000 ms because the machine is doing a lot of work. High volatility in response time feels like the system is broken / unreliable.
Any thoughts on this perspective? Am I too concerned about this, and will it not be noticeable in practice?
I personally don’t see any impact on HA performance running HA Container on a NUC along with a bunch of other stuff.
I have 27 total containers running on my NUC i3, including 4 instances of HA (1 Supervised and 3 Container), 2 different MQTT brokers, influxdb, mariadb, grafana, appdaemon, qt-openzwave, tasmoadmin, zigbee2mqtt, letsencrypt, syncthing, open-vpn and ESPHome.
and I’ve got a Kodi media server running directly on the OS as well.
My CPU load usually hangs out around 10-20% with very occasional spikes up around 45-50%.
This is certainly a valid concern. With enough stuff running, it’s always possible to overload the system and slow everything down.
I don’t run a media server, so that’s not an issue for me. I currently just have two HA instances with a handful of addons, including zwave and motioneye, and a couple other lightweight VMs for various other things. My NUC is several years old and my CPU is generally around 10% unless I’m watching the camera stream, then it jumps up closer to 20%.
The NUC is just so much faster that you can afford to throw some extra stuff on there and still have it outperform the Pi. I’d say it’s safe to start with everything on the NUC and reevaluate if/when performance starts to become an issue. If you have enough stuff on there to noticeably impact HA, then the Pi was never going to be enough anyway and you would still have needed the NUC or similar.
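And if one service ever does start to crowd HA out, you don’t have to split machines right away: Proxmox and ESXi both have per-VM CPU limit/reservation settings, and on the Docker side you can cap a hungry container with something like this (numbers are purely illustrative):

```yaml
# docker-compose v2.x sketch: hard caps on a heavy service so it can't starve HA
version: "2.4"
services:
  plex:
    image: plexinc/pms-docker
    cpus: 2.0         # at most ~2 cores worth of CPU time
    mem_limit: 2g     # hard RAM ceiling
    restart: unless-stopped
```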