Migrating from HAOS on a Pi to HAOS in a Synology VM

I’ve seen lots of threads on various types of migrations, but I didn’t find anything detailed on doing this. As many do, I consulted ChatGPT. And if you do this often, you will figure out that it can be confidently, completely wrong :).
So, I’m hoping there is some sort of guide for doing this. Is there? If so, I’d appreciate a pointer to it.
Anyway, here’s basically what ChatGPT suggested:

- Run HAOS in a Synology VM using their Virtual Machine Manager package
- Use the Pi just for Z-Wave: run zwave-js-ui there, use the Z-Wave JS integration on the Synology instance of HA to connect to the Pi over the network, and manage Z-Wave on the Pi
- Run all of the automations, add-ons, Node-RED, etc. on the Synology
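For reference, a minimal sketch of that split, assuming the Pi runs Docker and the Z-Wave stick shows up as /dev/ttyUSB0 (your device path will differ; the ports shown are zwave-js-ui’s defaults):

```shell
# On the Pi: run zwave-js-ui in Docker (assumes the stick is /dev/ttyUSB0 --
# check /dev/serial/by-id/ for your actual, stable device path)
docker run -d --name zwave-js-ui --restart unless-stopped \
  --device /dev/ttyUSB0:/dev/ttyUSB0 \
  -p 8091:8091 \
  -p 3000:3000 \
  -v zwave-config:/usr/src/app/store \
  zwavejs/zwave-js-ui:latest

# Then, in zwave-js-ui's settings, enable the WS server (default port 3000),
# and in Home Assistant add the Z-Wave integration pointing at:
#   ws://<pi-ip-address>:3000
```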

It sounds like a reasonable approach. But as I said, chatgpt can sound very right when it is just plain wrong.

Have you done this before? If so, did it work?

Running in a VM on Synology works fine. Unless you want to run Z-Wave on the Pi for some reason (like where it is located within your house), there is no need to do that. Just plug your Z-Wave adapter into the NAS and pass it through to the VM when setting up the VM. In any case, make sure you have enough memory in your NAS to create a sufficiently sized VM. (I think I run 2 GB.)

Thanks so much for the response. Yeah, I do like the idea of keeping the Z-Wave where it is, just because all of the routes are based on where it is now. I suppose I could reroute everything, but I read that this can sometimes create suboptimal routes because everything gets tossed up into the air at once. That may not actually be the case, though. And the Pi is more centrally located in my house than my NAS.
Yes, memory will be an issue. I only have 4GB now. My plan initially was to just get the VM going without restoring from backup, just to prove the concept. If that works, then I would probably buy more memory for the NAS.
The issue I’m encountering now is that I can’t, for the life of me, figure out how to create a VM in VMM that uses the qcow2 file I got from GitHub. I must be missing something very basic. Perhaps by the time you or someone else reads this, I will have figured it out :slight_smile:

Ok, I figured it out. I needed to import the disk image rather than create a new VM. It’s been a while since I messed with VMs and I’ve never not used an ISO. Now I have HAOS running in the VM. Onwards and upwards :slight_smile:
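For anyone following along, here’s a rough sketch of the steps that worked for me (the release filename and version below are just examples; grab the current .qcow2.xz from the HAOS GitHub releases page):

```shell
# Download the HAOS OVA disk image for VMs (version 12.4 is an example --
# get the latest from https://github.com/home-assistant/operating-system/releases)
wget https://github.com/home-assistant/operating-system/releases/download/12.4/haos_ova-12.4.qcow2.xz

# VMM wants a plain .qcow2, so decompress it first
xz -d haos_ova-12.4.qcow2.xz

# Then in Synology VMM: add the .qcow2 as a disk image, and create the VM
# by importing that existing disk rather than installing from an ISO.
```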

I should probably mention that the reason I’m migrating is that the Pi 4 is running out of steam. It would still probably be OK if I weren’t running InfluxDB and Grafana. I posted another topic about Z-Wave devices randomly appearing dead when I restart HA. Now I’m wondering if my performance issues are the cause.

Ok, I got things working with a VM on the Synology, with Z-Wave JS accessing the Pi over the network. I can see all of my Z-Wave devices. Pretty cool, actually :). As I said in my other post, I like this separation and would plan on running things this way.

I now have a couple of choices:

  1. Proceed as planned with the VM on the Synology. For this to work, I’m thinking I would need to buy an 8GB memory module for the Synology. I’d like to give HA 4GB just to make sure it has plenty of memory.
  2. Buy a mini PC and run HAOS there.

I like the idea of running on Synology as I already have it and it’s relatively lightly loaded. It’s mostly a media server with a bunch of ripped 4K Blu-rays. While streaming a 4K movie, CPU is at 5% and the read rate from the disks is about 8 MB/sec, as is network load. It seems to me that the only thing that really matters here in terms of HA performance is the CPU load.

It sounds like you’re running HA on your Synology. Which model do you have? How long have you been doing this? Has it worked well for you? Have you found any major downsides to not using dedicated hardware?

How worried do you think I should be about HA interfering with NAS behavior? Based on what I read, the VM should be well-isolated from everything else. But, obviously, I want my NAS to work well.

As for cost, a mini PC should be quite a bit more expensive than an 8GB memory module, which favors the VM approach.

Thoughts?

I have a 2-bay DS720+ (I actually have two of them; more below). I added an 8GB memory stick, so I have 10GB total. I’ve been running this configuration for probably 3 or 4 years with zero issues. What I find particularly nice about this is that it gives me a single “hardened” system for my most critical stuff. I run my PCs “light,” with all permanent file storage on the NAS, and the NAS (along with my router) is on a UPS, so they survive anything short of a serious long-term power outage, and even then they at least shut down cleanly. I have seen no performance issues with all my file accesses and HA running on the NAS.

Now I actually have two setups like this, because we have a seasonal house as well. It has an identical configuration running file services and HA for the seasonal home, so my actual experience with this setup is effectively more like 6 or 7 years. Not relevant to your questions, but I actually run each NAS in a RAID configuration and the two NASes duplicate each other, so I have many copies of my data (did I mention I tend toward paranoia? :smile: )

So my bottom line is that I actually prefer this setup to dedicated hardware for HA, because I can invest in one place for redundancy and hardening. I consider HA only slightly less mission-critical to me than my data storage.

Happy to share any other info if I haven’t answered all your questions.

While HAOS runs fine on Syno, if you plan to have a large environment, then HA will run really slowly at some point. No amount of RAM can fix it. The Syno CPU is just too slow once you get to a certain point. I was not even running Z2M or ZWJS or any actual ‘apps’ (no Plex server, etc.), and HA did not have anything but a basic dashboard at that time. The RAM and CPU load were always around 80% and it was s-l-o-w on my 1019+ (Intel CPU).

That was when I had about 120 automations and 200 devices (not entities, devices). I’m now at 185 automations with only a few more devices, and I can’t imagine how slow the Syno would be.

I moved to a Win11 system (AMD low-power CPU, 16 GB RAM, nothing fancy) that I already had running Z2M, ZwaveJS, Plex Server, and some other stuff… added a HAOS VM on Hyper-V… it is 10x faster, at least. I develop blueprints and restart things a lot; it’s easy for me to gauge the speed of the system just by how quickly the OS reboots or reloads the YAML.

Of course, how HAOS runs on a synology will depend a lot on which model it is. I have the DS1821+ which has a much more powerful CPU than yours. I also don’t have nearly that many devices. And, zwave-js-ui is still running on my Pi but I’m not sure how much CPU that takes.

I gave the HAOS VM 3GB of memory. With 2GB I noticed it was doing a lot of swapping. As of now, it’s all running great. And mounting shares on the NAS is a little zippier with the extra memory.

Minor update…
It looks like HA runs better with 4GB of memory. After a few days, the 3GB VM showed some swapping and seemed a little sluggish. 4GB seems better, zippier.
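If anyone wants to check this on their own VM, a quick sketch: from the HAOS SSH add-on (or any Linux shell inside the VM), look at the Swap row of `free`. If the “used” figure climbs well above zero after a few days of uptime, the VM probably wants more RAM.

```shell
# Show memory and swap usage in MiB
free -m

# Pull out just the "used" swap figure (third column of the Swap row)
swap_used=$(free -m | awk '/^Swap:/ {print $3}')
echo "Swap in use: ${swap_used} MiB"
```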

I have to agree on this one. I had it running on a DS920+ (24 GB memory, 6 assigned to the VM) and indeed the CPU turned out to be the limiter in the end. Automations could just stall for seconds sometimes. This behavior got worse as I added more and more add-ons, of course.
Also, I could crash HA by copying a lot of files on the NAS from one volume to another (where the VM disk image resided). So also take care to separate storage locations where you can.
In the end, an HA core restart was taking minutes.

If you don’t plan to use a lot of integrations/add-ons, you’ll be fine. If you’re going to grow, you might consider a separate piece of hardware.
An HA core restart can then easily be done in 20 seconds.

Interesting. I’ll keep this in mind.

I don’t think I’ll be expanding HA much beyond what I have now. But who knows. I tend to come up with more automation ideas regularly :).

For sure, the CPU in my DS1821+ is significantly more powerful than the one in your DS920+. If it starts struggling, I will buy a mini PC or something similar. However, I like the fault tolerance that the NAS gives me (disk failure). And I like that HA is running in a VM, so I can snapshot it and even remove and recreate it if I need to. If I used a mini PC, I’d probably run a VM there as well.

As for storage, I have two storage pools. One is for all our media (mostly lossless 4K movies). The other is typical storage. Even when streaming a 4K movie, the CPU was lightly loaded.

I recommend this. I started running VMs on my Synology DS920+. I eventually moved everything off of the Synology to Proxmox. Let the NAS be a NAS.

I do HA backups to the NAS. I do Proxmox backups to the NAS. I back up the NAS to a NAS-attached USB drive. I also back up the NAS to a USB drive attached to my main PC.

In the end, I have my Synology lightly loaded, two PCs running Proxmox lightly loaded, and a mini PC running Proxmox lightly loaded. This gives plenty of room to bring VMs/LXCs up in other places (probably still lightly loaded) if needed.

And, they are all fun to play with :slight_smile:

Just an update in case it helps someone…
The arrangement I have now, with the Pi 4 basically just running Z-Wave JS UI and the bulk of HA running in a VM on my NAS, is working very well. The setup is very stable and zippy. I do need to monitor the Z-Wave JS UI instance myself to keep it up to date alongside HA, but that’s fine with me. I really do like that the low-level Z-Wave stuff is all handled by the Pi.