HA Causing Permanent Drive Activity on Synology NAS VM (NOT Docker)

Hi all,

I’m running Home Assistant as a virtual machine on a Synology DS220+. The two drives of the 220+ are mirrored (RAID 1), and having gotten into HA after I bought the 220+, I now know I probably should have purchased the 420+, so that I could run HA on an SSD storage pool… NOT because of speed issues, but because of the constant/permanent spinning-disk activity on the first storage pool.

That said, I have seen the thread about using Docker on a Synology and just re-pointing the config folder to an external USB SSD… just as much drive activity, but loads quieter, with less wear and tear on the spinning disks. Now, that isn’t possible in a normal/standard virtual machine setup. But I purchased a USB SSD, and have mounted it to the VM, vs. the Synology itself.

In theory, that USB drive is part of the HA VM, not the Synology, but I’m very much NOT a Linux guy, and wouldn’t know whether it’s possible to get into the HA OS that I’m running and make that configuration change.

I am running a single VM with Home Assistant OS 6.4 and Home Assistant Core 2021.10.6. Hopefully someone out there understands what I am trying to do and can walk me through changing the back-end config, so that my config folder is on that mounted USB SSD! (For that matter… hopefully tell me if it’s even possible!)

Thanks!

Let me recommend a different concept.

I originally ran RAID on my servers because I was taught that data must be backed up in case of failure and blah blah, use RAID. At some point I realized this is not a good fit for my consumer use, for a few reasons:

1. RAID mirror disk wear is even across both drives. If Disk 1 fails, its mirror is probably not far behind. I learned this the hard way.
2. My goal is usually to keep a copy of some important one-of-a-kind data that isn’t totally critical. I could stand to lose a month of movie/photo backups… in most cases the data doesn’t change month to month, and the changes are small (two movies that are easily replaced, or photos that already have a year of backup in the cloud, so they’re already redundant).
3. In a 2-bay server I’d rather put the apps on Disk 1 and static data, like movies, on Disk 2, for exactly the problem you’re working to prevent. A constantly running drive will likely fail sooner, and I don’t want mixed use on one disk.
4. RAID doesn’t help when fire or an electrical surge comes, which is more likely than an HDD failing unrecoverably without any advance warning (I think Synology’s SMART stats would warn you in time to replace the drive or move the data, or the drive’s performance would make you notice).

In the end I decided:
no RAID
static data on Disk 1, no mirror or RAID
DB and app data on Disk 2, no mirror or RAID
external HDD or SSD rsync backup, manual or automatic, every 3 months or whenever I add critical data

This doesn’t answer your question, but it may be a better or easier option to implement.
I bought a multi-bay server, but even now I skip RAID and use an incremental backup scheme, with external backups to a disk kept offsite, and app settings/config backed up to GitHub. In your case you could even back up over the network to an SSD attached to a RasPi.

Is the external USB SSD passed through to the HA OS VM?

If so, you should be able to use the data move command or the UI to move the /data partition to it, assuming it is larger than the currently used virtual disk. I have only tried this on a Pi 4 myself.
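For reference, the same move can also be started from the `ha` CLI if you have console or SSH access to the VM. A sketch (the device name below is just an example — check the list output for your actual device):

```shell
# Show the drives HA OS considers eligible as a new data disk
ha os datadisk list

# Move the data partition to the chosen device
# (/dev/sda is an example name; the target drive gets wiped and reformatted)
ha os datadisk move /dev/sda
```

The host reboots as part of the move, so expect HA to be briefly unavailable.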

[screenshot]

@tmjpugh, thank you for your answer. I’ll absolutely be considering it as I move through this, especially if there isn’t an easy solution.

@cogneato, thanks for this tidbit… I didn’t know this function was even there. That said, what am I doing wrong?? I have only the “Hardware” and “Import from USB” options here.

So, yes, in the Virtual Machine Manager on the Synology, I have set up USB passthrough for this USB SSD. In fact, I have a good indication that the VM “claimed it as its own”, because it disappeared from the Synology interface as an external device.

Before I set it as a passthrough to the VM, I had formatted it through the Synology interface, with the EXT4 option. Is there a different methodology that I should have followed? Should I not have formatted it before setting it as passthrough to Home Assistant?

What version of supervisor and host os do you have?
This pi4 is on:
supervisor-2021.10.0
Home Assistant OS 6.5
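If you want to double-check on your side, the versions can also be read from the HA console or SSH with the `ha` CLI (a quick sketch, assuming you have terminal access to the VM):

```shell
ha core info        # reports the Core version
ha supervisor info  # reports the Supervisor version
ha os info          # reports the host OS version
```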

The drive will be formatted again during the process, so what you have should be fine.

core-2021.10.0
supervisor-2021.10.0
Home Assistant OS 6.4

I’m guessing that my not having the “Move Data Disk” capability doesn’t come down to the difference between OS 6.4 and 6.5, but let me know if you know differently.

I have a similar VM setup to yours, except that I have a Synology DS918+ which is 3 years old, with 2 x 6TB WD Red drives and 2 x 3TB WD Reds migrated from my previous DS412+, which are now 8 years old. Each pair is SHR-mirrored and has never flagged up any warnings or errors. Obviously I only use the older drives for less important stuff, and I am curious how long they will actually last.

I’m curious why you think that constant activity is a problem as this is how they are designed to work. I have many different packages installed and the one that seemed to hammer the disks the most was Surveillance Station for obvious reasons.

Personally I see value in having mirrored disks but I also take external backups on a regular basis using Hyperbackup and Snapshot Replication.

So, @cogneato, I updated to OS 6.5, and I then DO have the option to change data disks, but only to a non-existent DVD drive. I can only believe it’s some sort of ISO I used to boot the VM in the first place. Now, today is the very first day I’ve been into the System area checking out hardware, but I do show what I really believe is my USB SSD:

But the issue is that the device name (circled) isn’t available as an option for my data disk.

Truly appreciate any assist if there’s one to be had.

@Jonah1970, I get it… in the end, I guess I know the drives are designed to work like they do. Call me skittish, I guess… I’ve had a Synology DiskStation since 2014 (a DS414j), and the thing was clunky, but very reliable… I’m literally just not used to seeing constant flashing and hearing drive noise at all… and it is constant while HA is running.

I have IronWolf Pro NAS drives in the DS220+, so yes, they’re built for work… but when I bought them, I noticed that people’s reviews were along the lines of “reliable, but really noisy”. So… I have this unit in my utility room, and when there’s no A/C or heat running, I can hear the thing chugging away perfectly, while sitting in my living room above it. Just annoying is all.

When it comes right down to it, I will very likely shell out the money for a DS420+, and then just go two and two, SSD/spinning, and permanently load the whole HA VM onto SSD for the speed and silence of it… but that’s $600 away, and likely not going to happen very quickly with holiday expenses coming up.

You asked… no judgement please… I just wanted to candidly answer.

No problem, I understand where you are coming from.

Bear in mind that all common computer systems use virtual memory, paging data between RAM and a hard disk or SSD. This is just the way modern systems work.

One thing that can cause more caching (or disk activity) is a lack of RAM, so how much RAM do you have in your DS220+? I believe that the requirements for running HA in Synology VMM are 2GB minimum, 3GB recommended, 4GB ideal. The DS220+ has 2GB as standard, which is expandable to 6GB; I currently have 8GB in my DS918+, with 3GB allocated to the HA VM.

The other thing to note is that DSM has some really good diagnostic displays, so you could monitor your disk utilisation using the Resource Monitor package. I have included mine below in case you wanted to compare:

@Jonah1970, well, first off, thanks for the “duh” moment you provided me; I’m an IT guy of 20 years, and didn’t even think to go back to the hardware recommendations. I started with Home Assistant only about 2 months ago on the Docker image, and quickly realized some limitations there, so I migrated to the full OVA virtual machine. But somewhere, way back when I was even contemplating the Docker image, I read “2GB should be fine”. So there’s the “duh” moment you gave me… I absolutely should have checked that first. Now, I will say, I went in immediately after reading your post and doubled from two to four GB. It didn’t much help the drive activity, but it did improve something I wasn’t even posting about… sensor reaction time. I had been feeling that sensor reactions were getting slower, but thought of that as another problem to tackle later. As it is, I noticed immediate improvement there after the memory boost.

So… I actually went above and beyond the “official” RAM limit from Synology, and have a very clear indication that the DS fully recognizes and uses the additional 16GB I fed it… so I have 18GB total… not hurting at all for RAM. Drive activity seems not to have calmed down, but I wonder about the size of my implementation too. With your top graph PEAKING at 20%, can I ask how many devices/entities your instance supports? I’ve provided about one hour of mine below, and you can see I’m well over your average. But I don’t know what’s “normal” for a number of devices… I went headlong into HA when I found it, and already have 79 devices / ~270 entities, including three camera streams. So… I’m curious whether what I’m seeing is absolutely expected, and I’m just making too big a deal out of it.

Thanks!

Sure, no problem: I have 90 devices and 610 entities. I also have a bunch of packages installed on the Synology, including Docker with 4 active containers. I did have many more, but moved the HA-related ones to Supervisor.

I am thinking that your disk activity might be down to the cameras, as mine was higher when I had Surveillance Station installed. Why don’t you shut them down or unplug them to see what happens to the disk activity?

Also have a look at Task Manager in Resource Monitor, as this should give a clue as to what is causing the disk activity; my HA VMM entry seems to be minimal.