Storage...what are you guys using for your home networks?

Hey guys…my current setup includes 2 Ubuntu servers. One is a small Gigabyte mini with about 360GB or so and it runs HA (18.04) and the other is a Gigabyte desktop tower with a 500GB internal drive and two external USB Costco Seagate Specials…one is 8TB and the other is 4TB.

I hate the external drives…they’re just so damn slow, not to mention they make me nervous…no RAID, and, in my opinion, they’re really not connected to the system (I don’t consider USB to be a viable storage pathway compared to newer technologies.)

I’d like to build myself a NAS and I’m looking at the differences between QNAP, Synology, and just running a FreeNAS box.

Network is 1000BaseT (10G is just impractical right now.)

One of my Ubuntu boxes is a LAMP server (public) and I host a few websites on it.

I have a few DL380 G7 servers sitting around, but 1.2TB 10K SAS drives are outrageously expensive, so I think that’s out of the running for any future use.

Let’s say you were starting from scratch with no storage solution other than your internal drive on your PC or Mac.

What direction would you go to beef up your storage so you could have somewhere around 20-30 TB, with the option of redundancy (probably RAID 1+0)?

Synology? QNAP? Build-your-own FreeNAS?

Thanks for your thoughts.

I’d suggest you’re better off asking on a data hoarder/homelab forum/Discord.

Personally, I’d go TrueNAS (previously FreeNAS) with ZFS.


Have you looked at the Xpenology project? I’m running one on bare metal (an old PC) and it works quite well. For some time I was running HA in Docker on this machine (though I migrated to ESXi), but I’m still keeping MariaDB for HA on this NAS. So it might be quite useful for HA as such.

It really depends on what you’re using the storage for, how many people are using it, and how much storage you use now and will use in the next few years.

So you ask two questions: what are we using currently? And what would we start with?

Currently I really only have about 200GB of data I really care about, the kind whose loss would be a major problem: my documents, pictures, website, related databases, mailboxes, encryption keys, passwords, disaster recovery info for one system, etc. Almost all my data access is via redirected folders (i.e. My Documents, My Pictures) that point to my file server and are cached using Offline Files, so it’s easy to back everything up from that single server.

Every Saturday a scheduled job runs and backs all this important data up to LTO2 tape. I have a tape rotation of 10 backups. After the backup finishes, that tape goes off site to location 1. The tape from location 1 then moves to location 2. The tape from location 2 then moves to the tape case on top of the server. And so the round-robin continues as it has for over 20 years (I just upgrade tapes and the drive as necessary).

20-30TB of drives is easy. Any decent NAS can do it, via a RAID level you find appropriate like RAID 5 or RAID 10. I tend to prefer Netgear or Buffalo: a 4-bay unit with 3 enterprise-grade 10TB drives (IronWolf or similar) in RAID 5, leaving 1 bay free for expansion. Obviously rebuild times are long on RAID 5. RAID 10 needs more drives (4).
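If you want to sanity-check the drive math, here’s a rough helper; it ignores filesystem overhead and the TB-vs-TiB gap, so treat it as back-of-the-envelope only:

```python
def usable_tb(drives: int, size_tb: float, level: str) -> float:
    """Approximate usable capacity for common RAID levels.
    Ignores filesystem overhead and decimal-vs-binary TB differences."""
    if level == "raid5":                 # single parity, needs >= 3 drives
        return (drives - 1) * size_tb
    if level in ("raid6", "raidz2"):     # double parity, needs >= 4 drives
        return (drives - 2) * size_tb
    if level == "raid10":                # striped mirrors, even count >= 4
        return drives // 2 * size_tb
    raise ValueError(f"unknown level: {level}")

print(usable_tb(3, 10, "raid5"))    # 20.0 -> the 3x10TB RAID 5 example above
print(usable_tb(4, 10, "raid10"))   # 20.0 -> same capacity, but needs a 4th drive
```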

Redundant != backup. Really, at that volume and in the consumer space, your only option for off site (unless you do “cloud” like Amazon Glacier, and then you have lost control of your data) is USB HDs. 10+ TB will require one of those USB enclosures with 2 or more drives as a volume set. Then you can store that off site and perform differential backups using smaller USB HDs. Speed shouldn’t matter since this can happen overnight.

You can also encrypt that cloud backup - that’s exactly what I do with rclone. The data is encrypted before it leaves my network.
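If anyone wants to automate that, here’s a minimal sketch of a nightly job that shells out to rclone. It assumes you’ve already created a crypt remote with `rclone config` (I’m calling it gcrypt here, and the paths are made up); rclone encrypts the data client-side before it leaves the network:

```python
import subprocess
from datetime import date

# Hypothetical names: "gcrypt" is an rclone crypt remote set up beforehand
# with `rclone config`; /srv/important is whatever you actually back up.
SOURCE = "/srv/important"
DEST = "gcrypt:backups/nightly"

def nightly_backup() -> None:
    """One-way sync to the encrypted remote, with a dated log file."""
    log = f"/tmp/rclone-{date.today()}.log"
    subprocess.run(
        ["rclone", "sync", SOURCE, DEST,
         "--log-file", log, "--log-level", "INFO"],
        check=True,
    )

if __name__ == "__main__":
    nightly_backup()
```

Drop it in cron or a systemd timer and, as noted above, speed doesn’t really matter overnight.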


Good info. I have been using Syncthing to ensure there’s a copy of my data offloaded to another location (the office) just in case the house burns down, as I agree that redundancy isn’t backup (depends on architecture, I suppose).

There is of course data that I want to preserve forever, and data that needs to be preserved in order to get back up and running quickly if needed. For the archives, I duplicate to another off-site system, but I also back up to Glacier (although I’d prefer to be in complete control of it). For everything else, I just sync to that off-site system.

Truth. I have typically been more concerned about the availability of the data than the security of it, so that’s a good point.

I’ve been on the FreeNAS route for around 8 years now and have never missed anything.
I recently added encrypted cloud backups using the FreeNAS Duplicati plugin. I really like its smart retention policy.

The default policy is to store one backup for each of the last 7 days, each of the last 4 weeks, and each of the last 12 months.
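If that sounds abstract, here’s a rough sketch of the same bucketing idea; it’s just an illustration of the concept, not Duplicati’s actual code:

```python
from datetime import datetime, timedelta

def thin_backups(timestamps, now=None):
    """Keep one backup per day for the last 7 days, one per ISO week for the
    last 4 weeks, and one per month for the last 12 months (newest wins)."""
    now = now or datetime.now()
    keep = set()
    rules = [
        (timedelta(days=7),   lambda t: t.date()),             # daily buckets
        (timedelta(weeks=4),  lambda t: t.isocalendar()[:2]),  # weekly buckets
        (timedelta(days=365), lambda t: (t.year, t.month)),    # monthly buckets
    ]
    for window, bucket in rules:
        seen = set()
        for t in sorted(timestamps, reverse=True):  # newest first
            if now - t <= window and bucket(t) not in seen:
                seen.add(bucket(t))
                keep.add(t)
    return sorted(keep)   # anything not returned would be pruned
```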

I’m only chiming in to whinge. I use a NAS for local backup and Zoolz for cloud. I paid for a lifetime 500G+500G Zoolz account and last year they tried to get these accounts to pay a monthly fee (IFTTT anybody?). They got a big backlash from that, so now instead they’re shutting down the Australian server and telling everybody to create a new account with a discount, but no lifetime. It will be interesting to see how they respond to my query about transferring my lifetime account.

I use Unraid. I built an AMD Ryzen 3900X based server and it works very well.
Unraid also makes running virtual machines accessible to most people while passing through the available hardware.
It’s not free, but worth a look.
There is a 1 month trial period.

I use OpenMediaVault and I’m satisfied with it, very easy to deploy and very versatile.

All Ubiquiti UniFi: USG router, PoE switch, access points.

QNAP NAS mirrored, synced 24/7 with Google Drive, and backed up daily to an external HDD which gets swapped out once a month with the one in my safe deposit box.

XigmaNAS (originally FreeNAS, then NAS4Free) user here. ZFS storage, no VMs, but using NextOwnCloud & Syncthing to sync content. It’s also my SSH server for access from outside, with local port redirects enabled.
I think this uses marginally fewer resources than TrueNAS does now, so it works better on older hardware.
It’s a Supermicro board with an old Xeon E3-1230 V2 CPU and 32GB of ECC RAM. All spare parts or cheap eBay purchases.

I would defo use one of those DL380s and just use SATA drives instead. I have a SAS card hooked up to mine, in IT mode for ZFS, and it works like a charm. I have a combo of 3TB drives and 6TB drives in a RAIDZ2 config (double parity, like RAID 6). Recently I opted for Toshiba drives instead of WD Reds.
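For reference, creating that kind of pool is basically a one-liner once the HBA is in IT mode. A minimal sketch; the pool name and device paths are made up, and on real hardware you’d want stable /dev/disk/by-id names:

```python
import subprocess

# Hypothetical device list; substitute your own /dev/disk/by-id/... paths.
DISKS = [f"/dev/sd{letter}" for letter in "bcdefg"]

# `zpool create <pool> raidz2 <disks...>` builds a double-parity vdev,
# the rough RAID 6 equivalent mentioned above.
subprocess.run(["zpool", "create", "tank", "raidz2", *DISKS], check=True)
```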

To get 20TB in RAID 10 you’ll be losing 20TB to mirroring, no? Too high a price for me. I think Unraid offers the lowest parity cost for redundancy, and it offers a lot of flexibility too. I’m thinking of going that route but the price of it puts me off.

6 years ago I built a custom storage/DB server, fairly high-end specs for those uses, but expecting that at some point I would use it for more. It was running Linux Mint based on Ubuntu 14, then in 2019 I went to Ubuntu 18.04 and redid the OS filesystem with a new drive for much faster DB access. On Mint I had to do a lot of hacking to get things the way I wanted; with 18.04 it was almost plug and play.

It has been 100% rock solid the entire time, using an Adaptec hardware RAID controller, a dual Intel NIC, and 6 4TB WD Red Pro drives in RAID 6 giving me 14TB of storage. In 2019 I added 3 1TB SSDs in RAID 5 (Intel chipset) for 2TB of high-speed storage, and upgraded the RAM and all the device firmware before it got the new OS. Using the dual NIC properly allows all the SMB/NFS file access to be done on one port, and all management and service access on the other; this means service reliability/latency is not degraded during a wire-speed file transfer.

The hardware was all workstation or high-end desktop parts, and great care was taken during assembly. In the past I built a lot of high-end workstation and gaming systems for people, so I knew what needed to be done to keep the thing running 24/7 in a hot furnace room with zero maintenance. The most effort I spent was on cooling the RAID controller CPU, which would generally run at 90C; I got it down to 37C with some custom lightweight air cooling. The drives all run around 35C with some low-flow cooling, keeping them at optimal bearing temp.

I would not hesitate to do the exact same thing if I were doing it from scratch, knowing how well it turned out. In its current state it has a quad-core Haswell Xeon CPU (turbo and HT disabled) with 32GB of RAM and runs numerous services. I would probably go with an 8-core AMD CPU, the same amount of RAM, a newer RAID controller (the old one is now unsupported), and 8 10TB drives in RAID 6, which would give me around 43TB of usable space.

In 2019 I also installed a 27U rack, so any new system would be rack-mountable. I was actually planning to move all the hardware out of the current case and onto the rack, but could not find an empty case with a good enough backplane for the drives at a reasonable price, so now the server is the only thing not on the rack.

Currently running
Ubuntu 18.04 with 20.04 kernel
Docker: HA, Rhasspy, BlueCherry with motion detection
Databases: Prometheus, MongoDB, MySQL
DNS, Filtered DNS, NTP, NUT, Duplicati, Mosquitto
Grafana, Netdata
Unifi controller, Adaptec RAID controller
Samba and NFS file servers

I’m using OpenZFS on Ubuntu and it works well.