I am more concerned with the number of writes, which cause wear on the SD card, than with their speed.
OK, as a comparison, here is a set of results taken before and after starting up HA. I’m getting 15-18 writes per second consistently in steady state, after a big spike during start-up. The steady-state rate is due to a raft of smart switches sending meter readings every 10 seconds, which creates a ton of events all being logged to the HA database.
[email protected]:/home/homeassistant/.homeassistant $ iostat -d -x 60
Linux 4.9.35-v7+ (hassbian)  10/07/17  _armv7l_  (4 CPU)

                  rrqm/s  wrqm/s   r/s     w/s   rkB/s    wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
HA not running      0.00    0.15  0.00    0.20    0.00     1.40     14.00      0.00   0.00     0.00     0.00   0.00   0.00
HA Start-up         0.00  114.13  0.28  134.05    3.80  1336.13     19.95      0.57   4.26     9.41     4.25   1.27  17.12
HA Steady-state     0.00   14.57  0.00   18.77    0.00   187.33     19.96      0.09   4.94     0.00     4.94   1.21   2.27
Repeat of the above, but with History/Logbook/Recorder/Logger turned off:
                  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
HA not running      0.00    0.15  0.00  0.20   0.00   1.40     14.00      0.00   0.00     0.00     0.00   0.00   0.00
HA Start-up         0.00    1.00  0.00  1.02   0.00  10.07     19.80      0.02  18.20     0.00    18.20   4.59   0.47
HA Steady-state     0.00    0.33  0.00  0.60   0.00   6.33     21.11      0.00   1.39     0.00     1.39   0.83   0.05
So in a normal steady state system, your machine is writing 187 kBytes/s, whereas mine is 40 kBytes/s. Considering I also run influxdb on my Pi, I find this a little strange.
Do you experience any SDCard failures?
I suspect we have different message profiles. Currently I have a bunch of devices sending meter readings at a high rate into HA with full event logging. Each reading is around 5-6 messages (current, voltage, power, energy, …), and these all add up to a ton of database writes. Whereas you may (?) have a less “chatty” network, albeit with two sets of writes (HA and InfluxDB). I have Influx running on a separate server, so the RPi isn’t having to carry that load as well. So the numbers could be right.
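For what it’s worth, a middle ground between full logging and turning the Recorder off entirely is to filter the chatty entities out of the database with the recorder’s exclude option. A sketch; the domain and entity names below are placeholders, not anything from my setup:

```yaml
# configuration.yaml -- keep the Recorder, but drop the noisiest events.
# Entity and domain names are illustrative placeholders.
recorder:
  exclude:
    domains:
      - updater
    entities:
      - sensor.plug_kitchen_power
      - sensor.plug_kitchen_voltage
      - sensor.plug_kitchen_current
```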
For me, SD card issues have generally had more to do with unexpected power outages leaving the sd card in a corrupt state rather than writes wearing out the card. I had a power outage recently that messed things up and decided to move to an RPi3 with a USB flash drive mounted instead. Not sure it will make a difference but will see. Best is a sound backup strategy of course. 8)
You gotta love the global community. I re-read that sentence and did smile at how it could be misinterpreted! Of course, “sound” is Irish for “robust”. In my case, backup strategy is an emergency crate of Guinness to help me mourn the lost data!
I love Guinness and Ireland! By “sound backup” I imagined my old Atari or ZX Spectrum connected to a tape recorder, making backups. And now it’s time to open my Budweiser Budvar, cheers from Czechia!
I’ve seen similar problems: two cards burned out by Hass in about 1.5 years already.
From what I’ve seen, the root cause is not the write speed of the SD card, but rather the use of sqlite as a database. Sqlite does a filesystem flush after every change to the database, which not only alters the blocks of the database on disk, but also the filesystem metadata of the sqlite database. This prevents the Linux kernel from grouping and optimizing disk writes, but on the other hand it makes sure the on-disk database will always be consistent and up-to-date.
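To make the trade-off concrete, here is a small standalone demonstration of the generic SQLite knobs that reduce those per-change flushes; this is not something HA exposes directly, just plain SQLite behaviour. WAL journaling groups writes, and synchronous=NORMAL syncs only at checkpoints instead of on every commit:

```python
import os
import sqlite3
import tempfile

# Throwaway database file for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# WAL journaling appends changes to a log instead of rewriting pages in
# place; PRAGMA journal_mode returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# synchronous=NORMAL drops the fsync-per-commit of the FULL default,
# at the cost of weaker durability guarantees after a power loss.
conn.execute("PRAGMA synchronous=NORMAL")

conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO events (data) VALUES (?)",
                 [("reading",)] * 100)
conn.commit()
```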
Now, the weak point of flash media is not write cycles but erase cycles: flash blocks wear out as they are erased. Flash media controllers try to even out the wear by shuffling logical blocks around, but there’s only so much wear leveling an SD card can do. Given the number of writes (and syncs) you get with a Hass system with many events, you can imagine the SD card takes far more wear than the casual use SD card vendors expect.
I have been thinking about how to solve this for quite some time, and I’ve come up with some possible directions for a solution. Here they are, in no particular order:
- Create a tmpfs and put the sqlite database on it. Works great, but the database will not be persistent across crashes or reboots.
- Same as option 1, but have a cron job copy the database back to disk every hour. At boot, copy the database from disk to tmpfs, start Hass, and off you go.
- Put the sqlite database on NFS storage backed by something that’s not flash, or by a lying NFS server telling the client “yes, I have synced your blocks to disk”, while at the same time finding a good time to actually group that bunch of writes to disk.
- Tell Hass to use an in-memory Sqlite database. Same as option 1 with less fiddling, but not even consistent across Hass restarts.
Configure like this:
recorder:
  db_url: 'sqlite://'
So far, these are the easy options that don’t require any changes to Hass.
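Options 1 and 2 could be wired up roughly like this; the mount point, size, filename and paths are illustrative (home-assistant_v2.db is the default Recorder filename), and note that a copy taken while Hass is mid-write may be inconsistent:

```
# /etc/fstab -- RAM-backed filesystem to hold the database
tmpfs  /var/tmp/hass-db  tmpfs  defaults,size=256m  0  0

# crontab -e -- copy the database back to the SD card once an hour
0 * * * * cp /var/tmp/hass-db/home-assistant_v2.db /home/homeassistant/.homeassistant/
```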
A solution that needs coding is to tell Hass to put the sqlite database in memory (an RPi 3 has sufficient memory for that) and have Hass back the database up to disk once every hour (or on whatever configurable interval). This is very easy to do in C; the Sqlite documentation even has example code. Unfortunately I haven’t found a way in SQLAlchemy to get at the underlying Sqlite connection, and even if I could, the pysqlite library doesn’t support the backup functions the C library does.
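A crude workaround that stays within plain pysqlite is iterdump(), which replays the whole database as SQL statements; it is slow for big databases, but needs no C-level backup API. A minimal sketch, with a made-up schema and path standing in for the real Recorder database:

```python
import os
import sqlite3
import tempfile

def snapshot_to_disk(mem_conn, path):
    """Serialise an in-memory SQLite database to a file on disk.

    iterdump() yields the database as SQL statements, so it works even
    on old pysqlite versions that lack the backup API.
    """
    if os.path.exists(path):
        os.remove(path)
    disk = sqlite3.connect(path)
    disk.executescript("\n".join(mem_conn.iterdump()))
    disk.close()

# Stand-in for the Recorder's in-memory database (schema is made up).
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, data TEXT)")
mem.execute("INSERT INTO events (data) VALUES ('meter reading')")
mem.commit()

path = os.path.join(tempfile.mkdtemp(), "hass_snapshot.db")
snapshot_to_disk(mem, path)
```

A cron-style timer calling snapshot_to_disk() every hour would give option 2's behaviour without leaving the process.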
Would switching the database to MySQL reduce the number of erase cycles?
A good way to go might be to just have the DB written to disk when doing a reboot.
Though you’ll have to be careful, because a HASS database can easily grow to hundreds of MB.
I don’t know if switching to MySQL or MariaDB reduces the number of erase cycles. I think it does, because in my experience MySQL can be tuned to reduce the number of disk updates, at the risk of a corrupted database after a crash. Would be worth trying.
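One real knob along those lines is InnoDB’s flush setting; a sketch of the trade-off (config file path varies by distro):

```
# /etc/mysql/my.cnf (path varies by distro) -- flush the InnoDB redo log
# to disk roughly once per second instead of on every transaction commit;
# a crash can lose up to about the last second of writes.
[mysqld]
innodb_flush_log_at_trx_commit = 2
```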
Another possibility would be to use a MySQL database on a remote host.
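The recorder’s db_url accepts a remote MySQL server; host, credentials and database name below are placeholders, and a MySQL driver for SQLAlchemy must be installed on the HA host:

```yaml
# configuration.yaml -- placeholders, not a working setup
recorder:
  db_url: mysql://hassuser:SECRET@192.168.1.50/hass_db?charset=utf8
```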
After burning out my first SD card within a few weeks of running Hass, I attached a 1.8″ USB HDD to my RPi3. Now I use the SD card only for the boot partition. USB boot did not work for me. Running everything from the HDD feels faster than my SD card did.
Today I would just buy a cheap USB SSD.
OK, you gotta let me know how you’re doing that. I have a 4 TB NAS with GigE and various SoCs, including an Odroid C2+, a couple of PCDuino3 Nanos, an Odroid HC-1, and a couple of Pis. I would love to just have them grab their image from the NAS. Right now I can’t even get Hass to run from the NAS.
Anyone had luck with MLC-based NAND cards? I’ve just ordered an industrial Kingston card in the hope it’s a bit more robust.
These are my very old notes, at least 3+ years old, but I think my Pi is running the latest OS so they should still work. Good luck NFS mounting:
Copy the root partition from the SD card to my NFS server: “tar -cf - --one-file-system -C / . | ssh server tar -xpf - -C /foo/rpi”
In /boot/cmdline.txt, change the root device from the SD card to NFS. From: dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait
To: dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs ip=dhcp nfsroot=192.168.1.11:/volume1/bb-rpi/root,vers=3 rw elevator=deadline rootwait
Comment out the eth0 lines in /etc/network/interfaces
Comment out the root (/) mount in /etc/fstab.
Anybody tried booting completely off USB? https://github.com/raspberrypi/documentation/blob/master/hardware/raspberrypi/bootmodes/msd.md
This is only possible with the RPi3, but if that works I would consider upgrading my Pis.
My Pis are fully running on an SSD connected via USB. No SD card whatsoever.
I’m running off a 32 GB USB drive on my Raspberry Pi 3, with no SD card installed. I followed the instructions here:
The Raspberry Pi system is quite fragile, so I personally always needed a backup. But now I have solved the issue. :) This guide really helped: How to Backup up and Restore your Raspberry Pi SD Card