Hello there,
this is an alert post more than anything else.
I’ve been using Home Assistant for almost 4 years now, if I’m not mistaken, and it’s been a love/hate journey.
Today, I need to tell anyone who runs Home Assistant from a MicroSD card something important!
It’s now the fourth time Home Assistant Recorder has completely fried one of my drives. Yup, you read that right. The drive is now garbage and needs to be replaced.
At the beginning of my journey with Home Assistant, I used a Raspberry Pi 3 and a Class 10 MicroSD; 8 months later it had written itself to death. I wasn’t able to get anything back - total loss. No backup.
I purchased a new SanDisk MicroSD card and redid everything. It lasted one year. Fried. That time I had made a backup.
Again, I purchased a new SanDisk MicroSD and, similarly, it fried itself after some time. And lastly, the most expensive loss: a 120 GB SSD in my NUC, which lasted about 2 years, even with all precautions taken, such as excluding most entities. Home Assistant Recorder has fried every one of those solid-state devices. The SSD technically still works, but as soon as anything tries to write to it, it unmounts itself and spits out all sorts of errors, such as -bash: /usr/bin/sudo: Input/output error for every command.
EDIT: Please read post #49
I know there is a blue disclaimer at the top of the Recorder documentation indicating that it might “reduce” the drive’s lifespan, but it absolutely kills it.
If you are currently running Home Assistant on a MicroSD card, switch the database to an external HDD or disable the recorder NOW, before it’s too late.
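For anyone wondering what that looks like in practice, here is a minimal sketch of a recorder entry in configuration.yaml that puts the SQLite database on an external drive instead of the SD card (the mount path /media/usb-hdd is just an example, use whatever your drive is actually mounted as):

```yaml
# configuration.yaml - minimal sketch; assumes an external drive is
# already mounted at /media/usb-hdd (adjust the path to your setup)
recorder:
  # four slashes: "sqlite:///" followed by an absolute path
  db_url: sqlite:////media/usb-hdd/home-assistant_v2.db
```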
I’ve never had an SD card or SSD issue and won’t stop using my MariaDB add-on, but I am enthusiastic about everyone having daily automated off-server backups.
It depends on the load. I too had 2 MicroSDs die in the past 4 years, and those weren’t cheap ones. Since the last incident I have pointed the recorder at a different machine where MariaDB stores its data on a regular HDD. That setup has been pretty stable for a while now.
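For reference, pointing the recorder at a database on another machine is a one-line change; a sketch along those lines (host, database name and credentials are placeholders):

```yaml
# configuration.yaml - recorder writing to a MariaDB/MySQL server on
# another machine; IP, user, password and database name are placeholders
recorder:
  db_url: mysql://hass:CHANGE_ME@192.168.1.50/homeassistant?charset=utf8mb4
```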
I’ve had HA running on a 3-year-old Samsung 860 500 GB SSD in an external enclosure for well over a year now (venv and Docker, both with the recorder pointing to a MariaDB instance on the same Pi 4 2 GB, along with Node-RED and Pi-hole) without skipping a beat. I even had the recorder running against a local SQLite db for a bit and it’s still going strong.
MicroSD cards, yeah, for sure. But a good SSD (not even a “quality” one), no way I can see the recorder kill it.
With that said… git backups every 15 minutes (super simple to set up) are a lifesaver.
I must’ve got pretty lucky then. 4 entirely different storage devices and 2 entirely different machines.
It has to do with the number of entities. I currently have 5 pages FULL of things, all of them writing constantly: temperature sensors, motion sensors, and, most importantly, 3 power meters that update every 2 seconds (24/7).
If you are writing that amount of data to a local SQLite instance, I can see it possibly killing a MicroSD card or crappy USB thumb drive. But actually killing an SSD? I just don’t see how that’s possible. Even cheap SSDs can handle reading/writing upwards of 100 GB daily.
I don’t see how 5 pages of things could generate enough data in a single day to actually kill an SSD. I have over 400 devices (a mixture of physical and virtual) and well over 750 entities, and on any given day I generate MAYBE 5 GB of data. TOPS. That’s with 7 instances of Glances, a UniFi integration, and Sonarr, Radarr and SABnzbd integrations all pumping a ton of data into HA (all well known to create tons of entities), plus over 100 Zigbee devices (each of which creates, on average, 3-5 entities).
More than likely, I’d go with what @Tinkerer stated and/or look at where you are housing your drive. Is it excessively hot? Do you have it in an enclosure with no thermal protection? Heat will kill an SSD faster than any reads/writes will.
Beyond that, do you need EVERY sensor reading to be recorded? I only record my power, temperature, and battery entities to the database and InfluxDB. The rest of it is merely noise that I’ll never look at.
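For illustration, an include-only filter along those lines could look like this (the glob patterns are assumptions about entity naming, adjust them to your own entities):

```yaml
# configuration.yaml - record only what you actually look back at;
# the glob patterns below are examples, match them to your entity names
recorder:
  include:
    entity_globs:
      - sensor.*_power
      - sensor.*_temperature
      - sensor.*_battery
```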
I think that such posts (or stuff like “why is HA so bad for me lately?”, “are there any other unsatisfied HA users?”, “is there any good home automation software besides HA”, etc.) should be locked by the moderators from the beginning and not allowed to develop (even if the OP had fair intentions to begin with).
The present issue simply confuses users, as it is not based on any facts except the OP’s claims (which contradict logic and most users’ experience):
a MicroSD with HA (or any other system that makes use of a fairly large database) is a time bomb due to the large number of I/Os (in fact, I think that Home Assistant should not even advertise any Raspberry Pi as a recommended device for installation, unless booting from SSD); it is true that a MicroSD might have worked very well for the past 7 years (Happy Birthday) but it will likely fail at the most inconvenient time;
the price/performance difference between a Raspberry Pi (even a 4 with 8 GB RAM) and an Intel NUC is hugely in favor of the latter (the sole reason to use a Raspberry Pi, considering the above, is if booting from an SSD); the imminent danger of losing the entire HA setup is simply not worth the price difference between a Raspberry Pi and an entry-level Intel NUC (which might be very, very thin, considering the additional costs for the Raspberry Pi such as a MicroSD card/SSD for boot, power supply and case);
HDDs are going to bring a lot of performance drops to a low-power platform such as a Raspberry Pi, or even to an Intel NUC (in fact, the single biggest performance improvement for any PC platform launched within the last 15 years comes from upgrading an HDD to an SSD);
the I/Os required to destroy an SSD are beyond what the average user would be able to rack up, being in the range of decades (and the drive would, most likely, outlast most other electronic components). Urban legends keep claiming that HDDs are superior to SSDs in terms of reliability; however, there is one significant issue with this: there are no extensive benchmarks, as the number of users (including data centers) relying on SSDs is significantly below the number of users of purely HDD-based products. As such, even when SSDs fail (which is more likely than not due to external factors, such as spilling coffee on the device or overcooking the internals due to bad airflow), the failures are reported against a much smaller base.
Uhm. A lot of high quality MicroSD cards do wear leveling these days. They’re far from the time bomb they used to be. And even older high quality cards could last a very long time if you treated them well (good power supply, keeping db writes in check). What some people present as facts about SD cards is just plain fear mongering in many cases.
In short - if you use a Raspberry Pi with a high quality SD card, together with a high quality PSU and a sane recorder config, you’ll be perfectly fine for years to come. And of course, in the event things go wrong, always make backups.
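As a rough idea of what a sane recorder config can mean in practice (the values below are illustrative, not official recommendations):

```yaml
# configuration.yaml - example of a restrained recorder setup:
# shorter retention, batched commits, and the chattiest entities excluded
recorder:
  purge_keep_days: 7    # keep a week of history instead of the default 10 days
  commit_interval: 30   # flush to disk in 30-second batches instead of the much shorter default
  exclude:
    domains:
      - automation
      - update
    entity_globs:
      - sensor.*_linkquality
```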
Your topic title is misleading, let me correct it for you.
"I am having issues with SSDs and SD cards failing, any ideas why?"
I’ve had an SD card install running on a Pi 3 at my parents’ house for likely over 2 years now, not an issue. It used to run Hassbian and now runs Supervised. Same SD card.
I have SSD installs (cheap WD drives) currently running VMs (previously running Ubuntu + Supervised on the same SSDs) in my home and business for probably 18 months to 2 years, never an issue.
If your topic title were true, the forums would be flooded with people saying the same thing about dying SSDs - and they are not. SD cards will fail if the recorder is not well set up; that’s well known.
Why??? Why do you care about storing all these states in the database? How long is your retention time? I exclude pretty much everything and use 2 days…
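For what it’s worth, besides a short purge_keep_days there is also the recorder.purge service, which can trim and repack the database on demand; a sketch of such a call (2 days matches the retention mentioned above):

```yaml
# Developer Tools -> Services, or an action in an automation:
# deletes history older than 2 days and repacks the database file
service: recorder.purge
data:
  keep_days: 2
  repack: true
```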
@Valentino_Stillhardt’s experience is legitimate; his SSD died. But it doesn’t translate, without more evidence, into a general tendency for HA to fry SSDs.
We all know that SD cards are prone to failure. And there are some precautions to take:
Honestly, this is something I CONSTANTLY remind people about: sudo shutdown -h now BEFORE pulling out the power (unless the system is hosed up, obviously).
Over the years, I’ve killed at least two SD cards before moving to a NUC + SSD setup.
That SSD gave up after about 3 years (although I don’t know how old the drive really was).
What I’ve done since (after obviously replacing the drive) is move the database to RAM.
I’ve added another 4 GB of RAM and reduced the recorder retention to something like 2 days… which adds up to a database size of roughly 3 GB… yes, it’s gone after a reboot, but that only happens once a year or so…
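A sketch of the database-in-RAM part, assuming /tmp is a RAM-backed tmpfs mount on the host (as described above, the history is lost on every reboot):

```yaml
# configuration.yaml - SQLite database on a RAM-backed filesystem;
# assumes /tmp is tmpfs on this system, so the history vanishes on reboot
recorder:
  db_url: sqlite:////tmp/home-assistant_v2.db
  purge_keep_days: 2   # short retention keeps the in-RAM database small
```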
Well, I suppose it’s good practice to limit things anyhow.
So, a bit more pragmatically: what/where is some good documentation on settings for limiting unnecessary retention/wear/“insert synonyms in the vicinity of said subject”?