Did home-assistant kill my SSD?

Hi everyone!

I have been a user of Home Assistant for almost 2 years now.
This summer, after Hass.io had been running on my microSD card for something like 6 months, the card suddenly died…
I knew it wasn't safe to run it on a microSD card, but I wasn't expecting it to ruin the card in 6 months… Anyway, I have been using a PNY SSD since July.
What happened this morning? Home Assistant was off when I woke up. I wondered why, and figured out the Raspberry Pi couldn't boot from the SSD anymore…
I tried plugging the SSD into my laptop to reset it and back up my Hass.io setup, but the laptop isn't able to recognize it…
Since July, Home Assistant OS has killed one microSD card (not a great-quality one, but still, 6 months…) and a PNY SSD that was 2 years old and lasted only from July to January…

On my desktop computer I have an old Samsung 840 PRO, which is now 11 years old, and I haven't had any trouble with it… It looks like Hass.io did more I/O in 6 months than my gaming desktop did in 11 years…

Is anyone else encountering this kind of trouble?
I am now scared of buying a new one, especially if it is only going to last a few months…


And what does the SMART data say about the SSD?

We need far more specs on this to be able to give you feedback and not wild guesses:

  • Size
  • TBW written so far
  • Rated (designed) TBW
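As a sketch of how to read those numbers: `smartctl -A /dev/sda` (from smartmontools) usually reports a raw attribute such as `Total_LBAs_Written`, and multiplying it by the sector size gives the terabytes written so far. The attribute name and the 512-byte sector size are assumptions; some vendors report in other units, so check your drive's documentation.

```python
# Hedged sketch: convert a raw SMART Total_LBAs_Written value into TB written.
# Assumes 512-byte logical sectors (vendor-dependent; some drives report
# in 32 MiB units instead).
SECTOR_SIZE = 512  # bytes, assumed

def tb_written(total_lbas_written: int, sector_size: int = SECTOR_SIZE) -> float:
    """Terabytes written, derived from the raw SMART LBA counter."""
    return total_lbas_written * sector_size / 1e12

# A drive reporting 195,000,000,000 LBAs has written roughly 99.8 TB:
print(round(tb_written(195_000_000_000), 1))
```

Comparing that figure against the rated TBW tells you how much endurance is left.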

Somewhat expected, as HA is database driven. Take other considerations into account too, like write amplification, which can degrade flash storage very rapidly :put_litter_in_its_place:

As a rule of thumb when buying flash storage: don't buy no-name/white-label/low-end brands; stick to one of the last 4(?) companies that are also flash manufacturers :warning:

As a note: HAOS had quite some flash-killing defaults from the spinning-hard-drive era until not long ago; it should be a little bit more flash friendly by now :hugs:

Yep… unfortunately a known Home Assistant evil. And unfortunately the focus is more on adding features (not that Matter does not matter), while the foundation is shaky to put it mildly.

In your next attempt you could try to reduce the wear on your SSD by implementing some of these configuration changes mentioned here. But as you are running HASSIO your options are limited and you’re basically screwed.
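One concrete knob, as a sketch assuming the standard recorder integration, is raising the recorder's commit interval and trimming what gets recorded in `configuration.yaml` (values here are illustrative, not recommendations):

```yaml
# configuration.yaml — illustrative values, tune to taste
recorder:
  commit_interval: 30    # batch DB writes every 30 s instead of the default few seconds
  purge_keep_days: 7     # keep less history around
  exclude:
    domains:
      - media_player     # example of a chatty domain you may not need in history
```

Fewer, larger commits mean fewer flash erase cycles for the same data.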

James Chambers has a write-up with references to some housings and SSDs, should that be of interest.

Another option that may help is to move away from a RPi altogether and host your implementation on a “thin client”, as described by Andreas Spiess, although I’m not sure how that will help.

Here is a data-point from my experience…
I installed HA (Python venv on Ubuntu) on a dedicated NUC in March 2018. It has a SanDisk SSD PLUS (model SDSSDA-240G-G26, NAND flash). In mid-2022 I got my first filesystem error (on /dev/sda2), which I fixed using fsck. It didn't happen again for several months, then it became more frequent (every few weeks, then every few days). In December 2022, I moved over to a new system.

So, I got around 4.5+ years out of mine (again, this was HA in a Python venv instead of full-blown HAOS).

The most important question: What was your commit interval on the filesystem?

Armbian, for example, uses:

  • commit=600 to flush data to the disk every 10 minutes (/etc/fstab)

which allowed me to run SD cards in SBCs for over 5 years with no failures :raised_hands:
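For reference, that tweak is just a mount option in `/etc/fstab`; this is an illustrative line, and the device and mount point will differ on your system:

```
# /etc/fstab — example root entry with a 10-minute commit interval
/dev/mmcblk0p1  /  ext4  defaults,noatime,commit=600  0  1
```

`noatime` is a common companion option that also cuts metadata writes.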

Thanks all for your replies!
It looks like this is a real issue and I am not the only one.

I had an mSATA drive available, so I restored HA onto it, but since then I have had trouble with Z-Wave…

This is the only integration with trouble; all the rest is OK and working.
I also have another weird frontend issue: modules seem to be deactivated on the page, but if I check the logs, they seem alive…

I found a similar issue on the forum, but it is a year old and about version 7…

Electronics can also fail for no other reason than a weak component. It can be a tiny $0.0001 chip capacitor that failed and shorted. It is not always wear that kills an SSD. Likewise, most LED bulbs claim 50,000 hours but often die after 5,000 because a capacitor that runs warm wears out first.

The general thing with SSDs is that wear levelling only works well if there is plenty of free space so data can be moved around. So even if HA only needs 16 GB, it is a good idea to get an SSD of 128 GB or even bigger. Then the log files and database files can move around and not wear each memory cell as often.


Mine also died today after 4 years. Everything was working smoothly until the upgrade to 2023.1 on Thursday.
Friday morning the system was down and I had to power-cycle it (this had never happened before). On Saturday the SD card was dead.

I installed on a new SD card, and every 20 minutes or so it gets stuck and I have to power-cycle. Not sure what to do…


Thanks to this thread, Addons not connected after migration to SSD, I fixed my add-on trouble!
A simple hard reboot of the RPi…

Tomorrow, I will check the links sent by @orange-assistant :crossed_fingers:

Also, are you sure your laptop is able to read the filesystem(s) you are expecting on the SSD? My guess is that Windows is still not able to use (or even just read) ext4 out of the box in 2023 :package: :thinking:

From what I can tell, it's the default 5 seconds on Ubuntu for ext4 filesystems.
Being curious, I did a fair bit of looking into this, and I have to admit it is not clear to me how this helps (not saying it doesn't; I just haven't figured out how). From what I read (at least for ext3/4), file data and its metadata still go to the flash drive: some or all of it goes either to the journal, which also lives on the same flash drive outside the filesystem, or directly to the filesystem. The commit time is just one of the triggers that tells the journal to finish writing the file's information to the filesystem itself.

From my experience, PNY is a rubbish brand, so no surprise the SSD died.

I run HA on an SSD datastore in ESXi, but with MariaDB configured on an SSD volume on a Synology NAS. As a result I have no issues with the SSD in ESXi, but after only 11 months I got the first notifications from the NAS that the lifespan of the SSD is reaching its limit (currently at 3% only). After checking some specifications (I use a Crucial SSD), I found out that these drives have a rating of only 300 TB TBW. Given that HA writes to MariaDB at up to 6 MB/s (which is ~0.5 TB/day), I should expect a lifespan of ~20 months. As this SSD is also used for some other apps on the NAS, this seems a reasonable explanation for such significant wear…
The lesson is to move the DB to spinning drives instead, or to use an SSD with a reasonably high TBW (like a Seagate IronWolf with a TBW of 1400 TB).
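The arithmetic above can be checked with a quick sketch, using the numbers from the post (a 300 TB TBW rating and ~6 MB/s sustained writes):

```python
# Rough SSD lifespan estimate from a TBW rating and a sustained write rate.
TBW_TB = 300           # rated endurance, terabytes written
WRITE_RATE_MB_S = 6    # observed sustained write rate, MB/s

tb_per_day = WRITE_RATE_MB_S * 86_400 / 1e6    # MB/day -> TB/day
lifespan_months = TBW_TB / tb_per_day / 30.4   # ~30.4 days per month

print(f"{tb_per_day:.2f} TB/day, ~{lifespan_months:.0f} months")
```

This gives ~0.52 TB/day and roughly 19–20 months, matching the estimate in the post.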
