HA corrupted another SD card

Hi all,

I’m a bit frustrated that Home Assistant has already corrupted a second SD card (stuck in read-only mode, no erasing possible) within less than half a year! It’s all the more annoying considering the card is not a cheap one, but a Samsung EVO Plus 32 GB.
To be honest, I don’t want to buy a new one and replace it again after another 2-3 months. Is there anything I can do to prevent HA from destroying my SD cards?

Thanks!

Don’t use SD cards (there are countless other options) or move your database to an external server.

1 Like

Thanks for your reply, @tom_l

I’m not very familiar with the rPi and I just followed the installation instructions from the main page:

It was recommended to use an SD card, so that’s what I did. But using a USB drive would be no problem for me either, if that’s possible. Is the installation process on the rPi the same as for SD cards? I’ll definitely give it a try if it lasts any longer.

Where can I read more about moving the database to an external server?

Search the forum for running from a USB drive. You can also run HA on an old laptop with an SSD or HDD.

The recorder section of the docs has some information on using an external database.
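For example, pointing the recorder at a MariaDB server is a single option in configuration.yaml. A minimal sketch: the host, user, password and database name below are placeholders for your own setup.

recorder:
  # store history on an external MariaDB server instead of the
  # SQLite file on the SD card (connection values are examples)
  db_url: mysql://hauser:hapassword@192.168.1.10/homeassistant?charset=utf8mb4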

2 Likes

The alternative install method has, of course, been deprecated, so your options are limited to getting HW with eMMC or using a VM.

1 Like

You can install in a Python virtualenv on a standard Linux installation. I’m running on a Banana Pi M1 with the core Armbian OS on an SD card but with /home (including all the HA files) mounted on a 1TB laptop HDD connected via SATA — this method will work on pretty much any system, though.
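If anyone wants to replicate that layout, here’s a rough sketch. The device name /dev/sda1 and the paths are assumptions for illustration, not a fixed recipe:

# mount the HDD at /home (add an equivalent line to /etc/fstab to make it permanent):
#   /dev/sda1  /home  ext4  defaults,noatime  0  2
sudo mount /dev/sda1 /home

# install HA core in a virtualenv under /home, so all the heavy writes land on the HDD
python3 -m venv /home/homeassistant/venv
source /home/homeassistant/venv/bin/activate
pip install homeassistant
hass --config /home/homeassistant/.homeassistant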

Not sure how HA can be responsible for corrupting an SD card mounted in read-only mode though…

1 Like

Thanks for all the replies!

@tom_l
I’ll try to get it to boot from USB; not sure if it’s that easy on a rPi4. I quickly searched for this method and noticed it was only possible on older Pis, but I have no idea whether there has been any progress in the meantime.
Finally, just to make sure: would a USB flash drive avoid getting corrupted as quickly as my SD cards did, or do I really need at least an HDD/SSD?

@baz123
Sounds complicated, but thanks anyway.

@Troon
At first I tried it with Python, but it was a bit too complicated for me, so I decided to run HA on a Pi because of the easier overall maintenance. Do you maybe have instructions on how to run HA on a rPi but with the /home directory on external storage? Maybe even via SMB or NFS (I have a NAS at home)?

HA corrupted my SD card: it was fine before, but now it’s stuck in read-only mode, which I cannot undo; I can’t even erase the card.

You can also use a new HassOS feature: offloading the data partition to another disk.
You flash HassOS 4.6 beta to an SD card, connect an external USB SSD, boot your RPi and wait for HA to install. Then, using the CLI, you can move the data partition. Doc is here:
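If the doc matches the current CLI, the move should be roughly this from the HA command line; /dev/sda is an assumption, so check the doc for the exact device name and syntax on your version:

# list candidate disks, then move the data partition to the SSD
ha os datadisk list
ha os datadisk move /dev/sda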

I have not tested this but I will need to soon, as eMMC boot for my board has been deprecated…

2 Likes

@Xlaink make sure you buy a high quality SD card and that you have a proper, stable power supply on your Pi.

1 Like

Neither a low nor a high quality SD card will solve the inherent flaws in SD cards that cause this to happen. The only solution is not to use one.

4 Likes

Sure, there are alternatives that are arguably better, but I have several Pis, all with SD cards only. One of these is a model 2 that’s been running a Plex server for many years. It does the transcoding on the SD. I’ve never had an SD fail on any of my Pis. Perhaps I’m just lucky then. Also, backups: any hardware can fail.

1 Like

I used an SSD with my rPi3.

I followed this video

5 Likes

I had the same problem until I added the following to my configuration.yaml. I believe the problem was fixed by keeping the recorder database in memory rather than on the SD card, which otherwise gets pounded constantly. The downside is that the history gets wiped upon reboot.

logger:
  default: critical  # only log critical messages, cutting routine disk writes

recorder:
  purge_keep_days: 5
  purge_interval: 5
  db_url: 'sqlite:///:memory:'  # keep the recorder database in RAM, not on the SD card
3 Likes

I think there are a couple of things that you can do. I have some SD cards that have been in use for several years without problems.

You can currently get a “Kingston SDCE/64GB High Endurance microSD” for 14€ on Amazon.

Now, one important trick is to underprovision the card. You’ll never use 64 GB (normally) just from running HA, so partition it to use 16 GB only. That ensures 3/4 of the card stays available for the card controller to do wear levelling, which greatly extends the lifespan. If you want to go further, buy a larger card (128 GB is only 27€) and provision only 16 GB of it; there’s no way you’ll wear out that card with normal use. A rough sketch of the partitioning is below.
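Something like this, done from another Linux machine before first use. The device name /dev/sda is an assumption (check with lsblk first), and this wipes the card:

lsblk                                              # identify the card, e.g. /dev/sda
sudo parted --script /dev/sda mklabel msdos        # WARNING: destroys all data on it
sudo parted --script /dev/sda mkpart primary ext4 1MiB 16GiB
sudo mkfs.ext4 /dev/sda1                           # the rest of the card stays unallocated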

Secondly, you should absolutely disable swapping. If swap is needed, then there’s something wrong with the setup. So just comment out the swap entry in /etc/fstab and reboot; a sketch follows below.
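For example (note that on Raspberry Pi OS the swap file is managed by dphys-swapfile rather than fstab, so that case is handled by the separate line below; the default setup is an assumption):

swapon --show                                 # list any active swap
sudo nano /etc/fstab                          # put a '#' in front of any line with 'swap' in it
sudo systemctl disable --now dphys-swapfile   # Raspberry Pi OS only
sudo reboot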

As I said, I have several microSD cards and I’ve only ever had one or two go bad over the last 10 years or so. But of course they can always fail, so backups are important, just as with hard disks and SSDs.

3 Likes

Thanks, I’ll look into it.

I have a proper and stable power supply, not just a USB cable. Is the mentioned Samsung EVO Plus 32 GB not a high quality SD card?
I have the same one in my dashcam (only the 128 GB version) and it has already written multiple terabytes of data (in 3-minute clips of about 1 GB each) without any issues.

That’s a good point, thanks! What I can’t understand is: if it’s known that SD cards are this unreliable for HA, why not just add a warning to the instructions? Maybe even add notes helping users disable logging and the like, to extend the SD card’s life.

@seidler
Thanks a lot! I’ll look into it when making my next try.

1 Like

Another thing: to check whether it’s really many writes that “kill” the SD card, you can log in to the RPi after it’s been running for a while and type iostat -m. That will give you the MB written since the last boot in the MB_wrtn column of the /dev/mmcblk0 line. So if you check after a day of uptime and it’s higher than, for example, 10000 (10GB), then there’s a problem that some process writes too much to the SD card. If it’s low, then that is not the problem.
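On Raspbian, iostat comes from the sysstat package, so a quick check looks like this (the device name mmcblk0 is the usual SD card on a Pi, but verify it on your system):

sudo apt install sysstat     # provides iostat
iostat -m mmcblk0            # MB_wrtn column = megabytes written since boot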

4 Likes

Or turn off history and logging.
Generally these are used to produce pretty graphs that people don’t actually use.
You can enable history / logging for specific items (see the sketch below), but anything that cuts down read/write cycles will stand you in good stead.
Note that although SSDs have wear levelling, they suffer from read/write failures too. The difference being: a) they have more space to wear-level across, b) their write endurance numbers are a LOT higher.
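For the per-item approach, a minimal sketch in configuration.yaml; the entity names are examples, not a recommendation:

recorder:
  purge_keep_days: 3
  include:
    entities:
      - sensor.living_room_temperature   # record only what you actually chart
      - binary_sensor.front_door
logger:
  default: warning                       # cut routine log writes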

@Xlaink
So HA did not corrupt your card; you did.

Edit: I now run with an SSD, but my old SD installation ran for 2 years with no problems; I now use that SD card in a music player, too.

2 Likes

@seidler
Thanks, I’ll test it out later; right now I cannot boot from the read-only card.

@Mutt
Yes, technically I corrupted the SD card, but if it’s known that using SD cards can cause this kind of trouble, it would be worth mentioning that in the manual (IMHO), along with some tips on how to prevent it (like disabling logging or history). I wasn’t even aware of the consequences.

2 Likes

I sympathise, and understand where you are coming from.
And there is a proposal to change the default installation to turn off such logging to maximise the life of cards (as this seems to be a recurring theme with newbies). Not sure whether this will happen soon or ever.
You could make a necklace from all the dead SD cards :rofl:
But also, what manual?
To me the manual is this forum, and how many people read the manual anyway?
It’s only when you have an issue that you resort to RTFM, and then you will find there is a rash of threads covering this topic.
It’s a difficult problem to address, but even you raising it proves there is no single fix.
How would you have warned a newbie? (genuine question)

Edit: now, having had a ‘bloody nose’, I assume you will take snapshots after every major config change and store them off the HA instance. We all seem to have to learn the hard way :man_shrugging:

1 Like

Indeed, @Xlaink, it was only well after I posted that I saw that you are using a high quality SD. I can’t really think of other reasons why you would experience such a high failure rate. I don’t know what low-level tools exist for this purpose, but perhaps try to measure the write activity against the spec of your SD. The cells of any type of solid state tech can only be written to a finite number of times.

I think @seidler is making some good suggestions. Related to the partitioning trick: even if you do partition all of the available space, you’ll run into the same issue if the card is constantly near full.

As for my own setup, I also have a NAS where I would typically store larger volumes of data.

In the same vein as what’s been said, here’s something interesting to read.