HASS OS 6.0 VDI => After Upgrade fails every boot: Failed to start grow file system on /mnt/data

Hello there, I tried doing this procedure, but is it normal that I’ve been pressing Enter for literally over 30 minutes now… and it’s still not done…

Never mind, after 40 minutes of holding Enter everything is working fine!

Thanks again!

Press “A” for All next time
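If pressing a key for every prompt gets tedious, fsck can auto-answer instead. A minimal sketch, assuming an ext4 data partition; the device name `/dev/sda8` is an assumption (HAOS typically puts `/mnt/data` on the 8th partition), so check with `lsblk` first:

```shell
# List block devices to find the data partition (name varies per install)
lsblk

# Answer "yes" to every repair prompt instead of pressing a key each time.
# /dev/sda8 is an assumption - verify the device before running this.
e2fsck -fy /dev/sda8
```

`-f` forces a check even if the filesystem looks clean, and `-y` assumes yes for all questions, which is the unattended equivalent of holding Enter for 40 minutes.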

So what does fsck report?

Also, if this is a problem in HAOS, it won’t even be looked at by a dev unless a GitHub issue is posted.

As a last resort I replaced my NVMe drive with a new Samsung M.2 drive. I cloned the NVMe and then ran fsck to be sure. It is showing exactly the same problem with Home Assistant. Interestingly, there is a new error message which relates to the PCI driver for the M.2 bus. I used a fix I saw on the internet:

  1. cp /etc/default/grub ~/Desktop
  2. Edit grub. Add pci=noaer at the end of GRUB_CMDLINE_LINUX_DEFAULT, so the line looks like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
  3. sudo cp ~/Desktop/grub /etc/default/
  4. sudo update-grub
  5. Reboot now
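The edit in step 2 can also be done non-interactively with sed. A sketch on a scratch copy so a typo can’t break the real config (paths and the sample line are illustrative):

```shell
# Create a scratch copy with a sample kernel-command-line setting
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.demo

# Append pci=noaer inside the existing closing quote
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT=".*\)"$/\1 pci=noaer"/' /tmp/grub.demo

cat /tmp/grub.demo
# -> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
```

Once the result looks right on the scratch copy, the same sed line can be applied to /etc/default/grub, followed by `sudo update-grub`.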

This got rid of the PCIe port error. Maybe it is still the underlying issue?

Heaven knows why you copy the file to the desktop before editing it, but whatever :slight_smile:

More importantly, pci=noaer just suppresses error messages, it doesn’t fix what is causing the errors.

Following these steps seems to have resolved it for me. I’m a little nervous about the state of my VM now, but it is back up and running.

Thanks for sharing this info!!! :+1:

The fix mentioned in this thread fixed the issue I was having… however, unfortunately Supervisor won’t start up after I “fixed” all the broken files.

Apologies in advance for my OCD…

Does this procedure leave any orphaned files after we are done pressing Enter several times?

Even though I’m not getting prompted with “FAILED failed to start Grow…” anymore, I can’t help but think the file system is never going to be the same as before.

After doing this procedure (and also upgrading to the latest version of HassOS), will the respective container and filesystem be back to a pristine state?

Coming up on a year later and this is still a valid workaround.

Just found this thread as I’m seeing the exact same issues on the latest install of HA. The fix in the OP worked for me - thank you @Tscherno

Hello everyone. Let me introduce myself: I am Claudio, and I have a similar problem with Home Assistant.

I’ve executed the commands:

  1. journalctl -xb | grep -i error
  2. journalctl -xb | grep -i failed
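As an aside, journalctl can filter by message priority directly, which is usually less noisy than grepping for the words “error” and “failed”. A sketch (the unit name in the second command is just an example):

```shell
# Messages of priority "err" or worse from the current boot
journalctl -b -p err

# The same, restricted to a single unit (unit name is an example)
journalctl -b -p err -u systemd-journald
```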

I get the following screen.
What could it be? I am not very familiar with this.


This worked just fine…

For the ones who say this didn’t work… Follow each step in order. You can’t do one before the other.

Thank goodness this worked! This is so uncool! I expected things like this to happen back in the day when HA was still highly developmental, but these are supposed to be stable builds. To make this big of an error in the OS coding is not cool!!!

HELP. My VirtualBox VM on Win10 says I’m in RESCUE mode. I tried Tscherno’s recommendation with no luck. At umount, it says the target is busy. If I continue with the next line, the VM crashes and reboots to SLOT B RESCUE SLOT. Screwed and confused!!!

The steps don’t work for me. The first one does, but then with step 2, I get the following message: “sh: unmount: not found”

umount, not unmount

Thank you! Amazing how the brain fills in the gaps automatically. Unfortunately, after fixing that typo, I ran into a different error. I eventually had to give up and restore from a backup.

In rescue mode, after entering maintenance and running systemctl stop systemd-journald, it is still not possible to unmount: target is busy.

Any help?

Hi all,
I’m stuck with HA not starting, but I cannot stop systemd-journald as it respawns by itself immediately after the stop.

I tried stopping all the dependencies with this, with no luck:
systemctl stop systemd-journald-audit.socket systemd-journald-dev-log.socket systemd-journald.socket systemd-journald.service

What’s wrong? Any help is appreciated!

Regards and Merry Christmas!

That wasn’t my issue (and I think there’s a service respawning journald now anyway), but for me it was the swapfile preventing me from unmounting.

So if you get the target busy message:
stop journald and the sockets that keep reactivating it, and then run:
swapoff /mnt/data/swapfile

and see if that makes it possible to unmount it.
But like I said, it did not fix my issue, and I haven’t figured it out yet.
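Putting the pieces from this thread together, the busy-target sequence looks roughly like this. A sketch only: the swapfile path comes from the post above, while the device name `/dev/sda8` is an assumption to verify with `lsblk` first:

```shell
# Stop journald plus the sockets that keep re-spawning it
systemctl stop systemd-journald-audit.socket \
    systemd-journald-dev-log.socket \
    systemd-journald.socket systemd-journald.service

# Disable the swapfile that can also hold the mount busy
swapoff /mnt/data/swapfile

# Show any remaining processes still holding the mountpoint open
fuser -vm /mnt/data

# Then unmount and check the filesystem
# (/dev/sda8 is an assumption - verify with lsblk)
umount /mnt/data
e2fsck -fy /dev/sda8
```

If `fuser` still lists processes after the swapoff, whatever it shows is the next thing to stop before umount can succeed.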