How can I fix my install?

I’m trying to install HAOS on TrueNAS SCALE, but I keep running into this error.

I created a ZVOL of 35 GB. Can someone please guide me on how to fix this? Why is it already out of space?

I’m following this guide: Installing HAOS in a VM on TrueNAS SCALE

@troy could you help me please!!!

Sorry you’re having trouble, David.

Looking at the screenshot of your ZVOL, it says the volume size is 100 GB and you appear to be using only 5 GB or 4% of that space.

In the first screenshot, it looks like your VM may not be able to reach the internet; I see 404 errors saying the image was not found. I’ve seen reports on the TrueNAS forum of people with a similar issue, a VM unable to reach the internet. (Those reports involve Debian and Ubuntu, so it isn’t necessarily specific to HAOS.) Unfortunately, I’m not proficient at troubleshooting this issue since I haven’t encountered it myself.

If you are a member of the TrueNAS forum, you might have better luck and get quicker help posting over there, since there are more TrueNAS experts.

Have you created a bridge for your VM?

It seems to be one reason some people don’t have internet access in their VM.

For example…

Thanks for the reply! I really appreciate it. I was able to ping yahoo.com from the CLI.

Wouldn’t this mean that the internet is working?



I am also getting this…


I ran df -h | less. Why is root at 100%?

I assume you got to that CLI by clicking the display button for your VM in the TrueNAS UI? So, I guess your VM is online and DNS is working if you are able to resolve and ping Yahoo…

I see in your screenshot now… process fix up block from execution, not enough free space.

I’m not sure what we are missing here…

I’m on the road for work again this week, but when I’m done for the day and back at my hotel with a real computer (not on my phone), I’ll try to remote into my Home Assistant and see how my df -h looks.


So I can connect to my HA and reach the SSH add-on, but I’m unsure how to get to the HAOS CLI.

From my understanding, I can disable protection mode and gain access to the Home Assistant container (or other add-on containers), but I don’t know how to get to the actual HAOS CLI - I think that’s where I need to run df -h from.

I’ll keep thinking about this… but might have to wait until I get home (Thursday).

Can’t you just click the display link from within the Virtual Machines page?

I could, if I was at home… I only have access to Home Assistant through Nabu Casa when I’m on the road. I don’t have remote access to my TrueNAS server.


Hi @david1 - I’ve made it home now.

Here is my df -h – Looks like the root is full, just like yours.

Maybe that makes sense because it’s a container.

Welcome to Home Assistant OS.

Use `ha` to access the Home Assistant CLI.
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root               194.9M    194.9M         0 100% /
devtmpfs                  1.9G         0      1.9G   0% /dev
tmpfs                     1.9G         0      1.9G   0% /dev/shm
tmpfs                   784.0M   1020.0K    783.0M   0% /run
tmpfs                   784.0M   1020.0K    783.0M   0% /etc/machine-id
/dev/vda1                31.9M    712.0K     31.2M   2% /mnt/boot
/dev/vda7                84.5M     53.0K     77.7M   0% /mnt/overlay
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/dropbear
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/modprobe.d
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/modules-load.d
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/udev/rules.d
/dev/vda7                84.5M     53.0K     77.7M   0% /root/.docker
/dev/vda7                84.5M     53.0K     77.7M   0% /root/.ssh
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/NetworkManager/system-connections
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/hostname
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/hosts
/dev/vda7                84.5M     53.0K     77.7M   0% /etc/systemd/timesyncd.conf
/dev/vda8                62.3G     17.7G     42.0G  30% /mnt/data
/dev/zram2               15.0M     72.0K     13.8M   1% /tmp
tmpfs                     1.9G    312.0K      1.9G   0% /var
/dev/vda7                84.5M     53.0K     77.7M   0% /var/lib/NetworkManager
/dev/vda7                84.5M     53.0K     77.7M   0% /var/lib/bluetooth
/dev/vda8                62.3G     17.7G     42.0G  30% /var/lib/docker
/dev/vda7                84.5M     53.0K     77.7M   0% /var/lib/systemd
/dev/vda8                62.3G     17.7G     42.0G  30% /var/log/journal
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/4eb0a084db4eb7784da757df3884d1907adb9edc082c0eeca8412d26706fb1c0/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/4eb0a084db4eb7784da757df3884d1907adb9edc082c0eeca8412d26706fb1c0/merged
192.168.10.5:/mnt/rust/haos-backups
                          4.9T    765.0M      4.9T   0% /mnt/data/supervisor/mounts/backup
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/0d5894c8dfda71b7bcba3eaeb5af0844b3448a426f4aef7467ae63e60c373ba7/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/0d5894c8dfda71b7bcba3eaeb5af0844b3448a426f4aef7467ae63e60c373ba7/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/8619fb183bba190af9239065896e23be471118143ee36477b98f1c4b1efaf474/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/8619fb183bba190af9239065896e23be471118143ee36477b98f1c4b1efaf474/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/94a5af6919a798eff1df6b9363b13304ff259d485fc66117e026b5db8004ca07/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/94a5af6919a798eff1df6b9363b13304ff259d485fc66117e026b5db8004ca07/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/e74b01143f313eba55ded5fd7a9deb49d10132e6ff8108e0d3e96966533fb440/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/e74b01143f313eba55ded5fd7a9deb49d10132e6ff8108e0d3e96966533fb440/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/f7c0cfe50f88514cd1cfa79a9c39d547ae2004371b6cf2ad9a82ed0f2442cf8c/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/f7c0cfe50f88514cd1cfa79a9c39d547ae2004371b6cf2ad9a82ed0f2442cf8c/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/7ee8295543b706e957a9d6b8d114ff214401143611382bcfccbaf7691f8ad9ab/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/7ee8295543b706e957a9d6b8d114ff214401143611382bcfccbaf7691f8ad9ab/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/dec7652da87d4980131e6af7ecd5b27a6ac8d10088d6d0c04f76db76c27a4d7f/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/dec7652da87d4980131e6af7ecd5b27a6ac8d10088d6d0c04f76db76c27a4d7f/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/552592385ccd7d5c406c77213a51fa5285f3e847dd44cddaf5eb449d3dc1b3e2/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/552592385ccd7d5c406c77213a51fa5285f3e847dd44cddaf5eb449d3dc1b3e2/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/8e64a54dec16228063dc1c07906da0e16d0dcef5f270629f08efd4716d83bbe4/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/8e64a54dec16228063dc1c07906da0e16d0dcef5f270629f08efd4716d83bbe4/merged
overlay                  62.3G     17.7G     42.0G  30% /mnt/data/docker/overlay2/fd86b9975fcf2ca7408c5b8978a6ac6c9321a8d7c6593b29dbdfab2d00f631dc/merged
overlay                  62.3G     17.7G     42.0G  30% /var/lib/docker/overlay2/fd86b9975fcf2ca7408c5b8978a6ac6c9321a8d7c6593b29dbdfab2d00f631dc/merged

Since we can now compare with my VM, I notice that your ZVOL may not be the correct size. I realize your Zvol Space Management screenshot shows a Volume Size of 100 GiB, but something is not adding up.

Here’s a full screenshot of my ZVOL details - It’s 64 GiB

When I look at my df -h - I think most of the ZVOL space shows up in HAOS under /mnt/data

/dev/vda8                62.3G     17.7G     42.0G  30% /mnt/data

Looking at your df -h - It appears you may only have 1.2 GiB, possibly indicating an undersized ZVOL
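
By the way, a full /dev/root is expected on HAOS as far as I can tell - the root filesystem is read-only, so df always shows it at 100%. The space that actually matters is the /mnt/data partition. Here’s a quick sketch of pulling just that line out of df output (using sample rows from my listing above):

```shell
# Sketch: given df -h output, report only the data partition's usage and
# skip the read-only root, which always reports 100% on HAOS.
df_sample='Filesystem Size Used Available Use% Mounted on
/dev/root 194.9M 194.9M 0 100% /
/dev/vda8 62.3G 17.7G 42.0G 30% /mnt/data'
echo "$df_sample" | awk '$6 == "/mnt/data" { print "data partition at " $5 " used" }'
# prints: data partition at 30% used
```

So when checking for a real space problem, it’s the /mnt/data (and /var/lib/docker) numbers to watch, not root.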


If you’re still stuck at the beginning, maybe consider removing your current ZVOL and starting again. Also, be sure the path is correct when writing the image. (I’ve seen people mistakenly use the /mnt path, which is not correct.) To be clear, the exact command to write the image to the ZVOL in your screenshot would be

sudo qemu-img convert -O raw haos_ova-10.2.qcow2 /dev/zvol/Data1/HAOS
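
To double-check the destination before writing, here’s a trivial sketch of the path distinction (the Data1/HAOS pool/zvol names come from your screenshot - adjust for your system):

```shell
# Sketch: the image must be written to the zvol block device under /dev/zvol,
# never to a dataset mount path under /mnt.
target="/dev/zvol/Data1/HAOS"
case "$target" in
  /dev/zvol/*) echo "ok: zvol block device" ;;
  /mnt/*)      echo "wrong: dataset mount path, not the zvol device" ;;
esac
# prints: ok: zvol block device
```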

Sad face. Perhaps I need to use ESXi or something… this is with a brand-new ZVOL.


What I could do before - now I can’t seem to select the pre-existing image :frowning:

This is very strange, but it has nothing to do with HAOS itself.

A ZVOL should show up as soon as it’s created, even if it’s completely empty.

Can you share a full screenshot of your HAOS ZVOL details?

If you make another ZVOL (you can just create it and leave it empty), does that show up in the list?


We’re getting so close to having this working! Don’t give up just yet

You’re more optimistic than I am!


I suspect you have an IPv6 problem. Can you try again with IPv6 disabled?


I restarted and they both appeared. Giving it another test! :)

Uh, so I did the exact same thing and it seems to be working!


Good Luck !!

:crossed_fingers:

I think it’s gonna work now


thanks man!!
