Booting into shell after moving image on Unraid VM

Hi everyone,

I am facing an issue after moving my HA image from a parity-protected array disk to my cache disk on Unraid. The move itself was quite easy and was done so that parity checks no longer affect the performance of HA negatively.
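
For context, a move like this basically boils down to something like the following (just a rough sketch; the VM name and paths are placeholders, not my actual ones):

virsh shutdown "HomeAssistant"                                         # stop the VM first (placeholder name)
rsync -avS /mnt/disk1/domains/haos/ha.qcow2 /mnt/cache/domains/haos/   # -S keeps the qcow2 sparse
# then point the vDisk path in the Unraid VM settings at the new location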

Since the move, the VM boots into the UEFI shell instead of Home Assistant. Entering

fs0:

in the shell returns

'fs0:' is not a valid mapping.
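
As far as I understand, that message means the firmware does not see any filesystem on the attached disks at all. In the UEFI shell the mappings can be refreshed and, if a filesystem then shows up, inspected roughly like this (plain shell built-ins, nothing HA-specific):

map -r          # rescan and list the available mappings
fs0:            # switch to the first filesystem, if one is mapped
ls EFI\BOOT     # the boot loader should normally live here on a HAOS image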

To check whether the image itself got damaged, I then performed the following (read in the Unraid forums):

modprobe nbd max_part=8                                      # load the nbd kernel module
qemu-nbd --connect=/dev/nbd0 /path/to/the/image/ha.qcow2     # expose the qcow2 image as /dev/nbd0
mkdir /path/to/the/mount/point/
fdisk /dev/nbd0 -l                                           # list the partitions inside the image
mount /dev/nbd0p1 /path/to/the/mount/point/                  # mount the first partition (EFI System)

and the output is:

Disk /dev/nbd0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A4E33CCF-98F3-43B5-B08D-6F37CFF42D17

Device        Start      End  Sectors  Size Type
/dev/nbd0p1    2048    67583    65536   32M EFI System
/dev/nbd0p2   67584   116735    49152   24M Linux filesystem
/dev/nbd0p3  116736   641023   524288  256M Linux filesystem
/dev/nbd0p4  641024   690175    49152   24M Linux filesystem
/dev/nbd0p5  690176  1214463   524288  256M Linux filesystem
/dev/nbd0p6 1214464  1230847    16384    8M Linux filesystem
/dev/nbd0p7 1230848  1427455   196608   96M Linux filesystem
/dev/nbd0p8 1427456 67108830 65681375 31.3G Linux filesystem
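
While the EFI partition is mounted, it also doesn't hurt to confirm that the boot loader files are actually there (the path below assumes the usual layout of the HAOS boot partition):

ls -l /path/to/the/mount/point/EFI/BOOT/     # should contain the boot loader, e.g. BOOTX64.EFI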

So the image itself appears not to be broken. After checking that, I unmounted and disconnected again using:

umount /path/to/the/mount/point/       # unmount the EFI partition again
qemu-nbd --disconnect /dev/nbd0        # detach the image from /dev/nbd0
rmmod nbd                              # unload the nbd module
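
Since the image and its partitions look fine, the next thing I want to rule out is that the VM definition simply no longer points at the image on the cache. Something like this should show which file the vDisk is mapped to (the VM name is again just a placeholder for whatever it is called in libvirt):

virsh list --all                       # find the exact VM name
virsh domblklist "HomeAssistant"       # show which image file the vDisk points at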

Do you guys have any idea what to try without losing all the data/config? The Z-Wave setup in particular took soooo much time. :frowning:

Thanks a lot in advance guys!
