"Symbolic link loop" error with backup folder

Probably a long shot, but I’m wondering if anybody has experienced something similar…

I have two VMs: one running HASSOS, and the other running Node Red in a Docker container (i.e. not the addon). The Samba Share addon runs under HA, sharing “backup”, “config”, and “share”, and the Google Drive Backup addon runs a backup every day at 6am.

I mount these three shares on the second VM using autofs, then make them available to the Node Red container, for example mapping /mnt/remote/ha/backup to /ha/backup (sketched below). I have Node Red write some data to “share” at 5am every day, and read “backup” at 8am every day to analyse the content of the backup files.
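For reference, the relevant pieces look roughly like this. The server name and credentials file are placeholders, and my actual files differ in detail, so treat this as a sketch:

```
# /etc/auto.master.d/ha.autofs — indirect map putting the shares under /mnt/remote/ha
/mnt/remote/ha  /etc/auto.ha  --timeout=300

# /etc/auto.ha — one CIFS mount per share exposed by the Samba Share addon
backup  -fstype=cifs,credentials=/etc/auto.ha.cred  ://homeassistant.local/backup
config  -fstype=cifs,credentials=/etc/auto.ha.cred  ://homeassistant.local/config
share   -fstype=cifs,credentials=/etc/auto.ha.cred  ://homeassistant.local/share
```

And the container is started with the autofs mount points bind-mounted in, something like:

```
docker run -d --name nodered \
  -v /mnt/remote/ha/backup:/ha/backup \
  -v /mnt/remote/ha/config:/ha/config \
  -v /mnt/remote/ha/share:/ha/share \
  nodered/node-red
```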

After a while (weeks?), the read of “backup” starts failing with a “symbolic link loop” error (presumably ELOOP), even though it contains no symlinks (although possibly the container mount or CIFS mount behaves like one). When this happens, connecting to the Node Red container via Portainer and running ls /ha/backup gives the same error, but “config” is fine. However, ls /mnt/remote/ha/backup works fine on the VM itself, so the problem is confined to the container. If I restart the container, it’s fine again for a while. I probably should have tried stat /ha/backup/* to see what I’d get, but I’ve already restarted.
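Next time it happens, before restarting, I’ll poke at it from inside the container, something like:

```
# From a shell inside the Node Red container (e.g. via the Portainer console):
ls /ha/backup                         # currently fails with the symlink-loop error
stat /ha/backup                       # does the mount point itself stat, or error too?
grep ha/backup /proc/self/mountinfo   # is the bind mount still backed by the CIFS mount?
```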

I’m tired of restarting the container. I only started monitoring the backup size because one of the addons went crazy and filled up my Google Drive, and I’m not even using that any more. Any thoughts before I just abandon the idea?

I suspect this happens when the whole host restarts. In that case, both VMs are auto-started, and HASSOS takes much longer to come up, so when the second VM starts the Node Red container, the share doesn’t exist yet. When the share later comes online, autofs seems to tidy things up at the VM level, but not inside the container, presumably because the bind mount was created before the CIFS mount appeared and, with Docker’s default private mount propagation, the container never sees the new mount. So arguably I could test for the symbolic link loop in Node Red and have it restart its own container. Or just get over it.
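Actually, simpler than having Node Red restart its own container: a cron job on the VM could probe the path inside the container and restart it on failure. A minimal sketch, assuming the container is named nodered (note it restarts on any failure of the probe, not just ELOOP; you could grep the error text to be stricter):

```
#!/bin/sh
# check-ha-backup.sh — run from cron on the VM, e.g. at 7:30am, before the 8am read.
# If listing /ha/backup inside the container fails, restart the container.
if ! docker exec nodered ls /ha/backup >/dev/null 2>&1; then
  echo "$(date): /ha/backup unreadable in container, restarting nodered"
  docker restart nodered
fi
```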

So, fingers crossed, I haven’t had any crashes for the last couple of days. I upgraded VirtualBox to 6.1.40, which may have fixed it, although probably just rebooting the host did enough.