HA in a swarm of Pis: how to solve the shared FS?

Hello,
these days I’m testing a Docker Swarm cluster on my Pis. I like the idea of having at least one other Pi ready to run my HA installation in case of any problem with the first one.
The main problem I’m facing is the shared filesystem. I don’t have any external hardware to use as common storage between my cluster nodes, so I’ve been trying to figure out how to solve this.
At the moment the only thing I’ve come up with is using rsync to keep the local filesystems synchronised, but I don’t like that idea much because it would rely on crontab tasks.
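For reference, the crontab idea would be something like this (the hostname and paths are just placeholders for my real setup):

```
# On the standby Pi: pull the HA config from the active node every 5 minutes.
# "pi1" and the paths are examples, not my actual layout.
*/5 * * * * rsync -az --delete pi1:/home/pi/homeassistant/ /home/pi/homeassistant/
```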
Do you know if there is anything that could help me?

Thank You

GlusterFS is easy to set up. Actually, I don’t think HA can run multiple instances in a swarm, since at the very least the automations would collide.
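A minimal two-node sketch, assuming your Pis are reachable as pi1 and pi2 and you keep the bricks under /data/glusterfs (all names are examples):

```
# On both nodes: install GlusterFS and start the daemon
sudo apt install -y glusterfs-server
sudo systemctl enable --now glusterd
sudo mkdir -p /data/glusterfs/ha-config

# On pi1 only: add pi2 to the trusted pool
sudo gluster peer probe pi2

# Create and start a 2-way replicated volume (gluster will warn that
# replica-2 volumes are prone to split-brain; replica 3 or an arbiter is safer)
sudo gluster volume create ha-config replica 2 \
  pi1:/data/glusterfs/ha-config pi2:/data/glusterfs/ha-config
sudo gluster volume start ha-config

# On both nodes: mount the volume where the HA container expects its config
sudo mkdir -p /mnt/ha-config
sudo mount -t glusterfs localhost:/ha-config /mnt/ha-config
```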

I’m just taking my first steps with swarm, so I really don’t know whether it’s possible, but I would be happy to run just a single instance behind a VIP, like in a “classic” cluster framework.
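For the floating address, something like keepalived might do the job; a minimal sketch for the primary node, assuming eth0 and an example VIP:

```
# /etc/keepalived/keepalived.conf on the primary Pi (all values are examples).
# The second Pi would use state BACKUP and a lower priority.
vrrp_instance ha_vip {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.50/24
    }
}
```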

The question is: do you really need shared storage? I run Home Assistant in Kubernetes and download the config from git using an init container. This is quite a reliable solution…
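Roughly like this (the git URL is a placeholder, the rest is the standard init-container pattern):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-assistant
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      volumes:
        - name: config
          emptyDir: {}   # rebuilt from git every time the pod starts
      initContainers:
        - name: fetch-config
          image: alpine/git
          # Clone the versioned HA config into the shared volume
          args: ["clone", "--depth=1", "https://example.com/me/ha-config.git", "/config"]
          volumeMounts:
            - name: config
              mountPath: /config
      containers:
        - name: home-assistant
          image: ghcr.io/home-assistant/home-assistant:stable
          volumeMounts:
            - name: config
              mountPath: /config
```

The trade-off is that anything HA writes at runtime (the recorder database, for example) lives in the emptyDir and is lost on restart, so any state you care about has to go somewhere else.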