I was wondering if this would be possible.
My understanding of Swarm is that it links several servers together using Docker, and the application runs within that cluster.
So yes, you can run HA in Swarm, but for what purpose?
HA's resource needs are very low, so there's no real benefit.
This would increase availability, so that if one of the nodes were to fail, hass would still be online.
Yeah… I had to refresh my memory on Swarm.
But yes, I believe any app can be run in Swarm. This is native to Docker and application-independent.
So if I were to run hass on a 2/2 swarm (2 managers and 2 workers), theoretically everything should work fine?
Yes. Should work well.
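For reference, standing up that 2/2 layout is roughly this (the IP is just a placeholder, adapt it to your network):

```
# On the first manager:
docker swarm init --advertise-addr 192.168.1.10

# Print the join command for a second manager, then run it on that machine:
docker swarm join-token manager

# Print the join command for workers, then run it on each of the two workers:
docker swarm join-token worker

# Back on a manager, check that all four nodes show up:
docker node ls
```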
Also a very good idea. I had forgotten about this and never considered it.
Thanks! I’m going to see if I can find any used mini-pcs online.
What will happen if HA tries to update a switch? Will both instances send a command, or does docker allow only one to send output?
Actually, I didn’t think about that. I need to do some more research…
The application is deployed to the swarm, and the swarm manages its state. You don't technically run multiple HA instances; you have a single instance whose desired state is maintained. If it fails, it is killed and a new working instance is built. If the config is changed, a container with the new state is started while the old one is stopped and destroyed.
Also, I think an odd number of managers is recommended for some reason. So 3/2 is suggested.
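(The odd-number advice is about quorum: the managers have to agree by majority, so with 2 managers you can't lose either one, while 3 can lose 1 and keep going.)

To make the "desired state" part concrete, here is a minimal stack file sketch. The paths are placeholders, and the config directory would have to sit on storage every node can reach (NFS or similar), or a failover comes up without your configuration:

```
cat > hass-stack.yml <<'EOF'
version: "3.8"
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    ports:
      - "8123:8123"
    volumes:
      - /mnt/shared/hass-config:/config    # shared storage, reachable from every node
    deploy:
      mode: replicated
      replicas: 1                          # exactly one HA instance at a time
      restart_policy:
        condition: any                     # recreate it (possibly on another node) if it dies
EOF

# Hand the desired state to the managers:
docker stack deploy -c hass-stack.yml hass
```

That single replica is also why the earlier question about duplicate switch commands isn't a problem: there is never a second copy running and sending its own commands.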
Oh, ok. I understand now.
If you try it out, let us know the result.
I have heard of other HA users using Portainer. It can manage swarm deployments, but I'm not sure if they use it for this purpose or just for container management.
So how is this different from a Docker container with the restart: always parameter?
Swarm assumes multiple host servers and creates a type of redundancy between those hosts.
restart: always just restarts the same container on the same host.
Swarm will bring the container up on another host if a single host server fails or its networking goes down. You can also make port, container config, or storage changes and roll them out with almost no downtime, since Swarm creates a new container with the changes and brings it up as the old one is taken down. You can have hosts at multiple physical locations, maybe a cloud host alongside a local one as well, I would assume.
There are some details and applications I am likely missing, but the general idea is there, I believe.
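Side by side, the difference looks something like this (image and ports just as an example, not tested here):

```
# Plain Docker: the daemon restarts this container, but only ever on this one host.
docker run -d --restart always -p 8123:8123 \
  ghcr.io/home-assistant/home-assistant:stable

# Swarm: the managers keep one replica running somewhere in the cluster and
# reschedule it onto another node if the current host dies.
docker service create --name homeassistant --replicas 1 \
  --publish published=8123,target=8123 \
  ghcr.io/home-assistant/home-assistant:stable

# Port/config changes roll out by updating the service; swarm replaces the task
# for you (with --update-order start-first it brings the new one up before
# stopping the old, which is where the near-zero downtime comes from).
docker service update --publish-add published=8124,target=8124 homeassistant
```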
I can’t seem to find a way to pass USB devices to the swarm. Is this possible?
How do you propose this would work?
Do you have the identical USB device on every Docker host in the swarm? It should just be a /dev/ttyACM0 or similar in a bind mount.
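Something like this is what I have in mind, assuming the stick really does show up at the same path on every node (paths are placeholders and I haven't tested this end to end):

```
docker service create \
  --name homeassistant \
  --publish published=8123,target=8123 \
  --mount type=bind,source=/dev/ttyACM0,target=/dev/ttyACM0 \
  ghcr.io/home-assistant/home-assistant:stable

# Caveats: `docker stack deploy` ignores the compose `devices:` key, and as far
# as I know `docker service create` has no --device or --privileged options, so
# a plain bind mount can still be blocked by the container's device cgroup.
```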
So it isn’t possible without the same device being attached to every node?
Do you understand how a swarm service works? If host A has the hardware device and goes down, how do you propose the device physically moves to host B?
This is just simple logic, not really a fault with the system.
I do understand how a swarm works. My thinking was that it might be possible to share a peripheral off another non-swarm machine.
That’s an interesting concept, and one that seems to still miss the point of a redundant system. It is a single point of failure, so I’m not sure what good a swarm would do in this instance.
I don’t know of any way you can share out a /dev/ device directly.