I’m juggling a number of new topics here, and I’m quite a bit behind the curve.
Kubernetes certainly seems to be where things are going, judging by the number of tools & other resources that have native support for it baked in.
I’ve been steeped in the POSIX space for quite some time (Linux, FreeBSD), doing things ‘the hard way’ - building by hand so that I’m fully cognisant of all the moving parts - but momentum seems to be shifting to cloud & hybrid-cloud practices. (thinking out loud here, general terms)
I exclusively use POSIX on the server-side, but a mix of Linux & W64 on the desktop; spending most of my days in browsers & terminals, so choice of desktop interface is pretty arbitrary.
Windows, VS Code, Azure, OpenShift, OVz, Prox, mk8s all make this an attractive & largely unavoidable prospect, and mostly imply that some of the hardest work will be in terms of my own thinking. (I’m a solo admin, not employed in Enterprise where I get to learn from coworker DevOps)
Other technologies are in the mix too, so I’m trying to incorporate this into my mental models:
- CoreOS (and other immutable & ephemeral platforms)
- Resin
- Vagrant
- Clustering
- [adding others here later]
I know I’m making life a lot harder for myself, but the reasoning is that this is a hands-on pet project I can immerse myself in to build familiarity and, eventually, more holistic expertise.
Something I’m still having trouble squaring is best described as the master/slave or server/client (gawd, I hate that term) model, where there’s a central authoritative “Single Source of Truth” (SSoT), versus a more meshed, distributed, fault-tolerant model (showing my own ignorance here).
My understanding is that Core is a subset of Supervisor (?), with Supervisor managing the provisioning of apps & stacks on Core in containers, so in theory I should be able to use a single “master” Supervisor to manage Cores across Areas & Zones.
If either Supervisor or Core becomes unavailable to the other, functionality should carry on until connectivity is restored & sync resumes.
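To make my own thinking concrete on that disconnect-and-resync behaviour, here’s a minimal store-and-forward sketch. All names here are made up for illustration (this is not any real Supervisor/Core API) - the point is just that the endpoint keeps working offline and flushes a backlog once the upstream is reachable again:

```python
import queue

class EdgeNode:
    """Hypothetical Core-style endpoint: keeps functioning while the
    central Supervisor is unreachable, then syncs on reconnect."""

    def __init__(self):
        # Events waiting to reach the Supervisor survive the outage here.
        self.outbox = queue.Queue()

    def record_event(self, event):
        # Local functionality carries on regardless of connectivity:
        # every event is queued until it can be delivered upstream.
        self.outbox.put(event)

    def sync(self, send):
        # Drain the backlog once connectivity is restored; `send` is
        # whatever transport talks to the Supervisor (HTTP, MQTT, ...).
        delivered = []
        while not self.outbox.empty():
            event = self.outbox.get()
            send(event)
            delivered.append(event)
        return delivered

node = EdgeNode()
node.record_event({"sensor": "temp", "value": 21.5})
node.record_event({"sensor": "temp", "value": 21.7})

sent = []
node.sync(sent.append)   # connectivity restored; backlog flushes
print(len(sent))         # prints 2
```

The real question underneath is who wins on conflicting state after a partition - which is really the SSoT-vs-mesh question again.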
I’m splitting up my setup across distinct isolated networks, for security & performance reasons, e.g. having some of the heavy lifting take place in the cloud - private on public - where I lack local resources.
- What would be some of my architectural considerations?
- SQLite on my RPi Core endpoints seems appropriate, but MySQL/MariaDB/PostgreSQL for the Supervisor seems advisable (replicating data to an off-site backup)?
- Supervisor on public cloud if expanding beyond my home lab, rather than punching holes in my network - and which one is authoritative, if any?
- Can this distributed model be handled in HA, or do I need to stand up a hybrid-cloud with local & hosted Kubes?
- How to ensure data integrity - for my Supervisor(s) AND endpoint node Core(s)?
- Where do I have a complete lack of or misapprehension of what’s in play?
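On the SQLite-at-the-edge and data-integrity questions, here’s the kind of pattern I have in mind - a sketch only, with made-up table/column names: each RPi endpoint buffers rows locally in SQLite, tagging every row with a content hash, so the Supervisor side (stand-in here for MySQL/MariaDB/PostgreSQL) can verify integrity after replication:

```python
import hashlib
import json
import sqlite3

# Edge-side buffer: an in-memory SQLite DB stands in for the RPi's
# local database. Every payload is stored with its SHA-256 digest.
edge = sqlite3.connect(":memory:")
edge.execute(
    "CREATE TABLE readings (id INTEGER PRIMARY KEY, payload TEXT, sha256 TEXT)"
)

def record(db, payload):
    # Canonical JSON (sorted keys) so the hash is reproducible anywhere.
    blob = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256(blob.encode()).hexdigest()
    db.execute(
        "INSERT INTO readings (payload, sha256) VALUES (?, ?)", (blob, digest)
    )
    db.commit()

record(edge, {"sensor": "temp", "value": 21.5})

# Supervisor-side check after replication: recompute each row's hash
# and compare against the digest that travelled with it.
for payload, digest in edge.execute("SELECT payload, sha256 FROM readings"):
    assert hashlib.sha256(payload.encode()).hexdigest() == digest
print("integrity check passed")
```

That only covers per-row integrity, not conflict resolution between Supervisors - which loops back to the “which one is authoritative” question above.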
I know this is a big braindump; some of these questions I would have popped into other chat forums, but my local timezone makes such real-time comms difficult/impossible.
I’m sure some of this is also covered in documentation, so if anyone’s going to RTFM @ me, please do so with appropriate links to TFM.
[enough for now…]