How to make Core more attractive - my 2ct

Maybe I have a naive idea of how software updates get published, but I will at least explain how I, as someone who has nothing to do with programming professionally, arrived at these thoughts.

My Home Assistant Core runs on a Raspberry Pi. Since I have an ARM Mac, for Home Assistant updates I start my VMware VM with Debian, update that Home Assistant Core instance and start it once. When everything is done, I make a TAR archive out of it, move it to the Raspberry Pi and start it there. The last parts are scripted.
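Roughly, the scripted part looks like this (paths, hostname and the systemd unit name here are placeholders, not my exact script):

```bash
# Illustrative sketch of the scripted part; paths, hostname and the
# service name are placeholders.
tar -czf hac.tar.gz -C /srv homeassistant          # pack the freshly updated install from the VM
scp hac.tar.gz pi@raspberrypi.local:/tmp/          # move it over to the Raspberry Pi
ssh pi@raspberrypi.local '
  sudo systemctl stop home-assistant@homeassistant &&
  tar -xzf /tmp/hac.tar.gz -C /srv &&
  sudo systemctl start home-assistant@homeassistant
'
```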

The advantage of this procedure: it is very fast, and Home Assistant runs almost without any major maintenance. The housemates say thank you.
On the Pi itself, a start after an update takes very long, depending on the release.

HassOS is recommended everywhere and Core is discouraged, but if Core were released this way, many more people would go for it. A lot of them probably don't need most of HassOS at all and, like me, would rather have their system under their own control.
…without Docker.

The Home Assistant Core installation is 150-200 MB as a compressed TAR archive and can be unpacked on a Pi in under a minute.
Now imagine you could stop HAC, download the update (a compressed installation) from Git in a few seconds, and resume HAC a minute later!
I imagine that would make HAC very attractive to many, and the simplest installation method.
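For illustration, the imagined flow would be something like this (purely hypothetical; no such prebuilt archive is published today, and the URL and paths are only placeholders):

```bash
# Hypothetical sketch of the imagined update flow. No such release
# archive exists today; the URL and install path are placeholders.
sudo systemctl stop home-assistant@homeassistant
curl -L -o hac-update.tar.gz "$RELEASE_ARCHIVE_URL"   # placeholder for the imagined release asset
tar -xzf hac-update.tar.gz -C /srv/homeassistant      # assumed install location
sudo systemctl start home-assistant@homeassistant
```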

As I said, it is perhaps naive to think it could be that easy.
At least I wanted to get it off my chest.

I haven't measured it, but I believe SSH-ing to my HAC, switching to the venv and then executing the upgrade via pip is even faster. :man_shrugging:
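For reference, the usual venv upgrade looks roughly like this (the user and path follow the common Core install guides and may differ on your system):

```bash
# Typical Core upgrade inside the venv. User and path are the ones from
# the common Core install guides; adjust to your setup.
ssh pi@raspberrypi.local
sudo -u homeassistant -H -s
source /srv/homeassistant/bin/activate
pip install --upgrade homeassistant
```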

What exactly is your rationale for avoiding Docker? It removes or simplifies many of the steps you mentioned. For example, when I want to upgrade my Home Assistant container (and its friends, in my case zwavejs2mqtt, zigbee2mqtt and mosquitto), my process looks like this (a one-shot script version follows the list):

  1. `docker compose pull` (this retrieves the new/changed image layers for my containers, without stopping them)
  2. `docker compose up -d` (this restarts only the containers that need updating, and they start using the updated images right away; usually takes around 30 seconds in total)
  3. (optionally) `docker system prune` (to clean up unused images and free up storage)
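The same steps in one go (run from the directory that holds the compose file; the prune is optional):

```bash
# Same steps as above, as a single script. Run from the directory that
# contains the compose file.
docker compose pull     # fetch new/changed image layers, containers keep running
docker compose up -d    # recreate only containers whose image or config changed
docker system prune -f  # optional: remove dangling images, stopped containers, unused networks
```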

This could be automated even further, but I prefer to choose the timing and to be around to make sure my integrations/dashboards still look happy after an update.

The venv contains a lot of platform-specific compiled code that depends on the processor architecture and the dependencies installed on the host OS (libs, glibc version, kernel version, etc.). A lot of Python packages are actually C++ or Rust under the hood with a thin Python compatibility layer on top. The project would need to release an unmanageable number of permutations to cover all cases. Docker simplifies this by providing a controlled dependency environment.
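You can see this for yourself by downloading a binary wheel: the filename encodes the Python ABI and the target platform (aiohttp is just an example package here, and the exact filename will differ):

```bash
# Binary wheels are tagged with the Python ABI and the platform they
# were built for, which is why a venv is not portable across machines.
pip download aiohttp --no-deps -d /tmp/wheels
ls /tmp/wheels
# e.g. aiohttp-3.9.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
```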

I think the power of Core is that I can easily compile it on my own system, with my own dependencies and often with patched HA code. It even provides some precompiled packages as pip wheels, if you want that. Updating Core is just a simple pip command. Releasing tar'ed binaries would negate all that.
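For example, if you want pip to compile everything from source instead of pulling prebuilt wheels, something like this works (assuming the usual venv path; it needs a build toolchain and headers installed, and takes a while on a Pi):

```bash
# Force pip to build all packages from source instead of using wheels.
# Assumes the usual venv location and an installed build toolchain.
source /srv/homeassistant/bin/activate
pip install --upgrade --no-binary :all: homeassistant
```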

Additional overhead that adds nothing useful (at least for the type of users Core targets).

Containers will always add overhead, no matter what.

The term 'significant' is very much dependent on your situation. On an x86 system, the overhead would be insignificant. On a Pi 3 with 1 GB RAM total, it can be very significant.

Subjective. Maintaining Core can be extremely easy and quick, depending on your setup and skill set. That's why I mentioned 'at least for the type of users Core targets'.

Edit: not in the mood to start yet another pointless internet argument.

The overhead costs in terms of memory/CPU of the Docker daemon itself are negligible, even on a Pi 3. Apart from that, your containerized processes will consume resources just as they would otherwise. Avoiding it purely on that basis (and taking on all of the additional complexity of deploying applications like this from scratch) is, in my opinion, misguided (there are better uses of your time, I'm sure). You can even run the same containers using something like podman instead of Docker, and avoid the need for a daemon entirely.
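As a rough sketch, the official container runs under podman much like it does under Docker, with no daemon involved (config path and timezone here are placeholders):

```bash
# Roughly how the official image runs under podman, no daemon required.
# The config path and timezone are placeholders.
podman run -d --name homeassistant \
  --network=host \
  -e TZ=Europe/Berlin \
  -v /PATH_TO_YOUR_CONFIG:/config \
  ghcr.io/home-assistant/home-assistant:stable
```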

(And if we're bringing in credentials: I, too, am a software engineer who works with containers daily, I started working with Docker professionally about 12 years ago, and I've been using it to deploy my "homelab" services on low-power ARM boards for years.)

It's not about the daemon, it's about dependencies, specifically shared libs. Since it's very unlikely that every Docker image will have the exact same version of every .so my host OS uses (or even the same as other Docker containers), those libs cannot be shared and end up loaded as (slightly different) duplicates. If, on the other hand, I rebuild Core myself, I choose which dependencies I build it against; as long as the APIs match, I can force everything to the same lib revisions, so they can be shared across processes with my host OS and whatever else I might be running alongside. And that saves memory, quite a lot actually, if you are on a resource-limited SBC.
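If you want to check this on your own box, you can look at which shared objects a running Home Assistant process has mapped (the pgrep pattern is an assumption; adjust it to however your instance is started):

```bash
# List the shared libraries mapped by a running Home Assistant process.
# The pgrep pattern is an assumption; adjust it to your setup.
pid=$(pgrep -f homeassistant | head -n1)
pmap -x "$pid" | grep '\.so' | head
```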

Sure you can.
Or you use the time for some paid overtime, buy an SBC with more RAM and spend the rest of the money on a dinner with your wife. :wink: