Feedback requested: Deprecating Core, Supervised, i386, armhf & armv7

Developers need the Supervisor too, so there will probably be developer guides for setting up an HA development environment, but it will not be a matter of simple apt install commands.

I fully disagree with your statement “nearly the same as today”.

We 100% agree on the explanations you give below. Everybody should decide for themselves whether this really means no relevant change for them.

I believe many people will make the wrong initial decision if they are given recommendations like yours.

People need to read what it really means and reconsider their choice in light of this decision. For many supervised users today, it will no longer be a reasonable choice. We should help people consider bigger changes based on this proposal / upcoming decision. Don’t just keep them in their comfort zone with statements like “nearly the same as today”.

Then it was not a reasonable choice when they first took it, and that is what this proposal tries to solve.
I advise against the supervised installation as much as I possibly can on the forums.
I have run it myself for a couple of years and I switched to a VM with HAOS and a VM with my other services.
HAOS is easily maintained that way, and the other stuff I want to do I can do in that other VM, which HA then just connects to.

The supervised installation looked nice until the switch to Bookworm and the jump to Python 3.13 came along.
Many users do not see those tasks coming when they choose the supervised installation, and it comes back to bite them.

This is your personal opinion based on your own experiences.
For me, it was absolutely the right decision. I have been running this setup for multiple years. I never even had to think about reverse engineering Home Assistant, nor was it ever necessary.

So for me, this decision is a very large change. Please accept that, and also that others will be affected similarly. Don’t judge everyone else from your personal point of view.

2 Likes

That is what this discussion is about. It will be up to users to come up with and maintain a supervised installer.

Regarding your comments about “Just keep running unsupported”: many people in this thread are currently running unsupported methods. That installer is not guaranteed to work in those unsupported configurations. It’s currently not even guaranteed to work during an OS update on the supported OS; I had to make sweeping changes during the Debian 11 → 12 update outside that install script. That’s why supervised is listed as the hardest install method. It’s not that installing is hard; it’s that maintaining it in a supported state is hard, even with the currently supplied tools.

So, when it comes to light that someone is running supervised in an unsupported fashion, that means they’ve already taken the risks to get it running now. And they should be willing to continue to take risks to keep it running if they want to use this install method (after it’s deprecated). I.e. Just keep running unsupported.

1 Like

I tried to explain above, in detailed technical terms, why this recommendation is wrong for my personal situation.
I would very nearly have followed this recommendation if I had not dug into what the decision really means (as linked above, the answer is hidden in a comment of the architecture discussion).

That’s why I wanted to make it transparent here and ask for a bit more careful guidance from you.

But I clearly got your point.

You asked for feedback. I gave feedback on how you respond to people’s feedback. It’s up to you to consider it or not.
I am good. I can and should stop this discussion at this point. :slight_smile:

a) If statistics reporting is opt-in, then the real numbers may be heavily skewed relative to those reports. By how much? We don’t know; maybe by as much as Container’s share.
b) Depending on your country or region, a UPS may not help much. How big a UPS? How much runtime? The problem isn’t necessarily solvable with a clean shutdown if the system is getting stuck on docker pull.

In some countries, power outages are daily and internet outages are seasonal [especially if political].

This has been covered previously in the posts above.

Which was why I asked if they had considered it as a solution, not demanded that it be the solution.

1 Like

I’m not affected with my HA, and I did not read all those 270+ comments. Just three things that instantly came to my mind when reading the OP that I would like to share:

  1. I recently reinstalled another home server, bare metal. What a pain in the evenings, so time-consuming. It took me 4 weekends and I am still fighting with a few things: several services installed 12 years ago, poor documentation, and migration documentation for only a handful of them. Besides the fact that I mostly like HA OS, going the Supervised (or even Core) path never was and never will be an option for me. Luckily I chose that path right from the beginning, even though the hardware choice was a bad one (a Pi 3B+ with only 1 GB of RAM, hell that was as unstable as possible). Props to everyone willing to invest the time in that underlying layer just to run HA (Supervised/Core), instead of using that time to build great smart home stuff with HA.
  2. I’m confused about the numbers referenced as facts: are they from the opt-in statistics? Maybe we don’t know the actual numbers; maybe some people do, because components being pulled from the download servers give a good (better) indication; maybe they are almost the same. I think that while it’s really hard to verify / falsify those statistics, it’s a matter of fact that virtualization, containerization and appliance-ization (not a real English word, right :wink: ) have proved to outperform the rather old-school methods.
  3. +1 for early migration hints. Likely as simple as „choose your new solution and restore a full backup“.
1 Like

I understand and agree with this. Maintaining old or barely used platforms is time-consuming. It’s way better to spend developers’ time and resources on enhancing the major platforms.

But, taking this opportunity: I know this has been talked about thousands of times in this forum, but I personally have never seen an official answer from the HA team. Is there any way to have an add-on store on HA Container? It would be awesome to run Home Assistant Container and also benefit from its add-ons. I know Docker is now standardized, and it wouldn’t be too difficult to implement a Docker orchestrator inside HA (which Supervised already does, but with a lot of requirements and problems).

If we can run Home Assistant Container on any Linux distro that meets the hardware requirements, and we can run most if not all add-on software in containers ourselves, what is the huge problem preventing the HA team from making HA Container launch add-ons side by side? A simple standardized docker-compose.yml file would be able to do that, right?

If the problem is about file and directory mappings between containers, once more, it’s only a matter of describing what needs to be shared for this to work. Docker supports sharing data via both bind mounts and named volumes.
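To make the idea concrete, here is a minimal sketch (my own illustration, not an official mechanism; the image tags, paths and the chosen “add-on” are placeholders) of HA Container next to an add-on-style service, with data shared via bind mounts:

```yaml
# docker-compose.yml - illustrative sketch only, adapt paths and tags
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host              # simplest way for HA to reach/discover devices
    volumes:
      - ./homeassistant/config:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped

  mosquitto:                        # stands in for an "add-on", here an MQTT broker
    image: eclipse-mosquitto:2
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    restart: unless-stopped
```

What a plain compose file cannot replicate is the Supervisor glue: ingress, watchdogs, and add-ons being included in HA backups. That is the part the add-on store really provides on top of the upstream containers.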

The big thing here is that not everyone has a dedicated machine to run HA, so using HA Container, especially from now on, will be the best option. But then the user loses the add-ons. And if someone wants to run an add-on “manually”, they need to build and configure a Docker container by hand, and some add-ons don’t even have a “standalone” version, like Google Drive Backup for instance.

You have Docker Hub. Nearly all add-ons are just versions of these containers made to work with the supervisor.

1 Like

On Linux or Windows you can run HAOS side by side with your other applications. Use KVM or LXC inside a Linux distro if you cannot invest in a dedicated machine.
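For example, on a Linux host with libvirt/KVM installed, importing the HAOS image is roughly this (a sketch only; the image path, VM name, sizes and bridge name are placeholders to adapt):

```sh
# Download and unpack the HAOS qcow2 (KVM) image first, then import it.
# HAOS expects UEFI boot; the values below are examples, not recommendations.
virt-install \
  --name haos \
  --memory 4096 \
  --vcpus 2 \
  --disk /var/lib/libvirt/images/haos_ova.qcow2,format=qcow2 \
  --import \
  --boot uefi \
  --os-variant generic \
  --network bridge=br0 \
  --graphics none \
  --noautoconsole
```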

The tools are already there to use, instead of inventing yet another management layer which has to be maintained by the HA team long term.

Less is more in any engineering endeavour (even in SW engineering).

But a year later, when HA deprecates Debian 12 in favour of Debian 13, it crashes, a little knowledge is not enough anymore, and there will be no help, because that extra knowledge is needed to gather the troubleshooting data and determine the problem.
Later, Python 3.13 is deprecated and the issue is the same.

Those requirements are there to run the Supervisor with its Docker containers as add-ons.

What could be counted as numbers for people not sharing their statistics (see the sketch after this list):

  • For instances that use the Nabu Casa login
  • The health statistics for each setup: if any of these are “True” or “Ok”, the setup could just be counted as an instance:
    • Logged In false
    • Reach certificate server ok
    • Reach authentication server ok
    • Reach Home Assistant Cloud ok
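Purely as an illustration of the tally I mean (made-up field names and data, not the real telemetry schema), counting a setup as an instance when any of those checks is positive could look like this:

```python
# Hypothetical illustration: field names and data are invented for this sketch.
from typing import Iterable, Mapping


def count_active_instances(health_reports: Iterable[Mapping[str, str]]) -> int:
    """Count reports where at least one connectivity check is positive."""
    checks = (
        "logged_in",
        "reach_certificate_server",
        "reach_authentication_server",
        "reach_home_assistant_cloud",
    )
    positive = {"true", "ok"}
    return sum(
        1
        for report in health_reports
        if any(str(report.get(key, "")).lower() in positive for key in checks)
    )


# Made-up example: the first setup never logs in but still reaches the
# certificate server, so it would be counted as an instance.
reports = [
    {"logged_in": "false", "reach_certificate_server": "ok"},
    {"logged_in": "false", "reach_certificate_server": "failed"},
]
print(count_active_instances(reports))  # -> 1
```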

I’ve been reading this thread with some amusement. It appears that many (maybe most?) don’t really know how they are running Home Assistant.

You will never get more than a sample of usage, because people are either paranoid about sharing analytics or don’t care. I don’t recall: is it opt-out or opt-in?

1 Like

It is an opt-in

1 Like

I am against opt-out. It should be disabled by default, as it is today.

4 Likes

Then nothing remains of the Open Home Foundation principles. If they go through with the proposed changes, they can scrap ‘choice’ and ‘sustainability’. If they make statistics opt-out, they can scrap ‘privacy’ too.

2 Likes

Similar metrics are probably already being considered; I assume that’s where the “we have 4 times the measured installation” number comes from. It doesn’t give you any details about those installations, though.